OpenNebula 5.7 Deployment Guide, Release 5.7.80

Total Pages: 16

File Type: PDF, Size: 1020 KB

OpenNebula 5.7 Deployment Guide, Release 5.7.80
OpenNebula Systems
Jan 16, 2019

This document is being provided by OpenNebula Systems under the Creative Commons Attribution-NonCommercial-ShareAlike License. THE DOCUMENT IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DOCUMENT.

CONTENTS

1 Cloud Design .... 1
    1.1 Overview .... 1
    1.2 Open Cloud Architecture .... 2
    1.3 VMware Cloud Architecture .... 7
    1.4 OpenNebula Provisioning Model .... 13
2 OpenNebula Installation .... 19
    2.1 Overview .... 19
    2.2 Front-end Installation .... 19
    2.3 MySQL Setup .... 25
3 Node Installation .... 27
    3.1 Overview .... 27
    3.2 KVM Node Installation .... 28
    3.3 LXD Node Installation .... 35
    3.4 vCenter Node Installation .... 37
    3.5 Verify your Installation .... 45
4 Authentication Setup .... 52
    4.1 Overview .... 52
    4.2 SSH Authentication .... 53
    4.3 x509 Authentication .... 55
    4.4 LDAP Authentication .... 58
5 Sunstone Setup .... 64
    5.1 Overview .... 64
    5.2 Sunstone Installation & Configuration .... 65
    5.3 Sunstone Views .... 72
    5.4 Labels .... 77
    5.5 User Security and Authentication .... 86
    5.6 Cloud Servers Authentication .... 91
    5.7 Configuring Sunstone for Large Deployments .... 94
6 VMware Infrastructure Setup .... 100
    6.1 Overview .... 100
    6.2 vCenter Driver .... 100
    6.3 vCenter Datastores .... 127
    6.4 Datastore clusters with Storage DRS .... 134
    6.5 Marketplace with vCenter Datastores .... 135
    6.6 Tuning and Extending .... 135
    6.7 vCenter Networking Overview .... 135
    6.8 Consuming existing vCenter port groups .... 136
    6.9 Creating Port Groups from OpenNebula .... 136
    6.10 Network monitoring .... 142
7 Open Cloud Host Setup .... 143
    7.1 Overview .... 143
    7.2 KVM Driver .... 143
    7.3 LXD Driver .... 154
    7.4 Monitoring .... 159
    7.5 PCI Passthrough .... 163
8 Open Cloud Storage Setup .... 170
    8.1 Overview .... 170
    8.2 Filesystem Datastore .... 172
    8.3 Ceph Datastore .... 177
    8.4 LVM Datastore .... 182
    8.5 Raw Device Mapping (RDM) Datastore .... 185
    8.6 iSCSI - Libvirt Datastore .... 187
    8.7 The Kernels & Files Datastore .... 189
9 Open Cloud Networking Setup .... 191
    9.1 Overview .... 191
    9.2 Node Setup .... 192
    9.3 Bridged Networking .... 193
    9.4 802.1Q VLAN Networks .... 195
    9.5 VXLAN Networks .... 196
    9.6 Open vSwitch Networks .... 198
    9.7 Open vSwitch on VXLAN Networks .... 200
10 References .... 202
    10.1 Overview .... 202
    10.2 ONED Configuration .... 202
    10.3 Logging & Debugging .... 218
    10.4 Onedb Tool .... 221
    10.5 Large Deployments .... 225

CHAPTER ONE

CLOUD DESIGN

1.1 Overview

The first step in building a reliable, useful and successful cloud is to settle on a clear design. This design needs to be aligned with the expected use of the cloud, and it needs to describe which data center components are going to be part of it. This comprises i) all the infrastructure components, such as networking, storage, authorization and virtualization back-ends; ii) the planned dimension of the cloud (characteristics of the workload, number of users and so on); and iii) the provisioning workflow, i.e., how end users are going to be isolated and how they will use the cloud.

In order to get the most out of an OpenNebula cloud, we recommend that you create a plan with the features, performance, scalability, and high availability characteristics you want in your deployment. This Chapter provides information to plan an OpenNebula cloud based on KVM, LXD or vCenter. With this information, you will be able to easily architect and dimension your deployment, as well as understand the technologies involved in the management of virtualized resources and their relationships.

1.1.1 How Should I Read This Chapter

This is the first Chapter to read, as it introduces the concepts needed to correctly define a cloud architecture. Within this Chapter, as a first step, a design of the cloud and its dimension should be drafted. For KVM and LXD clouds proceed to Open Cloud Architecture, and for vCenter clouds read VMware Cloud Architecture.

Then read the OpenNebula Provisioning Model to identify the model you want for provisioning resources to end users. In a small installation with a few hosts, you can skip this provisioning model guide and use OpenNebula without giving much thought to infrastructure partitioning and provisioning. But for medium and large deployments you will probably want to provide some level of isolation and structure.

Once the cloud architecture has been designed, the next step is to learn how to install the OpenNebula front-end.
1.1.2 Hypervisor Compatibility

Section                          Compatibility
Open Cloud Architecture          This Section applies to KVM and LXD.
VMware Cloud Architecture        This Section applies to vCenter.
OpenNebula Provisioning Model    This Section applies to both KVM and vCenter.

1.2 Open Cloud Architecture

Enterprise cloud computing is the next step in the evolution of data center (DC) virtualization. OpenNebula is a simple but feature-rich and flexible solution to build and manage enterprise clouds and virtualized DCs that combines existing virtualization technologies with advanced features for multi-tenancy, automatic provisioning and elasticity. OpenNebula follows a bottom-up approach driven by the real needs of sysadmins, devops and users.

1.2.1 Architectural Overview

OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture, with a front-end and a set of hosts where Virtual Machines (VMs) will be executed. There is at least one physical network joining all the hosts with the front-end.

A cloud architecture is defined by three components: storage, networking and virtualization. Therefore, the basic components of an OpenNebula system are:

• Front-end that executes the OpenNebula services.
• Hypervisor-enabled hosts that provide the resources needed by the VMs.
• Datastores that hold the base images of the VMs.
• Physical networks used to support basic services such as interconnection of the storage servers and OpenNebula control operations, and VLANs for the VMs.

OpenNebula presents a highly modular architecture that offers broad support for commodity and enterprise-grade hypervisor, monitoring, storage, networking and user management services. (A minimal sketch of enumerating these components through the front-end's API appears at the end of this Section.) This Section briefly describes the different choices that you can make for the management of the different subsystems. If your specific services are not supported, we recommend checking the drivers available in the Add-on Catalog. We also provide information and support about how to develop new drivers.

1.2.2 Dimensioning the Cloud

The dimension of a cloud infrastructure can be directly inferred from the expected workload, in terms of the VMs that the cloud infrastructure must sustain. This workload is often tricky to estimate, but doing so is a crucial exercise for building an efficient cloud. The main aspects to take into account when dimensioning the OpenNebula cloud follow.

OpenNebula front-end

The minimum recommended specs for the OpenNebula front-end are:

Resources    Minimum Recommended Configuration
Memory       2 GB
CPU          1 CPU (2 cores)
Disk Size    100 GB
Network
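As a quick sanity check against the table above, a candidate front-end's actual resources can be inspected before installation. The following is a minimal, Linux-only sketch using just the Python standard library; the /var/lib/one path is the default location of OpenNebula state and is an assumption about your layout:

    import os
    import shutil

    # Minimums from the table above.
    MIN_MEM_GB, MIN_CORES, MIN_DISK_GB = 2, 2, 100

    # Total physical memory (Linux): page size times number of physical pages.
    mem_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3

    # Logical cores visible to the OS.
    cores = os.cpu_count()

    # Free space on the filesystem that will hold OpenNebula state;
    # /var/lib/one is the default, fall back to / if it does not exist yet.
    state_dir = "/var/lib/one" if os.path.exists("/var/lib/one") else "/"
    disk_gb = shutil.disk_usage(state_dir).free / 1024**3

    print(f"memory: {mem_gb:.1f} GB (minimum {MIN_MEM_GB})")
    print(f"cores:  {cores} (minimum {MIN_CORES})")
    print(f"disk:   {disk_gb:.1f} GB free on {state_dir} (minimum {MIN_DISK_GB})")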
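To make the dimensioning exercise concrete, the number of hypervisor hosts can be estimated from the expected VM workload. This is a back-of-the-envelope sketch, not OpenNebula tooling; every figure below (workload, host profile, overcommit ratio, headroom) is an illustrative assumption to be replaced with your own estimates:

    import math

    # Illustrative workload assumptions.
    expected_vms = 400      # peak number of concurrent VMs
    vm_vcpus = 2            # average vCPUs per VM
    vm_mem_gb = 4           # average memory per VM, in GB

    # Illustrative host profile and scheduling policy.
    host_cores = 32         # physical cores per host
    host_mem_gb = 256       # usable memory per host, in GB
    cpu_overcommit = 4.0    # vCPUs scheduled per physical core
    headroom = 0.9          # keep 10% of every host in reserve

    # The fleet must satisfy the tighter of the two constraints.
    hosts_by_cpu = expected_vms * vm_vcpus / (host_cores * cpu_overcommit * headroom)
    hosts_by_mem = expected_vms * vm_mem_gb / (host_mem_gb * headroom)  # memory is rarely overcommitted

    hosts = math.ceil(max(hosts_by_cpu, hosts_by_mem))
    print(f"CPU-bound estimate:    {hosts_by_cpu:.1f} hosts")
    print(f"memory-bound estimate: {hosts_by_mem:.1f} hosts")
    print(f"provision at least {hosts} hosts, plus spares for failure tolerance")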
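Finally, as noted in Section 1.2.1, the components the front-end manages (hosts, datastores, virtual networks) are exposed through a single XML-RPC endpoint, which is what the CLI and Sunstone themselves use. A minimal sketch of listing the host pool follows; the host name and credentials are placeholders, 2633 is the default oned XML-RPC port, and the one.hostpool.info call should be checked against the API reference for your release:

    import xmlrpc.client

    # Assumed endpoint and credentials -- adjust for your deployment.
    # The session string is the "user:password" pair of an OpenNebula account.
    ONE_XMLRPC = "http://frontend.example.com:2633/RPC2"
    SESSION = "oneadmin:opennebula"  # hypothetical credentials

    server = xmlrpc.client.ServerProxy(ONE_XMLRPC)

    # one.hostpool.info returns an array whose first element is a success
    # flag and whose second is either the HOST_POOL XML or an error string.
    response = server.one.hostpool.info(SESSION)
    success, body = response[0], response[1]

    if success:
        print(body)  # XML description of every host known to the front-end
    else:
        print("request failed:", body)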