Welcome to EMC Storage Integration with VMware vSphere Best Practices. Click the Notes tab to view text that augments the audio recording. Click the Supporting Materials tab to download a PDF version of this eLearning.

Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C‐Clip, , Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, , ClientPak, Codebook Correlation Technology, Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz, DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E‐Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, , HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM‐Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other countries.

All other trademarks used herein are the property of their respective owners.

© Copyright 2013 EMC Corporation. All rights reserved. Published in the USA.

Revision Date: November 2013 Revision Number: MR‐1WP‐EMCSTORVMWBP

This course covers the considerations and ramifications of implementing EMC storage arrays in a vSphere vCenter-managed ESXi host environment.

Before addressing the considerations and best practices for implementing storage in an ESXi environment, we must consider how resources are consumed on ESXi and their overall impact. Storage presentation is only one aspect of any ESXi configuration; another key factor is the storage array connectivity infrastructure. There are many dependencies among these aspects, so it is important to discuss how these seemingly unrelated topics can affect a storage deployment goal.

In lesson one we will explore some of the general environment considerations needed for implementing a VMware solution.

VMware offers a diverse product line, so it is important to clarify what is expected in a vSphere offering. VMware features are governed by the license obtained; if a VMware feature or product is not licensed, it will not be supported. In EMC engagements, it is common to expect the ESXi host to have an Enterprise Plus license and to support all the features of that license.

I/O is the major metric by which storage arrays, applications, and interconnect infrastructures are evaluated. These metrics are often documented in Service Level Agreements (SLAs), which are goals that have been agreed to and must be achieved and maintained. Because storage plays a very large part in any computing solution, there are many aspects that must be considered when implementing these SLAs and defining the service requirements of the solution. Some of these considerations include, but are not restricted to:
• Connectivity infrastructure
• Physical cables
• Protocols
• Array type
• Physical architecture: cache, buses, spinning or solid-state drives, and software enhancements
• Disk connectivity interfaces: FC, SAS, SATA

Prior to focusing on storage array presentation, it is important to consider the storage presentation models supported by VMware. Presenting a virtual machine with local storage is possible in most deployments, but it is not a consideration of this course. When local storage is used, it can restrict some of the functionality expected in an Enterprise-level deployment; for example, a local storage datastore can only be accessed by one host and is usually a single point of failure (SPOF) in the environment. Storage array presentation is the preferred method of storage presentation in an Enterprise environment, as this model is typically designed to meet the specific needs of an application workload or SLA. However, the final configuration of a solution is not restricted to a single type of configuration; most environments comprise a mix of array-based and local storage, depending on which best meets the solution's expectations.

With any environment there are many factors affecting the infrastructure. Depending upon the considerations and their importance, the design could change radically from the one originally envisioned. The goal of any infrastructure design is to achieve the highest possible success in meeting the majority of the demands expected. This means that compromise and segmentation of purpose are always factors in design, along with the points listed here.

The key points for LUN utilization are:
• Choose a RAID level and disk type that match the specific workload proposed for the LUN.
• Each LUN should contain only a single VMFS datastore, to segment the workload characteristics of different virtual machines and prevent resource contention. If multiple virtual machines do access the same VMFS datastore, using disk shares to prioritize virtual machine I/O is recommended.

Any solution can use various combinations of LUN presentation models; both large and small LUNs can be presented. One reason to create fewer, larger LUNs is to provide more flexibility to create VMs without storage administrator involvement. Other reasons are more flexibility for resizing virtual disks and snapshots, and fewer VMFS datastores to manage. A reason to create smaller LUNs is to waste less storage space, because growth headroom built into a large LUN is removed from the array's global pool of storage. Smaller LUNs may also be preferred when the environment requires many differing performance profiles along with varied RAID levels.

One of the key concerns in any storage array architecture is latency. Latency always causes performance degradation and should be minimized as much as possible. There are many areas that introduce latency, but some general rules can be applied to start reducing its impact. One of the first considerations is the use of Flash drives with the vSphere Flash Infrastructure for host swap files and the use of vSphere Flash Read Cache (vFRC). Another key consideration is the use of storage arrays that support the vStorage APIs for Array Integration (VAAI). VAAI greatly enhances the performance of any infrastructure by offloading operations to native array tools and functionality, freeing ESXi resources for other task processing.

The final, and possibly the most recognizable, storage infrastructure consideration is overall solution bandwidth. This refers not only to interconnect bandwidth, but to internal array and server bus bandwidth as well. Any form of contention on a connectivity pipe reduces performance and, in turn, reduces SLA compliance. Both the traditional networking infrastructure and the storage networking infrastructure must be addressed. As a rule, workload segmentation is required to minimize resource contention, but other methods can be used when physical segmentation is not immediately possible, such as peak-time bandwidth throttling of non-critical applications or application scheduling. With global enterprises, however, these options are not always viable alternatives to segmentation. Another source of resource contention is access to the actual disk by the server threads. This is generally governed by the HBA queue depth, which is the number of pending I/O requests to a volume. By ensuring that it is set to the maximum permissible limit, there should be no throttling of the I/O stream; however, the underlying infrastructure must be able to support the configured number, or further congestion will occur at the volume level due to an overrun of the I/O stream.
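To make the queue depth point concrete, the short Python sketch below (not part of the original course material) applies Little's Law, IOPS ≈ outstanding I/Os ÷ service time, using an assumed per-I/O service time and candidate queue depths; the figures are illustrative only.

    # Rough Little's Law sketch: IOPS ceiling ~= outstanding I/Os / service time.
    # All numbers are illustrative assumptions, not measured values.

    def max_iops(queue_depth: int, service_time_ms: float) -> float:
        """Upper bound on IOPS a single LUN path can sustain."""
        return queue_depth / (service_time_ms / 1000.0)

    if __name__ == "__main__":
        service_time_ms = 5.0          # assumed per-I/O service time at the array
        for depth in (8, 32, 64):      # candidate HBA/LUN queue depths
            print(f"queue depth {depth:>3}: ~{max_iops(depth, service_time_ms):,.0f} IOPS ceiling")

A deeper queue allows more I/O in flight and a higher throughput ceiling, which is why an undersized queue depth throttles the stream even when the array itself has headroom.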

The host system bus is also a concern. The local server architecture could cause a bottleneck, and the resulting performance degradation would occur before the I/O even leaves the host. If this is the case, no amount of external environment tuning will improve performance. Knowing the relative expected I/O transfer rates of the server buses provides a baseline from which other performance figures can be determined.
• Different PCI-X specifications allow different rates of data transfer, anywhere from 512 MB to 1 GB of data per second.
• Oracle's Sun StorageTek Enterprise 4 Gb/s Fibre Channel PCI-X Host Bus Adapter (HBA) is a high-performance 4/2/1 Gb/s HBA capable of throughput rates up to 1.6 GB/s (dual port) in full-duplex mode.
It can be seen here that the physical limitations of the Fibre Channel medium are not challenged until we try to push multiple connections of this type through a single connection; hence the segmentation of workload and the use of dual-port connectivity. VMware Storage vMotion enables live migration of running virtual machine disk files from one storage location to another with no downtime or service disruption. When using Storage vMotion, the available storage infrastructure bandwidth is therefore of key importance.
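The bus-versus-link arithmetic quoted above can be sketched as follows; the rates are the nominal figures from the text, and real-world throughput will be lower once protocol overhead is included.

    # Compare nominal server bus bandwidth with Fibre Channel link rates
    # to see where the first bottleneck appears. Figures are nominal (GB/s).

    PCIX_BUS_GBPS = 1.0          # upper end of the PCI-X range quoted above
    FC_4G_LINK_GBPS = 0.4        # ~400 MB/s per direction for a 4 Gb/s FC link
    DUAL_PORT_FULL_DUPLEX = 4 * FC_4G_LINK_GBPS   # 2 ports x 2 directions ~= 1.6 GB/s

    print(f"PCI-X bus ceiling      : {PCIX_BUS_GBPS:.1f} GB/s")
    print(f"One 4 Gb/s FC link     : {FC_4G_LINK_GBPS:.1f} GB/s per direction")
    print(f"Dual-port, full duplex : {DUAL_PORT_FULL_DUPLEX:.1f} GB/s aggregate")
    print("Bus-limited" if DUAL_PORT_FULL_DUPLEX > PCIX_BUS_GBPS else "Link-limited")

In this example the aggregate HBA capability exceeds the older bus's ceiling, which is exactly the situation where tuning the fabric alone cannot help.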

In this lesson we will examine the importance of ESXi host networking. This lesson reviews specific network options, configurations, and ramifications with an emphasis on storage considerations.

The network capability of an ESXi host plays a major role in presenting array-based storage. The protocols used to access storage do not all provide equal performance levels; cost, throughput, and distance all play a role in any solution. When a network infrastructure is used, it can provide valuable metrics when measuring array performance. If the network infrastructure becomes congested, storage array performance will appear to suffer: the array remains capable of meeting SLA standards, but it is not being supplied with enough data to process to maintain its SLA requirements. It is important to always measure and account for performance end to end rather than focusing on just one component of the infrastructure. When presenting storage over a network, it is always recommended to isolate traffic with differing performance profiles wherever possible. This guarantees known or expected performance levels from specific interconnects, which becomes increasingly important with profile-based storage presentation.

ESXi general SAN considerations are very straightforward:
• As with any technology solution, the components should all be at compatible software and hardware levels to meet the proposed solution requirements.
• Diagnostic partitions should not be configured on SAN LUNs. This 110 MB partition is used to collect core dumps for debugging and technical support, and in the event of a failure the data may not be successfully copied to the array (depending on the failure). If diskless servers are used, a shared diagnostic partition should be used, with sufficient space configured to contain the information from all connected servers.
• For multipathing to work correctly, a LUN should be presented to all the relevant ESXi servers with the same LUN ID.
• The HBA queue depth should be configured to prevent I/O congestion and throttling to connected volumes.
A couple of ESXi SAN restrictions:
• Fibre Channel connected tape drives are not supported.
• Multipathing software at the guest OS level cannot be used to balance I/O to a single LUN.

Fibre Channel LUNs are presented to an ESXi host as block-level devices and can either be formatted with VMFS or used as an RDM. All devices presented to the ESXi host are discovered at boot or when a rescan is performed. VMware has predefined connection guidelines, but it is up to the storage vendor, EMC in this case, to determine the best practices for integration with the virtualized infrastructure. With Fibre Channel SANs, the rule is to use single-initiator, single-target zoning with two zones per initiator, and to always ensure consistent speeds end to end.
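As an illustration of what single-initiator, single-target zoning with two zones per initiator produces, the following Python sketch builds the zone list from made-up WWPNs; actual zoning is performed on the fabric switches, not scripted this way.

    # Sketch of single-initiator, single-target zoning: each HBA port is zoned
    # to exactly one array port per zone, two zones per initiator (one per SP
    # port on that initiator's fabric). WWPNs are placeholders.

    initiators = {
        "esxi01_hba0": ("fabric_A", "10:00:00:00:c9:aa:aa:01"),
        "esxi01_hba1": ("fabric_B", "10:00:00:00:c9:aa:aa:02"),
    }
    targets = {
        "fabric_A": [("SPA_0", "50:06:01:60:00:00:00:01"), ("SPB_0", "50:06:01:68:00:00:00:01")],
        "fabric_B": [("SPA_1", "50:06:01:61:00:00:00:01"), ("SPB_1", "50:06:01:69:00:00:00:01")],
    }

    for hba, (fabric, hba_wwpn) in initiators.items():
        for tgt, tgt_wwpn in targets[fabric]:      # two targets -> two zones per initiator
            print(f"{fabric}: zone {hba}__{tgt}: {hba_wwpn}, {tgt_wwpn}")

Each resulting zone contains exactly two members, which keeps troubleshooting and change control focused on one initiator-target pair at a time.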

If you are considering an N-Port ID Virtualization (NPIV) environment, keep in mind:
• It can only be used by virtual machines with RDM disks.
• The HBAs must all be of the same type/manufacturer.
• The NPIV LUN number and target ID must be the same as the physical LUN number and target ID.
• Only vMotion is supported, not Storage vMotion.

The ideal Fibre Channel environment contains:
• No single point of failure
• An equal load distribution
• Devices presented that match the intended utilization profile
• Single-initiator, single-target fabric zoning, which reduces problems and configuration issues through its focused deployment

The emerging enterprise connectivity protocol, Fibre Channel over Ethernet (FCoE), is supported by two implementation methodologies. The first is a Converged Network Adapter (CNA), referred to as the hardware methodology; the second is a software FCoE adapter. When a CNA is used, the network adapter is presented to ESXi as a standard network adapter and the Fibre Channel adapter is presented as a host bus adapter. This allows the administrator to configure connectivity to both the network and the Fibre Channel infrastructures in the traditional way, without any specialized configuration requirements beyond the specialized networking infrastructure components, e.g., FCoE switches. The software adapter uses a specialized NIC that supports Data Center Bridging and I/O offload to communicate with the respective storage infrastructures, once again via specialized switches. At present there is no FCoE pass-through to the guest OS level.

As a general rule, for optimal performance and configurability any NIC used in the ESXi environment should have the feature set shown on the slide. Supporting these features enables an administrator to configure and tune connectivity to produce optimal throughput, offload work to the network interface card, and free up server cycles for other tasks.

IP-based storage devices are discovered through the Ethernet network. The ESXi host must be configured with a VMkernel port, which includes an IP address, to provide a communication path for iSCSI and NFS traffic, unless DirectPath I/O is being configured. While the VMkernel port serves many functions in an ESXi host, we only discuss the IP storage role in this course. It is always recommended that the network segment carrying this traffic be private and not routed; VLAN tagging is a permitted alternative but not preferred. An IP storage transaction can be described by the following steps:
• Inside the virtual machine, the operating system sends a write request.
• The write request is processed through the virtual machine monitor to the VMkernel.
• The VMkernel passes the request through the VMkernel port created by the administrator.
• The request traverses the Ethernet network.
• The write request arrives at the storage array.
The array's response follows the inverse of this process.

When trying to ensure a solid Ethernet network connection, there are proactive strategies that can help. Avoid contention between the VMkernel port and virtual machine network traffic. This can be done by placing them on separate virtual switches and ensuring each is connected to its own physical network adapter. Be aware of physical network constraints, because logical segmentation does not solve the problem of physical oversubscription. Application workload profiles can be monitored to prevent excessive oversubscription of shared resources.

Monitor the CPU utilization of high-throughput workloads, as CPU saturation can limit the maximum network throughput. Virtual machines that reside on the same ESXi host and communicate with each other should be connected to the same virtual switch; the VMkernel then processes the transaction internally, avoiding physical network traffic and the unnecessary CPU overhead associated with it. Ensure virtual machines that require low network latency use the VMXNET3 virtual network adapter.

The ideal Ethernet network environment can be a difficult goal to achieve, because Ethernet networks carry traffic for many more types of communication than just storage. This slide contains ideal characteristics that reduce network-related challenges. Addressing these concerns is typically done with the corporate network administrator.

Using jumbo frames can be an important network consideration. If you adjust the maximum transmission unit (MTU) size to 9000, you must be sure that this size is supported end to end. If the same conditions are not met end to end, further challenges may be introduced into the network infrastructure, such as dropped packets, high retransmission rates, and inefficient use of packet size. Points to consider include:
• Virtual machine network adapter type
• Ethernet network devices such as switches and routers
• Virtual switch configuration
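A simple way to reason about the end-to-end requirement is to treat every hop as an inventory item and flag any hop whose MTU is below the jumbo frame size, as in this sketch; the device names and MTU values are invented for the example.

    # End-to-end MTU sanity check: every hop on the storage path must support
    # the jumbo frame size, or packets are dropped or fragmented.

    JUMBO_MTU = 9000
    path = {"vm vnic (vmxnet3)": 9000,
            "virtual switch":    9000,
            "VMkernel port":     9000,
            "physical switch":   1500,   # a hop that was never reconfigured
            "array storage port": 9000}

    too_small = [hop for hop, mtu in path.items() if mtu < JUMBO_MTU]
    if too_small:
        print("Jumbo frames NOT safe end to end; fix:", ", ".join(too_small))
    else:
        print("All hops support MTU", JUMBO_MTU)

A single hop left at the default 1500 bytes is enough to cause the symptoms listed above.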

The prerequisites are typically addressed and performed as part of an initial configuration and are infrequently modified. Both network connectivity and a configured VMkernel port are required (unless DirectPath I/O is used). Providing the ability to protect, aggregate, and load balance are important considerations for any network connection but are not addressed on this slide.

To access iSCSI targets, your host needs iSCSI initiators. The job of the initiator is to transport SCSI requests and responses, encapsulated in the iSCSI protocol, between the host and the iSCSI target. ESXi supports two types of initiators: software iSCSI and hardware iSCSI. A software iSCSI initiator is VMware code built into the VMkernel; with it, you can use iSCSI without purchasing any additional equipment. Hardware iSCSI initiators are divided into two categories:
• Dependent hardware iSCSI
• Independent hardware iSCSI
A dependent hardware iSCSI adapter depends on VMware networking and on iSCSI configuration and management interfaces provided by VMware. An independent hardware iSCSI adapter handles all iSCSI and network processing and management for your ESXi host.

The prerequisites are typically addressed and performed as part of an initial configuration and are infrequently modified. Both network connectivity and a configured VMkernel port are required (unless DirectPath I/O is configured).

This lesson reviews specific storage options, configurations, and ramifications with an emphasis on the ESXi storage environment.

When presenting storage to a virtual machine, it is important to understand that there are many methodologies by which this can be achieved. This slide illustrates the most common presentation style: the array presents storage to the ESXi host, and the VMkernel facilitates and allocates what is required to the virtual machine. When VMware facilitates the presentation of storage to a VM, it can provide features in addition to simple storage presentation, because all storage requests pass through the VMkernel.

Virtual machine file systems are created on block devices. VMFS is the most common format used in a VMware infrastructure; it is designed to store virtual machine files, templates, and ISO images. VMFS5 is the current version of the file system. When this space is presented to a virtual machine for consumption, it is done in one of the following formats:
• Thick, Lazy Zeroed: the virtual machine is presented with all the space it requires, but zeroing occurs as the virtual machine consumes the space, not at the time of volume creation.
• Thick, Eager Zeroed: the virtual machine is presented with all the space it requires, and that space is zeroed out prior to use by the virtual machine.
• Thin: the virtual machine can be presented with storage space that does not physically exist yet. It consumes space as needed, and only actual virtual machine data is reflected in its true size.
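The space-consumption difference between the three formats can be sketched as follows; the provisioned and written sizes are assumptions chosen only to illustrate the behavior.

    # Illustrative space accounting for the three VMDK formats.
    # provisioned = size presented to the VM; written = guest data actually written.

    def datastore_usage(fmt: str, provisioned_gb: float, written_gb: float) -> float:
        if fmt in ("thick-lazy", "thick-eager"):
            return provisioned_gb            # full size reserved at creation
        if fmt == "thin":
            return written_gb                # grows only as the guest writes
        raise ValueError(fmt)

    for fmt in ("thick-lazy", "thick-eager", "thin"):
        used = datastore_usage(fmt, provisioned_gb=100, written_gb=20)
        zeroed_up_front = (fmt == "thick-eager")
        print(f"{fmt:<12} consumes {used:>5.0f} GB on the datastore, pre-zeroed: {zeroed_up_front}")

Both thick formats reserve the full 100 GB; only eager-zeroed pays the zeroing cost up front, while thin consumes only what the guest has actually written.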

VMFS3, used by early versions of the ESXi host, is an older format. In many cases VMFS3 and VMFS5 can co-exist, so it is important to know the difference; VMFS5 should be used when possible. This slide lists the advantages of using the VMFS5 format. A detailed understanding of these versions is outside the scope of the class, but it is mentioned because, as an implementer, you may be faced with addressing this requirement.

VMFS3 and VMFS5 format maximums are listed on this slide. VMFS5 is more robust and should be used when possible.

When working with VMFS volumes, you need to be concerned about proper partition alignment; failure to address it can hurt performance in the environment. Misaligned partitions cause disk crossings (multiple physical disk accesses for a single transaction), which exacerbates the I/O penalty of each transaction. For the most part, partition alignment at the storage array level is addressed automatically when configured through VMware interfaces. The major concern is to ensure that the virtual machine guest operating system is aligned to the storage that has been presented. There are multiple methods by which this can be achieved, but they are outside the scope of this course. The goal is to understand that misalignment can be a reason why a virtual machine is performing poorly.
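The disk-crossing penalty can be illustrated with a small calculation: given an assumed array chunk size, a misaligned partition offset causes a single guest I/O to touch two back-end chunks instead of one. The offsets and chunk size below are illustrative.

    # How a misaligned partition turns one guest I/O into extra back-end chunk
    # accesses. Chunk size and offsets are illustrative.

    CHUNK_KB = 64          # assumed array stripe element / chunk size
    IO_KB = 64             # guest I/O size

    def chunks_touched(partition_offset_kb: float, io_start_kb: float) -> int:
        start = partition_offset_kb + io_start_kb
        end = start + IO_KB - 0.001
        return int(end // CHUNK_KB) - int(start // CHUNK_KB) + 1

    print("aligned    (offset 64 KB)  :", chunks_touched(64, 0), "chunk(s) per I/O")
    print("misaligned (offset 31.5 KB):", chunks_touched(31.5, 0), "chunk(s) per I/O")

The misaligned case touches two chunks for every 64 KB I/O, roughly doubling the back-end work for the same guest request.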

Boot partition alignment is neither recommended nor required by EMC. In Windows systems starting with Windows 2008, alignment is done by default. If Raw Device Mapping (RDM) is utilized, alignment will be required, and it is performed for the virtual machine the same way it would be for a physical machine.

The RDM storage presentation methodology reflects the fact that the VMkernel, while it may facilitate access to storage, is not necessarily involved in all aspects of its presentation. For example, you may want to present storage to a virtual machine that already has a disk signature without placing a VMFS signature on that device. In this situation, the VMkernel is aware of the device but is not involved in all facets of its presentation. The RDM format represents a different approach for cases where VMFS is not appropriate.

An NFS datastore serves a similar role to a VMFS volume: it is designed to store virtual machine files, templates, and ISO images. Currently, NFSv3 over TCP is the supported communication method. By default, VMs are deployed on thin-provisioned storage; however, if VAAI for File is being used, thick-provisioned storage is available.

You can reserve Flash Read Cache for any individual virtual disk. The Flash Read Cache is created only when a virtual machine is powered on, and it is discarded when a virtual machine is suspended or powered off. When you migrate a virtual machine, you have the option to migrate the cache; by default, the cache is migrated if the virtual flash modules on the source and destination hosts are compatible. Flash Read Cache supports write-through (read) caching; write-back (write) caching is not supported. Note: Not all workloads benefit from Flash Read Cache. The performance boost depends on your workload pattern and working set size. Read-intensive workloads with working sets that fit into the cache can benefit from a Flash Read Cache configuration. By configuring Flash Read Cache for read-intensive workloads, additional I/O resources become available on your shared storage, which can result in a performance increase for other workloads even though they are not configured to use Flash Read Cache.

VMware vSphere Storage APIs – Array Integration (VAAI), also referred to as hardware acceleration or hardware offload APIs, is a set of APIs that enable communication between ESXi hosts and storage arrays. The APIs define a set of "storage primitives" that enable the ESXi host to offload certain storage operations to the array. This reduces resource overhead on the ESXi hosts and can significantly improve performance for storage-intensive operations such as:
• Migrating virtual machines with Storage vMotion
• Deploying virtual machines from templates
• Cloning virtual machines or templates
• VMFS clustered locking and metadata operations for virtual machine files
• Writes to thin-provisioned and thick virtual disks
• Creating fault-tolerant virtual machines
• Creating and cloning thick disks on NFS datastores

Hardware-assisted locking, or Atomic Test & Set (ATS), is a locking mechanism required to maintain file system integrity and prevent another ESXi host from updating the same metadata. There should not be any SCSI reservations on a VMFS5 datastore: if a VMFS5 datastore is formatted on a VAAI-enabled array, it uses ATS, and ATS continues to be used even if there is contention. On non-VAAI arrays, SCSI reservations continue to be used for establishing critical sections in VMFS5 datastores.
Full Copy requests that the array perform a full copy of blocks on behalf of the VMkernel. It is primarily used in clone and migration operations such as Storage vMotion. Without VAAI, a clone or migrate operation must use the VMkernel's software Data Mover driver; if the files being cloned are multiple gigabytes in size, the operation could last from many minutes to many hours. The host-based copy consumes:
• CPU cycles
• DMA buffers
• SCSI commands in the HBA queue
The Write Same (Block Zeroing) operation offloads a request to zero out a disk to the VAAI-enabled storage array. This offloaded task zeroes large numbers of disk blocks without transferring the data over the transport link; if not offloaded, the initialization request consumes host resources. The operations that benefit from this feature are:
• Cloning operations for eagerzeroedthick target disks
• Allocating new file blocks for thin-provisioned virtual disks
• Initializing previously unwritten file blocks for zeroedthick virtual disks
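To show why the Full Copy offload matters, the following rough sketch compares how much data moves through the host with and without the offload; the VMDK size, extent count, and descriptor size are assumptions, not measured values.

    # Rough comparison of host-side data movement for a clone with and without
    # the Full Copy offload. Sizes are illustrative.

    VMDK_GB = 500
    COPY_COMMAND_BYTES = 100              # a copy/extent descriptor is tiny vs. the data

    def host_gb_moved(offloaded: bool, commands: int = 10_000) -> float:
        if offloaded:
            # Host only issues copy commands; the array moves the blocks internally.
            return commands * COPY_COMMAND_BYTES / 1e9
        # Software Data Mover reads and writes every block through the host.
        return VMDK_GB * 2                # read + write, in GB

    print(f"Without offload: ~{host_gb_moved(False):,.0f} GB through the host")
    print(f"With offload   : ~{host_gb_moved(True):.4f} GB of commands through the host")

The point is the orders-of-magnitude gap: with the offload, the host exchanges only commands while the array moves the data.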

VAAI NAS functionality is generally made available through a vendor storage plug-in.
Full File Clone, or Full Copy, enables virtual disks to be cloned by the NAS device rather than by the VMkernel's software Data Mover driver. Another benefit is that cold clone or deploy-from-template operations can be offloaded to the storage hardware, making more ESXi host CPU, memory, and network bandwidth available. This primitive cannot be called for a Storage vMotion operation.
Fast File Clone enables the creation of virtual machine snapshots to be offloaded to a NAS storage array in Horizon View and vCloud Director environments. You can elect to have desktops based on linked clones created directly by the storage array rather than by using ESXi host CPU resources, and likewise have vCloud vApps based on linked clones instantiated by the storage array rather than the ESXi host.
Extended Statistics enables visibility into space usage on NAS datastores. This is useful for thin-provisioned datastores because actual space usage statistics are reported; previously, VMware administrators needed to use array-based tools to manage and monitor how much space a thinly provisioned VMDK was consuming on a back-end datastore.
In previous versions of vSphere, only a thin VMDK could be created on NAS datastores. The Reserve Space primitive enables the creation of thick VMDK files on NAS datastores, so administrators can reserve all of the space required by a VMDK even when the datastore is NAS. Users now have the ability to select lazy-zeroed or eager-zeroed disks on the NAS datastore.

Thin Provisioning Stun was introduced to address the impact on virtual machines when a thin-provisioned datastore reaches 100 percent of capacity; vSphere cannot detect how much back-end space a thin-provisioned datastore is actually consuming. Without this primitive, an out-of-space condition affects all virtual machines running on the datastore. With it, if a thin-provisioned datastore reaches 100 percent usage, only those virtual machines requiring extra blocks of storage space are paused; those not needing additional space continue to run. After additional space is allocated to the thin-provisioned datastore, the paused virtual machines can be resumed. Indications of an overcommitted datastore with no available space:
• No more virtual machines can be provisioned
• Virtual machines stop running
There is also a Thin Provisioning Space Threshold warning, which appears in a VMware vCenter environment if a thin-provisioned datastore reaches a specific threshold. The threshold values are set on the array and are defined by the storage administrator; vSphere does not set this threshold.
Dead Space Reclamation is a VAAI primitive that uses the SCSI UNMAP command to let an ESXi host inform the storage array that it can reclaim space previously occupied by a virtual machine that has been migrated to another datastore or deleted.

Storage I/O Control (SIOC) is a feature introduced to provide I/O prioritization of virtual machines running on a group of vSphere hosts that access a shared storage pool. It extends the constructs of shares and limits, which already exist for CPU and memory, to address storage utilization through a dynamic allocation of I/O queue slots across a group of servers. Its goals are to:
• Avoid contention
• Prioritize which VMs are critical from an I/O perspective
• Provide better performance for I/O-intensive and latency-sensitive applications
It is important for database workloads and Exchange servers. SIOC is disabled by default; if it is not enabled, all hosts accessing the datastore get an equal portion of the datastore's I/O resources.
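The shares mechanism can be illustrated with a proportional-allocation sketch: during contention, each VM's portion of the datastore's device queue is its share count divided by the total. The share values and queue size below are made up.

    # Proportional share allocation of a datastore's I/O slots during contention,
    # in the spirit of SIOC. Share values and queue size are illustrative.

    DEVICE_QUEUE_SLOTS = 96
    vm_shares = {"sql_prod": 2000, "exchange": 1000, "test_vm": 500}

    total = sum(vm_shares.values())
    for vm, shares in vm_shares.items():
        slots = DEVICE_QUEUE_SLOTS * shares / total
        print(f"{vm:<10} {shares:>5} shares -> ~{slots:4.1f} queue slots")

With shares of 2000/1000/500, the critical database VM receives roughly four times the queue slots of the test VM when the datastore is congested.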

VMware Storage vMotion is a popular feature that provides the ability to move VM-related files from one storage location to another without VM interruption. An important characteristic of Storage vMotion is that it is storage-type independent, and the VM remains online and accessible during the operation. An important prerequisite to consider is that both the source and destination must be visible to the ESXi host. Performance of this feature depends on storage infrastructure bandwidth unless VAAI is being leveraged to assist in the operation. It is best to complete Storage vMotion operations at times of low storage activity and when the workload in the VM being moved is least active. VAAI offers better performance on VAAI-capable storage arrays; be aware that the source and destination must be on the same array or VAAI will not be leveraged. N-Port ID Virtualization (NPIV) does not support Storage vMotion; it does support vMotion, so care needs to be taken when planning the SAN infrastructure.

A VMware datastore cluster is a collection of datastores grouped together to present many storage resources as a single object. It is a key component of other VMware features such as Storage Distributed Resource Scheduler (SDRS) and Storage Profiles. There are certain configuration guidelines that should be followed:
• Datastores from different arrays are supported, but device characteristics must match
• ESXi 5.0 or later hosts are required
• The cluster must contain similar, interchangeable datastores
• VMFS3 and VMFS5 datastores can be mixed (but it is not recommended)

VMware Storage DRS (Distributed Resource Scheduler) automates moves via Storage vMotion. It leverages the datastore cluster construct and uses datastore capacity and I/O load to determine the optimal location for VM file placement. The more datastores a datastore cluster has, the more flexibility SDRS has to balance the cluster's load. It is recommended to monitor datastore I/O latency during peak hours to determine whether there are performance problems that can be, or are being, addressed. Make sure thin-provisioned devices do not run low on space: a VM that attempts to write to space that does not exist will be suspended.
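A toy version of the placement decision, considering only free capacity and observed latency, is sketched below; real SDRS uses richer statistics, and the datastore names and numbers here are invented.

    # Toy initial-placement decision in the spirit of SDRS: prefer datastores
    # with enough free space and the lowest observed latency. Numbers are made up.

    datastores = [
        {"name": "ds01", "free_gb": 900, "latency_ms": 12.0},
        {"name": "ds02", "free_gb": 400, "latency_ms":  4.0},
        {"name": "ds03", "free_gb":  60, "latency_ms":  2.0},
    ]

    def place(vm_size_gb: float):
        candidates = [d for d in datastores if d["free_gb"] >= vm_size_gb]
        return min(candidates, key=lambda d: d["latency_ms"]) if candidates else None

    print("Place 200 GB VM on:", place(200)["name"])

Here ds03 is excluded for lack of space and ds02 wins on latency, which mirrors how more datastores in the cluster give the scheduler more room to balance.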

The VMware DirectPath I/O feature allows a virtual machine to access a hardware device directly. It is used to save CPU cycles that no longer need to be spent in the VMkernel. It does impose limitations on what the VMkernel can do, because the VMkernel is no longer interacting with the device being presented.

Through Network I/O Control (NIOC), a network administrator can allocate I/O shares and limits to different traffic types based on their requirements. NIOC capabilities have been enhanced so that administrators can create user-defined traffic types and allocate shares and limits to them. Administrators can also provide I/O resources to the vSphere Replication process by assigning shares to the vSphere Replication traffic type.

In this lesson we will discuss the presentation of datastores. We will cover the various protocols used to perform this operation, thick and thin volume presentation, and some multipathing strategies.

VMware offers a diverse product line, so it is important to clarify what is expected in a vSphere offering. VMware features are governed by the license obtained; if a VMware feature or product is not purchased, the storage array will not be able to take advantage of that feature. In EMC engagements, it is common to expect the ESXi host to have an Enterprise Plus license.

The types of storage access supported by VMware determine the tools, data presentation, and supported functionality of the data. The interface to the data varies according to which access methodology is required and determines the level of administrative and environmental overhead; for example, a Fibre Channel (FC) infrastructure requires a fabric infrastructure, whereas a Network File System (NFS) infrastructure requires an Internet Protocol (IP) network.

The access methodology will determine the supported features and functionality of presented storage. These are key considerations when planning the VMware environment.

How storage is presented is a very important topic, and whether to use a thick or thin provisioning method can be an important decision. Understand that the array and the VM can each, independently, use thick or thin disks. Let's examine this chart and explain each methodology.
In the thin-thin methodology, both the VM and the storage array consume only the amount of space required to store the needed files. The size presented to the virtual machine may not represent what the storage array can actually provide. While this is a very efficient method, it must be watched closely to ensure a virtual machine does not try to consume what the array cannot present; failure to address this will result in virtual machine instability.
In the thin-thick methodology, the VM is presented with storage space that actually exists on the array, but the VM only uses the amount required to store its files, templates, and snapshots. Storage can be overprovisioned at the VM level; failure to address this properly at the VM level will result in virtual machine instability.
The thick-thin methodology is not commonly used, due to its contradictory nature. Allocating a thinly provisioned array volume to a thick-provisioned virtual machine would consume, or effectively convert, the thin array volume, defeating the purpose of this provisioning method.
The thick-thick methodology is typically associated with virtual machines that require the highest level of performance. While it is the least efficient from a storage consumption point of view, it offers very dependable and reliable access characteristics.

Provisioning a Fibre Channel LUN to an ESXi host for consumption involves some major considerations. The first decision is which user interface(s) will be used. The following are available and can be used in combination:
1. vSphere Web Client
2. vSphere Client, with or without plug-ins
3. Unisphere
Each interface, or combination of interfaces, has strengths and weaknesses; it is up to the administrator which method of interaction is used. Regardless of the interface used, a Fibre Channel LUN is presented to the ESXi host, and proper zoning per VMware and EMC guidelines is performed and validated. The ESXi host must either be rescanned (a non-disruptive process) or rebooted to recognize the presented devices.

Provisioning an iSCSI LUN to an ESXi host for consumption has some major components. Each interface, or combination of interfaces, has strengths and weaknesses; it is up to the administrator which method of interaction is used. Proper Ethernet network and VMkernel port creation (unless DirectPath I/O is configured) is completed and validated per VMware and EMC guidelines. The ESXi host must either be rescanned (a non-disruptive process) or rebooted to recognize the presented devices.

Provisioning an NFS file system to an ESXi host for consumption has some major components. Each interface, or combination of interfaces, has strengths and weaknesses; it is up to the administrator which method of interaction is used. Regardless of the interface used, an NFS file system is presented to the ESXi host. Proper Ethernet network and VMkernel port creation (or DirectPath I/O) is completed and validated per VMware and EMC guidelines. The ESXi host is not required to perform a rescan or reboot to recognize the presented file system; it is discovered automatically and dynamically.

Some general storage presentation guidelines to consider: try to keep a datastore under 80% capacity utilization, as this allows time to allocate more space or to relocate virtual machines. VM boot disks have different characteristics than an application's use of storage; boot disks usually generate limited IOPS, except at boot time, when demand can be high. VM boot disks can reside on a VMFS or NFS datastore. Avoid creating more than three snapshots of a VM, and do not retain them for long periods, as snapshots consume space and cause logging activity due to change tracking. If possible, enable Storage I/O Control (SIOC), as it offers many benefits in a solution.
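The 80% guideline is easy to turn into a simple check, as in the sketch below; the datastore names and capacities are invented for the example.

    # Flag datastores above the 80% utilization guideline. Capacities are made up.

    THRESHOLD = 0.80
    datastores = {"nfs_gold": (2048, 1400), "vmfs_silver": (1024, 900)}  # (capacity, used) GB

    for name, (capacity_gb, used_gb) in datastores.items():
        pct = used_gb / capacity_gb
        status = "OVER threshold - extend or relocate VMs" if pct > THRESHOLD else "ok"
        print(f"{name:<12} {pct:5.1%} used  {status}")

Crossing the threshold is the trigger to extend the datastore or move workloads before an out-of-space condition occurs.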

More general presentation guidelines include: use FAST Cache with appropriate workloads, so that random I/O can benefit from this VNX feature. Monitor data relocation on FAST VP LUNs and look for constant rebalancing; if that is occurring, consider adding higher-tier disks to the pool. The VNX supports FC, FCoE, and iSCSI, but accessing the same device via different transport methods is not supported. All designs and implementations should address redundancy and port distribution.

Here are some common SAN boot considerations. Solutions range from the simple to the complex; understanding the components and their configuration is important.

HBA queue depth can have a role in achieving performance goals. In specific situations, you may be required to adjust this variable.

There are common goals of multipathing in a Fibre Channel environment supporting an ESXi host. These goals can be defined by:
• Reliability: providing two or more I/O paths
• Scalability: a LUN has both SPs identified on VNX, and all presented director ports and engines on Symmetrix
• On VNX, a LUN is owned by only one SP, is assigned to an SP at creation, and supports LUN trespass
• On Symmetrix, a device is accessed through all presented director ports and engines

The multipathing configuration setting resides on the ESXi host. There are three path selection policies to choose from, but we only consider the Round Robin method for modern deployments. Ideally, we would prefer to have PowerPath Virtual Edition, but it is not an expectation of every environment.

EMC PowerPath Virtual Edition (PowerPath/VE) provides automated data path management and load-balancing capabilities, helping meet service levels with the highest application availability and performance. PowerPath/VE is the best multipathing option in a VMware environment. While it represents an additional cost to any solution, it provides management features across virtual environments, including:
• Reducing the number of multipathing software variants to manage
• Removing the need to monitor and rebalance the environment
• Support for Fibre Channel, FCoE, and iSCSI
• Adjusting I/O path usage to changes in I/O loads from VMs
• Simplifying provisioning by pooling all connections
• Optimizing overall I/O performance in VMware environments

Here are some common iSCSI boot considerations. There are many variables (IP addresses, iSCSI IQNs) that must be either configured or recorded. Validate your environment prior to proceeding.

When the goal is optimum host throughput, you can configure iSCSI port binding to create multiple paths. Multipathing is made possible by configuration at the ESXi VMkernel network adapter level; the default creates only one path from the software iSCSI adapter (vmhba) to each target. By binding ESXi NICs to the software iSCSI adapter, you enable failover at the path level, load-balance I/O traffic between paths, and can bind up to eight VMkernel adapters.
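The effect of port binding on path count can be illustrated simply: each bound VMkernel port logs in to each reachable target portal, so the number of paths is the product of the two. The adapter names and portal addresses below are placeholders.

    # Path count with iSCSI port binding: each bound VMkernel port logs in to
    # each reachable target portal. Names and addresses are illustrative.

    bound_vmk_ports = ["vmk1", "vmk2"]                       # up to 8 can be bound
    target_portals = ["10.0.1.10:3260", "10.0.2.10:3260"]    # e.g., one portal per SP

    paths = [(vmk, portal) for vmk in bound_vmk_ports for portal in target_portals]
    print(f"{len(paths)} paths to the LUN:")
    for vmk, portal in paths:
        print(f"  {vmk} -> {portal}")

Two bound adapters and two portals yield four paths, giving the path selection policy something to fail over to and balance across.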

These are a few topics that an implementer may face when configuring an iSCSI infrastructure.
• TCP Delayed Acknowledgement is an optimization mechanism intended to reduce network packet traffic by combining multiple TCP acknowledgements into a single response to the ESXi host. Delayed acknowledgements can occur during sequential writes and have the potential to insert a 200 ms delay (the delayed acknowledgement timeout value), which affects performance. This can be addressed by disabling TCP Delayed Acknowledgement on the 10 GbE card.
• Be aware that VMs can be configured to access iSCSI LUNs directly. This means the data transfer is no longer a VMkernel port transaction and is instead viewed as VM network traffic. This can also avoid a rescanning delay when using multiple storage arrays on different broadcast domains.

When addressing NFS multipathing, there are three major areas to consider: the EMC array, the Ethernet network, and the vCenter server or ESXi host configuration. From the array perspective, you need to create an LACP device and present the NFS file system. The network requirement is to validate end-to-end connectivity. vCenter requirements include creating a vSwitch for NFS datastores, adding two NICs from the ESXi host, and using the IP Hash NIC teaming policy. If there is no support for multi-chassis link aggregation, you can use the array's Fail-Safe Network (FSN) feature as an alternative.
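The idea behind the IP Hash teaming policy is that each source/destination IP pair maps deterministically to one uplink, so traffic to different Data Mover interfaces can use different NICs. The sketch below models that idea; it is not the exact hash vSphere uses, and the addresses are invented.

    # Model of "route based on IP hash" NIC teaming: the (source, destination)
    # IP pair maps deterministically onto one uplink, so different NFS server
    # addresses can use different uplinks. Conceptual only.

    import ipaddress

    uplinks = ["vmnic2", "vmnic3"]

    def pick_uplink(src_ip: str, dst_ip: str) -> str:
        h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
        return uplinks[h % len(uplinks)]

    for dst in ("192.168.50.10", "192.168.50.11"):   # two Data Mover interfaces
        print(f"vmk traffic to {dst} -> {pick_uplink('192.168.50.21', dst)}")

A single NFS server IP always hashes to the same uplink, which is why presenting the file systems on more than one interface is what actually spreads the load.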

This lesson addresses specific aspects of a VNX deployment with VMware.

EMC is responsible for providing the steps to integrate a solution into any environment. EMC storage presentation best practices are guidelines, not rules; understanding all the variables in a solution prior to implementation is key to success. Technical documentation, guides, VMware configuration maximums, and EMC TechBooks are frequently updated, so check these sources often.

Understanding what must be achieved, and how that achievement is measured, is part of every solution deployment. Several common metrics mentioned on this slide should be gathered and analyzed to assist with the deployment and tuning of a VNX integration with VMware. Tuning and maintenance of any solution is an ongoing process; the solution should be constantly monitored for changes in conditions or SLA non-compliance.

iSCSI connectivity provides an example of how configuration can impact the environment. A typical minimum configuration for an iSCSI environment is shown here, with a single VMkernel port per vSwitch and a single physical NIC. This configuration provides a secondary path in the event of an SP port or other storage failure, but it will not protect against other single points of failure, such as a network switch or NIC failure. The next slide shows the recommended configuration for an iSCSI environment.

To maximize the iSCSI configuration, iSCSI port binding is a valuable tool for configuring port redundancy and client connectivity, and it can make use of advanced networking features such as NIC teaming and jumbo frames (recommended). In the case of the VNX systems shown on this slide, the paths to the owning SP are active until a failure occurs; the secondary paths (blue dashed line) are configured in standby mode. This configuration ensures that a failure will not impact client connectivity or bandwidth. It enables the segmentation of iSCSI clients onto different network address segments, and ensures failover to a secondary path in the event of any network connectivity failure. This example's approach can be applied to any connectivity methodology, application workload, and SLA objective.

There are a large number of diverse options and features available to enhance EMC storage and VMware solutions. Not all of them will be implemented in every environment; they should be considered on their merits and their ability to enhance the final design and efficiency of a solution. In most cases, the goals set for a solution will determine which features are implemented. It is important that these features, and their interaction with the intended objectives, are well understood before they are implemented. This also means that a broad range of knowledge and expertise is required to fully implement, tune, and deploy the complex solutions monitored by IT today. This slide covers common or previously discussed VNX features that might be a factor in storage presentation. Solutions range from the simple to the complex; understanding which features will be used is important.

Some general VNX storage guidelines are listed here; most are good general practices for any storage environment. Limiting datastores to 80% of their capacity reduces the possibility of out-of-space conditions and gives administrators time to extend datastores when high storage utilization alerts are triggered. It is advisable to keep no more than three VM snapshots, and not for extended periods; it is better to use VM clones for point-in-time copies to avoid the overhead of change tracking and logging activity. Enable Storage I/O Control (SIOC) to control periods of high I/O; if response times are consistently high, redistribute the VMs to balance workloads. Use FAST Cache for frequently accessed random I/O workloads. Sequentially accessed workloads often take longer to warm FAST Cache, as they typically read or write data only once during an access operation, which means they are better serviced by SP cache. Monitoring data relocation on FAST VP LUNs enables administrators to increase the number of disks in the highest tier if a large percentage of data is constantly being rebalanced.

One of the most important aspects of a VMware solution is the set of replication features available to it, and understanding their interaction. This slide lists the tools available for cloning virtual machines and their supporting protocols and technologies, e.g., Fibre Channel, FCoE, iSCSI, Copy On First Write (COFW), and Redirect On Write (ROW).

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 76 The replication tool decision is further complicated when remote replication is added to the solution, because many more environmental considerations are added to the mix. Some of these are infrastructure bandwidth, host transaction acknowledgment requirements, the distance of replication, and the purpose of the replication. This slide illustrates the EMC tools for remote replication of Virtual Machines stored on VNX arrays and which protocols are supported by each of the tools.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 77 This lesson addresses the goals of a Symmetrix deployment, the components you would expect to use, and storage and ESXi boot guidelines.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 78 VMware is vendor neutral, and its documents do not discuss specific details of EMC storage array presentation; EMC is responsible for providing the steps to integrate a solution into any environment. EMC storage presentation best practices are guidelines, not rules. Understanding all the variables in a solution prior to implementation is key to success. Technical documentation, guides, VMware configuration maximums, and EMC TechBooks are frequently updated, so make sure to check these sources often.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 79 For Symmetrix storage arrays, there are several prerequisites required for ESXi operations:
• Common serial number
• Auto negotiation enabled
• SCSI 3 set enabled
• Unique world wide name
• SPC-2 flag set, or SPC-3, which is set by default at Enginuity 5876 or later
Most of these are ‘on’ by default. Consult the EMC Support Matrix for the latest port settings.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 80 This slide depicts the recommended port connectivity for ESXi servers and Symmetrix arrays. If there is only a single engine in the array, then the recommended connectivity is that each HBA should be connected to odd and even directors within the engine (top picture). If there are multiple engines in the array, then each HBA should be connected to different directors on different engines (bottom picture).

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 81 Gatekeepers are small devices presented to a management server to act as SCSI targets for SYMCLI commands. Generally, devices smaller than 8 MB are designated as gatekeeper devices. From Enginuity 5876 and Solutions Enabler v7.4, thin devices can be used as gatekeeper devices; these need not be bound to thin pools. However, gatekeepers cannot be shared among multiple Virtual Machines. If VMware Native Multipathing is being used, a round-robin policy is supported if the Enginuity code is higher than 5875; if the code is 5875 or lower, only the Fixed Path policy is supported. A management server is a server where the user has installed the EMC Solutions Enabler product and has SAN access to the Symmetrix array being managed. The following management applications require gatekeeper access:
• EMC Solutions Enabler
• EMC Symmetrix Management Console (SMC)
• EMC ControlCenter Infrastructure Symmetrix and SDM Agents
Each management server running EMC Solutions Enabler or SMC requires a minimum of five gatekeepers. If a management server is also running an EMC ControlCenter Agent, one additional gatekeeper is required, for a total of six. A host not performing management operations needs no gatekeepers.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 82 Striped metavolumes perform better than concatenated metavolumes when there are enough spindles to support them. However, if the striping leads to the same physical spindle hosting two or more members of the metavolume, striping loses its effectiveness; in such cases, using concatenated metavolumes may be better. It is not a good idea to stripe on top of a stripe, so if host striping is planned and metavolumes are being used, concatenated metavolumes are the better choice.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 83 EMC previously recommended using “zeroedthick” rather than “thin” virtual disks when using Symmetrix Virtual Provisioning, because using thin provisioning on two separate layers (host and storage array) increases the risk of out-of-space conditions for the virtual machines. Thin on thin is now acceptable in vSphere 4.1 or later with Symmetrix VMAX Enginuity 5875 and later. The use of thin virtual disks with Symmetrix Virtual Provisioning is facilitated by many new technologies in vCenter Server and Symmetrix VMAX Enginuity 5875, including features such as vStorage APIs for Storage Awareness (VASA), Block Zero and Hardware-Assisted Locking (ATS), and Storage DRS. It is important to remember that using “thin” rather than “zeroedthick” virtual disks does not provide increased space savings on the physical disks: a “zeroedthick” virtual disk only writes data written by the guest OS, so it does not consume more space in the Symmetrix thin pool than a similarly sized “thin” virtual disk.
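
As a hedged illustration of the disk formats discussed above, the pyVmomi sketch below adds a lazy-zeroed thick (“zeroedthick”) disk to an existing VM. The VM name, disk size, controller key, and unit number are assumptions for illustration, and it reuses the vCenter connection (content) from the earlier port-binding sketch.

# Minimal sketch: add a zeroedthick virtual disk to a VM.
from pyVmomi import vim

vm = content.searchIndex.FindByDnsName(None, 'app01.example.com', vmSearch=True)

disk = vim.vm.device.VirtualDisk()
disk.capacityInKB = 50 * 1024 * 1024                 # 50 GB
disk.controllerKey = 1000                            # first SCSI controller (typical key; verify)
disk.unitNumber = 1                                  # assumes this slot is free
disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    diskMode='persistent',
    thinProvisioned=False,                           # False + eagerlyScrub=False = zeroedthick
    eagerlyScrub=False,
    fileName='')                                     # empty name lets vSphere place it with the VM

spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)])
vm.ReconfigVM_Task(spec=spec)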

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 84 Symmetrix Virtual LUN migration allows an administrator to transparently migrate a host-accessible LUN from one storage tier to another without any host I/O disruption. The migration is allowed within the same Symmetrix storage system. Once the data is migrated successfully to a target LUN, the source LUN data is erased using the instant VTOC process. Virtual LUN migration uses a virtual RAID architecture that abstracts the device protection from its logical representation to a server. This virtual RAID architecture allows a device to have multiple simultaneous protection types, such as BCV, SRDF, Concurrent SRDF, and spares. With Symmetrix VMAX Enginuity 5875, the Virtual LUN feature supports thin-to-thin migration, which allows users to move Symmetrix thin devices from one thin pool to another thin pool on the same Symmetrix VMAX array. Virtual LUN migration supports Flash, Fibre Channel, and SATA disk drives and RAID types 1, 10, 5, and 6. In a VMware environment, if Symmetrix FAST technology is not used for storage tiering and automatic load balancing, the VMware administrator can use Virtual LUN migration to move a datastore from lower performing disk drives (e.g., SATA) to higher performing disk drives (e.g., EFD).

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 85 VMware Storage Distributed Resource Scheduler (SDRS) operates on a datastore cluster, which is a collection of datastores with shared resources. SDRS provides initial placement and ongoing balancing recommendations for datastores in an SDRS-enabled datastore cluster. The aim is to minimize the risk of overprovisioning one datastore, storage I/O bottlenecks, and performance impact on virtual machines. A datastore cluster can contain a mix of datastores with different sizes and I/O capacities, and the datastores can come from different arrays and vendors. However, EMC does not recommend mixing datastores backed by devices that have different properties, i.e., different RAID types or disk technologies, unless the devices are part of a FAST VP policy. Replicated datastores cannot be combined with non-replicated datastores in the SDRS cluster, and if SDRS is enabled, only manual mode is supported with replicated datastores. When EMC FAST (DP or VP) is used in conjunction with SDRS, only capacity-based SDRS is recommended; storage I/O load balancing is not recommended. Simply uncheck the “Enable I/O metric for SDRS recommendations” box for the datastore cluster. Unlike FAST DP, which operates on thick devices at the whole-device level, FAST VP operates on thin devices at the far more granular extent level. Because FAST VP is actively managing the data on disks, knowing the performance requirements of a VM (on a datastore under FAST VP control) is important before the VM is migrated from one datastore to another, because the exact thin pool distribution of the VM’s data may not be the same as it was before the move. Therefore, if a VM houses performance-sensitive applications, EMC advises not using SDRS with FAST VP for that VM. Preventing SDRS from moving the VM can be achieved by setting up a rule or using Manual Mode for SDRS.
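
A hedged pyVmomi sketch of that capacity-only configuration follows. The datastore cluster name is an assumption for illustration, and it reuses the vCenter connection (content) from the earlier port-binding sketch.

# Minimal sketch: enable SDRS with the I/O metric disabled and manual mode.
from pyVmomi import vim

pod_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in pod_view.view if p.name == 'FASTVP-DatastoreCluster')
pod_view.DestroyView()

spec = vim.storageDrs.ConfigSpec(podConfigSpec=vim.storageDrs.PodConfigSpec(
    enabled=True,
    ioLoadBalanceEnabled=False,    # capacity-based recommendations only
    defaultVmBehavior='manual'))   # manual mode, per the FAST VP / replication guidance above

content.storageResourceManager.ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True)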

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 86 When using VMware vSphere Storage Distributed Resource Scheduler (SDRS), it is recommended that all devices in the datastore cluster have the same Host I/O Limit and that all datastores come from the same VMAX family array. These recommendations are given because the Host I/O Limit throttles I/O to those devices. If a datastore cluster contains devices (whether from the same or a different array) that do not have a Host I/O Limit, there is always the possibility that, in the course of its balancing, SDRS will relocate virtual machines from the Host I/O limited devices to non-limited devices. Such a change might alter the desired quality of service or permit the applications on the virtual machines to exceed the desired throughput. It is therefore prudent to have device homogeneity when using the EMC Host I/O Limit in conjunction with SDRS.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 87 Symmetrix local replication tools are listed on this slide with their recommended purpose within the VMware environment.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 88 EMC replication technologies like TimeFinder and SRDF operate at the Symmetrix device level. Replicating a VMFS residing on a Symmetrix device requires that all the extents of the file system be replicated. If proper procedures are not followed, this forces the replication of all the data that is present on a VMFS, including virtual disk images that are not needed. Therefore, EMC recommends separating virtual machines that require storage array based replication onto one or more dedicated VMFS volumes. This does not completely eliminate replication of unneeded data, but it minimizes the storage overhead. Storage overhead can be eliminated by the use of Raw Device Mappings (RDMs) on virtual machines that require storage array based replication. VMware vSphere allows the creation of VMFS on partitions of physical devices, and VMFS also supports spanning of the file system across partitions, so it is possible to create a VMFS with part of the file system on one partition on one disk and another part on a different partition on a different disk. However, such designs complicate the management of the VMware storage environment. If TimeFinder or SRDF replication of VMFS is required, EMC strongly recommends creating only one partition per physical disk, and one device for each VMFS datastore.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 89 When VAAI Full Copy clone technology is being utilized with the Symmetrix array, there are several caveats to be aware of:
• Limit the number of Full Copy clones to a maximum of four.
• Several volumes do not support the Full Copy technology; the default VMware copy will be performed if any clone or Storage vMotion operation is performed on these volumes.
• Full Copy is not supported by Open Replicator.
• Certain RDF operations will be blocked until all data copy operations have been completed.
• If any form of metavolume reconfiguration is being performed, Full Copy will not be used.
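
To confirm that a host can actually offload clones before relying on Full Copy, the hedged pyVmomi sketch below reads the standard DataMover.HardwareAcceleratedMove advanced option. It reuses the host object from the earlier port-binding sketch.

# Minimal sketch: report whether VAAI Full Copy offload is enabled on a host.
opts = host.configManager.advancedOption.QueryOptions('DataMover.HardwareAcceleratedMove')
for opt in opts:
    state = 'enabled' if opt.value == 1 else 'disabled'
    print('%s: VAAI Full Copy offload is %s' % (host.name, state))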

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 90 This table compares the various Symmetrix clone technologies in a VMware environment.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 91 This table presents a guideline for choosing the cloning tool in a VMware environment.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 92 VMware ESXi assigns a unique signature to all VMFS volumes when they are formatted with VMFS. The unique signature and the VMFS label are also stored on the device. Because storage array replication technologies create exact replicas of the source volumes, all of this information, including the unique signature and the label, is replicated. If a copy of a VMFS volume is presented to any VMware ESXi host or cluster group, the ESXi host computes the signature for the device and compares the computed signature with the signature stored on the device. A TimeFinder replica device or an SRDF R2 device has a different unique ID than the source device, so the computed signature will differ from the stored signature. The signature mismatch tells the ESXi host that this is a replica VMFS volume. By default, the ESXi host automatically masks the copy (i.e., the replica VMFS is not mounted).

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 93 VMware vSphere has the ability to individually resignature and/or mount VMFS volume copies through the vSphere Client or with the CLI utility esxcfg-volume (vicfg-volume for ESXi). Volume-specific resignaturing allows much greater control over the handling of snapshots, which is very useful when creating and managing volume copies created by TimeFinder and/or SRDF.
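
As a hedged illustration of the same operation through the vSphere API rather than esxcfg-volume, the pyVmomi sketch below lists unresolved VMFS copies seen by a host and resignatures them. It reuses the host object from the earlier port-binding sketch.

# Minimal sketch: discover and resignature VMFS copies (e.g., TimeFinder/SRDF replicas).
from pyVmomi import vim

storage = host.configManager.storageSystem
for copy in storage.QueryUnresolvedVmfsVolume():
    print('VMFS copy "%s" (UUID %s) found' % (copy.vmfsLabel, copy.vmfsUuid))
    spec = vim.host.UnresolvedVmfsResignatureSpec(
        extentDevicePath=[extent.devicePath for extent in copy.extent])
    # Assign a new signature so the copy can be mounted alongside the original.
    storage.ResignatureUnresolvedVmfsVolume_Task(resolutionSpec=spec)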

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 94 If the VMware environment frequently presents clone volumes back to the original owning ESXi server, the LVM.enableResignature flag can be set to minimize the administrative overhead of resignaturing volumes. When this flag is set, all snapshot LUNs are automatically resignatured. This is a host-wide setting, not a per-LUN setting, so care must be exercised when setting it. The flag is set from the CLI.
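
A hedged pyVmomi sketch of setting the flag follows. The option key spelling matches this slide, but verify it against your ESXi release before use, and remember the warning above that this is a host-wide change. It reuses the host object from the earlier port-binding sketch.

# Minimal sketch: read and set the host-wide LVM.enableResignature advanced option.
from pyVmomi import vim

adv = host.configManager.advancedOption
print('Current value:', adv.QueryOptions('LVM.enableResignature')[0].value)

# Enable automatic resignaturing of all snapshot LUNs on this host (use with care).
adv.UpdateOptions(changedValue=[vim.option.OptionValue(key='LVM.enableResignature', value=1)])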

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 95 The Symmetrix Remote Data Facility Adapter (SRA) is a lightweight application that enables VMware Site Recovery Manager to interact with remote data copies being performed on a Symmetrix array. The EMC Virtual Storage Integrator (VSI) can be used in conjunction with the vSphere Client to provide a GUI interface for configuration and customization of the SRA.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 96 In a VMware vSphere 5.x environment, mounting a copy of a remote production VMFS volume requires planning. The issue is that, when the R2 is replicated with TimeFinder, the VMFS volume signature is copied to the TimeFinder replica. The R2 volumes have a different unique ID than their source R1 volumes; therefore, the computed signature on the R2 will differ from the signature on the R1. Similarly, the TimeFinder replica device has a different unique ID than its source R2 volume, and its computed signature will differ from the R2’s. If the same ESXi host sees both the R2 and the TimeFinder replica of the R2, it sees that both carry the same stored device signature, but the computed signatures of the R2 and the TimeFinder replica do not match that stored signature. When the ESXi host is presented with such duplicates, by default it will not allow the mount of either of the VMFS volumes. The CLI can be utilized to detect the presence of duplicate extents.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 97 In a VMware vSphere environment the easiest way to replicate a VM using RDMs is to use a copy of the configuration file of the source VM on the target ESXi server, instead of replicating the entire VMFS holding this information. The mapping file includes the unique ID and LUN number of the device it is mapping. The configuration file for the VM also contains an entry that includes the label of the VMFS holding the RDM and its name. If the VMFS holding the information is replicated and presented on the target ESXi, the virtual disks that provide the RDM mapping are also available on the target in addition to the configuration files. However, the mapping files cannot be used on the target since the cloned VM must be provided access to the devices holding the cloned copy of the data. Therefore, EMC recommends using a copy of the source VM’s configuration file instead of replicating the VMFS.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 98 In this lesson we will discuss monitoring tools, APIs, and plug‐ins found in the EMC storage environment with VMware.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 99 There are various tools that can be used to monitor the VMware vSphere and vCenter environments. These tools can be command-line or graphical. Be careful using tools inside a virtual machine: the ESXi host abstracts the physical resources, which may obscure the actual metrics of the environment and provide inaccurate data. This can skew expectations and mask whether the required performance goals are being met.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 100 The VMware esxtop tool provides a real-time view (updated every five seconds, by default) of ESX server worlds sorted by CPU usage. (The term world refers to processes running on the VMkernel.) esxtop requires the operator to understand the different modes in which it provides data, and it gives the operator the insight needed to identify and isolate performance-related issues. Two examples of using esxtop to address a storage-related concern are:
1. Check that the average latency of the storage device is not too high by verifying the GAVG/cmd metric.
• If Storage I/O Control (SIOC) is applied, the GAVG/cmd value must be below the SIOC setting.
• The default SIOC threshold is 30 ms. On VNX storage with EFD or other SSD storage, the value might be reduced to accommodate the faster disk type.
2. Monitor QFULL/BUSY errors if Storage I/O Control (SIOC) is not used.
• Consider enabling and configuring queue depth throttling.
• Queue depth throttling reduces the number of commands sent to the array when QFULL/BUSY errors are returned.
• Queue depth throttling is not compatible with Storage DRS.
resxtop is the remote version of the esxtop tool. Because VMware ESXi lacks a user-accessible service console where you can execute scripts, you can't use "traditional" esxtop with VMware ESXi. Instead, you have to use "remote" esxtop, or resxtop. The resxtop command is included with the vSphere Management Assistant (vMA), a special virtual appliance available from VMware that provides a command-line interface for managing both VMware ESX and VMware ESXi hosts.
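
As one hedged way to automate a GAVG-style latency check outside an interactive esxtop session, the pyVmomi sketch below reads the host's per-device total read/write latency counters from vCenter and flags anything over the 30 ms default threshold. The counter names and the real-time interval (20 seconds) are assumptions about a typical vCenter, and it reuses the connection (content, host) from the earlier port-binding sketch.

# Minimal sketch: flag devices whose latest total latency exceeds 30 ms.
from pyVmomi import vim

perf = content.perfManager
wanted = {'disk.totalReadLatency.average', 'disk.totalWriteLatency.average'}
counter_ids = [c.key for c in perf.perfCounter
               if '%s.%s.%s' % (c.groupInfo.key, c.nameInfo.key, c.rollupType) in wanted]

query = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=[vim.PerformanceManager.MetricId(counterId=cid, instance='*') for cid in counter_ids],
    intervalId=20,      # real-time (20-second) samples
    maxSample=1)

for result in perf.QueryPerf(querySpec=[query]):
    for series in result.value:
        latency_ms = max(series.value or [0])
        if latency_ms > 30:   # default SIOC congestion threshold
            print('Device %s latency %d ms exceeds 30 ms' % (series.id.instance, latency_ms))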

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 101 vCenter provides the ability to monitor performance at many levels, and doing so is a common task for VMware administrators. The advanced chart options contain a wide array of metrics that can be sorted in many ways. Understand the variables in this interface so you can correlate them with their possible impact on your solution. vCenter is also a good tool for demonstrating that everything is functioning within expectations.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 102 VMware vCenter Operations Manager is a component of the vCenter Operations Management Suite. It provides a simplified approach to operations management of the vSphere infrastructure, with operations dashboards that give insight and visibility into health, risk, and efficiency, along with performance management and capacity optimization capabilities. This is an advanced component and represents an additional cost above vCenter.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 103 VMware vStorage APIs for Storage Awareness (VASA) is a set of APIs that enable vCenter to see the capabilities of EMC storage array LUNs and their corresponding datastores. With visibility into the capabilities underlying a datastore, it is much easier to select the appropriate disk for virtual machine placement. Storage capabilities such as RAID level, thin or thick provisioning, replication state, and much more can now be made visible within vCenter. VASA forms the basis for Profile Driven Storage.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 104 Using the Virtual Storage Integrator (VSI) plug-in methodology, EMC has created a series of tools designed to enhance feature functionality. VSI for VMware vSphere Storage Viewer has the following goals:
• Facilitate the discovery and identification of EMC storage arrays
• Present underlying storage details to the virtual datacenter administrator
• Merge the data of several different storage mapping tools into a few seamless vSphere Client views
• Allow the VMware administrator to view storage array performance monitoring within the vSphere Client

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 105 Using the Virtual Storage Integrator (VSI) plug-in methodology, EMC has created a series of tools designed to enhance feature functionality. VSI for VMware vSphere Unified Storage Management has the following goals:
• Provision new storage (VMFS, NFS, and RDM volumes)
• Extend existing NFS and VMFS storage
• Provide additional NFS VMware datastore features (compression, Fast Clone, Full Clone)
• Assist with VMware View integration
• Provide properties of datastores and VMs

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 106 vStorage APIs for Data Protection (VADP) is a vCenter interface used to create and manage Virtual Machine snapshots. It utilizes Changed Block Tracking (CBT) to facilitate backups and reduce the amount of time and data transferred when backing up a Virtual Machine (after the initial full backup).
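
A hedged pyVmomi sketch of the underlying CBT call follows. The VM name and virtual disk device key (2000 is a typical first-disk key) are assumptions for illustration, CBT must already be enabled on the VM, and a changeId of '*' returns all allocated areas as for an initial full backup. It reuses the vCenter connection (content) from the earlier port-binding sketch.

# Minimal sketch: ask CBT which disk regions a backup needs to read.
vm = content.searchIndex.FindByDnsName(None, 'app01.example.com', vmSearch=True)
snap = vm.snapshot.currentSnapshot          # snapshot taken for the backup

changes = vm.QueryChangedDiskAreas(snapshot=snap, deviceKey=2000,
                                   startOffset=0, changeId='*')
for area in changes.changedArea:
    # A backup product would read and transfer only these regions.
    print('offset %d, length %d bytes' % (area.start, area.length))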

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 107 This course covered EMC storage best practices pertaining to ESXi host integration. It examined important considerations and presentation guidelines for introducing an EMC array into a vSphere environment. This concludes the training.

Copyright © 2013 EMC Corporation. All Rights Reserved. EMC Storage Integration with VMware vSphere Best Practices 108