Mythbusting Goes Virtual: Debunking Four Common vSphere "Truths"
Written by Scott D. Lowe

Introduction
The information being presented in this paper comes courtesy of the great minds of Eric Sloof, a VMware Certified Instructor, vExpert, consultant and active VMware community member; and Mattias Sundling, vExpert and Dell evangelist focused on the virtualization space. The information presented here was discussed in depth during an April 2, 2012 webcast with Mattias Sundling and Eric Sloof.

Regardless of the underlying technology solution, as anything becomes increasingly popular and widespread in use, certain pieces of sometimes inaccurate information about that product become permanent fact, often taking on legend-like status. Moreover, as a product matures, it changes; it evolves by taking on new features, shedding old ones and improving the functionality everywhere else. However, no matter how much a product matures and no matter how much it evolves, many products carry with them myths that follow them through the ages—myths that may or may not have once been true, but that are used as truisms nonetheless even as the version count rises ever higher. In this white paper, we will expose four such myths about vSphere.

Myth #1: RDMs have better performance than VMFS

What is RDM?
A raw device mapping (RDM) is created when a vSphere administrator configures a virtual machine's virtual disk to point directly to, for example, a LUN (logical unit number) on a storage array. With an RDM in place, a virtual machine can access that storage just as if it were any other disk.

RDMs operate as follows: the virtual machine's initial access to an RDM virtual disk results in the virtual machine being pointed to a small mapping file. This mapping file is a symbolic link containing the raw ID of the intended storage on the storage array. Once it learns that raw ID, the virtual machine points directly to the raw ID on the storage array and no longer needs to make use of the mapping file, as illustrated in Figure 1.

Figure 1. A VM initially accesses an RDM virtual disk using a mapping file, but subsequently uses the raw ID.

The source of the myth
Because the virtual machine is accessing storage directly and not going through some of the abstraction that takes place when the hypervisor is placed in the middle, there is a myth that RDMs have superior performance over virtual storage devices that make use of vSphere Virtual Machine File System (VMFS) datastores. Evidence of this myth abounds in forum articles and other resources outlining administrators' attempts to use RDMs to eke out as much performance as possible for storage-intensive workloads, such as those supporting databases.

It's not unreasonable to assume that "raw" would translate into increased performance for the virtual machine, but this myth has been well and truly busted; in fact, RDMs operate with performance characteristics on par with VMFS storage. This becomes apparent as one starts to peer under the covers at what's happening on the host system and, in particular, at how storage calls interact with the hypervisor kernel. By monitoring the kernel, the entire story of how storage operates becomes clear. Through this monitoring, an administrator can watch the "hidden" story of storage activities and the impact these activities have on overall performance.

RDMs have two modes: virtual and physical
When considering the use of RDMs, bear in mind that they come in two different flavors (a configuration sketch follows this list):
• Virtual compatibility mode—When an RDM is configured in virtual mode, it appears to the guest operating system just like a virtual disk housed inside a VMFS volume. With this mode, administrators are still able to enjoy the benefits that come with the use of VMFS, including advanced file locking and snapshots. Further, because virtual mode continues to provide a level of hardware abstraction, it is more portable across storage hardware than physical mode.
• Physical compatibility mode—When an RDM is in physical mode, the volume takes on the characteristics of the mapped device, which provides the greatest flexibility in managing the volume using native SAN tools. However, physical RDMs lose some of the features found with virtual volumes, including the ability to be snapshotted, cloned, made into a template, or migrated if the migration involves copying the disk.
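The choice between these two modes is made when the mapped LUN is attached to the virtual machine. As a rough sketch only—neither the paper nor the webcast includes code—the following Python fragment uses the open-source pyVmomi library to add an RDM disk to an existing virtual machine; the device path, controller key and unit number are placeholder assumptions, and error handling is omitted.

    # Sketch, not from the paper: attach a LUN to an existing VM as an RDM using pyVmomi.
    # Assumptions: 'vm' is a vim.VirtualMachine retrieved through a connected ServiceInstance,
    # and 'lun_device_path' is the LUN's device path, e.g. '/vmfs/devices/disks/naa.<id>'.
    from pyVmomi import vim

    def add_rdm_disk(vm, lun_device_path, physical_mode=False):
        backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
        backing.deviceName = lun_device_path
        backing.compatibilityMode = 'physicalMode' if physical_mode else 'virtualMode'
        backing.diskMode = 'independent_persistent' if physical_mode else 'persistent'

        disk = vim.vm.device.VirtualDisk()
        disk.backing = backing
        disk.controllerKey = 1000   # assumed key of the VM's existing SCSI controller
        disk.unitNumber = 1         # assumed free slot on that controller
        disk.key = -1               # disk size is dictated by the mapped LUN itself

        change = vim.vm.device.VirtualDeviceSpec()
        change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        change.device = disk

        # The mapping file is created alongside the VM; in virtual mode it behaves
        # like any other VMDK for snapshot, clone and Storage vMotion purposes.
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

Whichever mode is chosen, the trade-off is the one described in the list above: virtual mode keeps the VMFS feature set, while physical mode hands the device through to the guest for native SAN tooling.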
Testing the myth
To evaluate this myth, Eric performed tests using three distinct scenarios, two involving RDMs and one using VMFS as primary storage. The tests used a single virtual machine configured with a SCSI adapter but with four different volumes, each configured like this:
• Virtual RDM
• Physical RDM
• VMDK file on VMFS
• A disk that connects to an iSCSI target through the Microsoft iSCSI initiator that ships with all current editions of Windows

Otherwise, the environment was configured as follows:
• vSphere 5.0, virtual machine hardware version 8
• The virtual machine was running Windows Server 2008
• The virtual machine was configured with 4 GB of memory
• A single virtual CPU was added to the virtual machine
• The virtual machine was connected to the local area network's Cisco 2960 switch
• The storage being used was an Iomega PX6

In directly measuring the latency as storage commands make their way through the kernel, Eric discovered that there isn't much of a difference among the storage configurations, since they all have to go through the kernel—except the iSCSI option, which simply goes out over the network and connects to an iSCSI target directly. However, at 1 Gbps, iSCSI had a top throughput rate of 112.6 MBps.

The results? Busted!
In testing, Eric discovered that there was very little difference between either of the RDM configurations and the VMFS configuration. In other words, while there may be other reasons to choose an RDM-based volume over a VMFS-based volume, doing so for performance reasons alone isn't necessary.

VMware's test results
Even VMware has busted this myth in a pretty big way, as shown in Figure 2. The PDF file from which the chart was sourced includes a wide variety of test cases that fully debunk the RDM vs. VMFS myth.

Figure 2. Random mixed I/O per second (higher is better)

Reasons to choose VMFS over RDMs
Now, understanding that performance isn't a reason to choose RDMs, what are some better reasons to choose VMFS? VMware has spent years improving VMFS and, with vSphere 5, has made tremendous improvements to this robust, cluster-aware file system, with features such as:
• Storage I/O Control
• Storage vMotion
• Storage DRS
• Large volume size: 64 TB
• Large VMDK file size: 2 TB
• Changed block tracking (CBT) support (CBT tracks all of the storage blocks in a virtual machine that have changed since a point in time; enabling it is sketched just after this list.)
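CBT is not switched on by default; it is a per-virtual-machine setting. As a minimal sketch under the same assumptions as before (pyVmomi, with 'vm' already looked up; this code is not taken from the paper), enabling it is a single reconfiguration:

    # Sketch: enable changed block tracking on one VM (requires virtual hardware version 7+).
    from pyVmomi import vim

    def enable_cbt(vm):
        if vm.config.changeTrackingEnabled:
            return None                      # CBT is already on for this VM
        spec = vim.vm.ConfigSpec(changeTrackingEnabled=True)
        task = vm.ReconfigVM_Task(spec=spec)
        # The new setting only takes effect after a stun/unstun cycle, for example
        # creating and deleting a snapshot or powering the VM off and back on.
        return task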
When to choose RDMs over VMFS
Even though RDMs don't offer better performance, there are times when an RDM should be considered. When a virtual machine needs access to a particularly large single volume—one that is greater than 2 TB in size—an administrator might consider using a physical RDM, which provides direct access to a volume of up to 64 TB in size and is not subject to VMDK file size limits, which remain at 2 TB. Note that this 64 TB capability is valid only for physical RDMs; virtual RDMs are still limited to a size of 2 TB.

Another time when RDMs may reasonably come into play is when there is a need to perform SAN snapshotting, which results in snapshots that are not supported by vSphere. Before a SAN can take a snapshot, the virtual machine must be quiesced, which means that the virtual machine needs to flush buffered data to disk and prepare for the snapshot. If you are using SAN snapshots, which do not communicate with the vSphere layer, then you need to use RDMs with native file systems, such as NTFS or EXT3.

Another scenario that requires the use of RDMs arises when there is a need to cluster virtual machines with Microsoft Cluster Service (MSCS).

CBT tracks all of the storage blocks in a virtual machine that have changed since a point in time. This feature is incredibly powerful because backup and replication technologies can rely on vSphere's own vStorage application programming interfaces (APIs), rather than either on drivers and software developed from scratch or on traditional full and incremental backup methodologies for data protection.

Requirements for using CBT
A number of requirements must be met for CBT to operate:
• Since CBT was introduced in vSphere 4, the host must be running at least that version of vSphere.
• CBT must actually be enabled for the virtual machine. This will be discussed below.
• The virtual machine being tracked must be running virtual hardware version 7 or above.
• The virtual machine must be using a storage mechanism that runs through the vSphere storage stack. Such mechanisms include VMFS, NFS and RDMs in virtual compatibility mode. However, an RDM in physical compatibility mode is not supported. iSCSI initiators installed inside a virtual machine do not work with CBT, either.

Benefits of CBT
By using CBT, administrators can drastically shrink their organization's backup windows, since the backup application doesn't need to scan the VMDK files for block changes when doing incremental or differential backups; the disk subsystem is also less heavily utilized during backups. Even when a full backup is performed, CBT can be useful in that the backup application can ask vSphere which blocks of a virtual disk are actually in use and skip the unallocated regions.
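To make that benefit concrete, the sketch below (again pyVmomi, with simplified assumptions about the snapshot and change ID bookkeeping; it is not part of the paper) shows how a backup application might ask vSphere for just the disk areas that changed since the change ID recorded at the previous backup. Passing '*' as the change ID instead returns the areas that are currently allocated, which is what allows even a full backup to skip empty space.

    # Sketch: query the changed areas of a VM's first virtual disk since a prior backup.
    # Assumptions: CBT is enabled, 'snapshot' was taken for this backup run, and
    # 'last_change_id' was saved from the previous run ('*' means "all allocated areas").
    from pyVmomi import vim

    def changed_areas(vm, snapshot, last_change_id='*'):
        disk = next(d for d in vm.config.hardware.device
                    if isinstance(d, vim.vm.device.VirtualDisk))
        offset = 0
        disk_size = disk.capacityInKB * 1024
        while offset < disk_size:
            info = vm.QueryChangedDiskAreas(snapshot=snapshot,
                                            deviceKey=disk.key,
                                            startOffset=offset,
                                            changeId=last_change_id)
            for extent in info.changedArea:
                yield extent.start, extent.length   # byte offset and length to back up
            offset = info.startOffset + info.length # continue past the area just examined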