
Thin Space Reclamation with EMC VPLEX Technical Notes Part Number H14055


Technical Notes

Thin Reclamation with EMC® VPLEX™

• VMware ESXi, Microsoft Windows, Generic UNIX / Linux
• EMC XtremIO™, EMC VMAX3™, and EMC VNX™

Abstract

This document describes manual procedures that can be used to reclaim consumed storage on thin LUNs using host-based tools along with VPLEX data mobility.

March 2015

Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.

Published March 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.



Contents

Chapter 1 Introduction
    Purpose
    Scope
    Audience
    Document Organization
    Process Overview

Chapter 2 Thin Provisioning
    VPLEX Thin Provisioning
    VPLEX Rebuilds for Thin Devices
    VPLEX Mobility to Reclaim Unused Space
        Extent Migrations
        Device Migrations
    VMware API for Array Integration (VAAI) Support
        Compare and Write
        WriteSame (16)
    VNX2 Thin Provisioning
    VMAX3 Thin Provisioning
        VMware vStorage API for VMAX3
    XtremIO Thin Provisioning
        XtremIO's Support for VAAI
    Thin Provisioning Summary

Chapter 3 VMware ESXi
    VMware ESXi Reclaim
    Virtual Machine Disks (VMDKs)
    Raw Device Mappings (RDMs)
    Datastores (VMFS)

Chapter 4 Generic UNIX / Linux
    UNIX / Linux Filesystem Reclaim
    The "dd" Command
    The "mount -o discard" Command
    The "fstrim" Command


Chapter 5 Microsoft Windows
    Thin Provisioning LUN Identification
    Storage Space Reclamation
    Using the UNMAP Command
    UNMAP Requests from Hyper-V
    Using the sdelete.exe Command
    Scripting with PowerShell

Appendix A VMware ESXi UNMAP Examples
    Space Reclamation with VMware ESXi
    vmkfstools --punchzero

Appendix B Windows RDM Example
    Space Reclamation with Microsoft Windows
    sdelete.exe

Appendix C Linux with EMC VPLEX and VNX
    Space Reclamation through VPLEX Mobility Jobs
    How Data Mobility Works
    VPLEX Data Mobility


Figures
    Figure 1 - Operating System Process Flow
    Figure 2 - VPLEX Virtualized Storage
    Figure 3 - VMware Storage Layers
    Figure 4 - LUN Utilization Prior to File Deletion
    Figure 5 - Deleting Files on the Guest Host
    Figure 6 - LUN Utilization After File Deletion
    Figure 7 - Using "dd" to Fill the Free Disk Space with Zeroes
    Figure 8 - LUN Utilization after Space Reclamation
    Figure 9 - Inflated VMDK Size prior to vmkfstools
    Figure 10 - Example of Running "vmkfstools --punchzero"
    Figure 11 - Deflated VMDK Size after running vmkfstools
    Figure 12 - File Size Prior to Running sdelete.exe
    Figure 13 - Example of running sdelete.exe
    Figure 14 - File size after running sdelete.exe
    Figure 15 - SuSE_OS_LUN_0 Consumed Capacity
    Figure 16 - Deleting a file and zeroing the filesystem
    Figure 17 - SuSE_OS_LUN_0 Consumed Capacity unchanged
    Figure 18 - Setting the Thin Rebuild Attribute
    Figure 19 - VPLEX Data Mobility
    Figure 20 - Create Device Mobility Job
    Figure 21 - Select Virtual Volume
    Figure 22 - Create Source / Target Mobility mapping
    Figure 23 - SuSE_OS_LUN_1 Consumed Capacity


Chapter 1 Introduction

This chapter presents the following topics:

Purpose

Scope

Audience

Document Organization

Process Overview


Purpose

Many applications have the potential to write zeroes to free space as part of standard initialization, allocation, or migration processes. Depending on the way the zeroes are written, the potential exists to reclaim the storage space allocated as a result of these processes. This technical note discusses some of the most common situations that cause zeroes to be written to storage devices.

Scope

This technical note outlines how to reclaim all-zero space and also how to reclaim previously used non-zero space with host-based applications.

Audience

This technical note is intended for EMC field personnel, partners, and customers who will be configuring, installing, and supporting VPLEX. An understanding of these core technologies is required:

• Server and Application Administration
• Storage Architecture and Network Design
• Fibre Channel Block Storage Concepts
• VPLEX Concepts and Components

Document Organization

This technical note is divided into multiple sections:

Section One: Each host operating system and its specific requirements for reclaiming all-zero marked space.

Section Two: The appendixes contain real-world examples for each host operating system.

Process Overview

The foundation of thin device space reclamation is that a zero written to disk can be reclaimed by the thin pool that provides the backing storage for the thin device. Depending on the back-end array, zeroes may be deduplicated in the array; in other cases, VPLEX's built-in thin awareness can be leveraged to drop zeroes during a data mobility job. In nearly all cases, when a file is written to a filesystem and then deleted, the space that was originally written for the file is not overwritten with zeroes by the standard delete commands used in UNIX and Windows systems. A manual process must be used to overwrite the newly freed space with zeroes so the space can then be reclaimed by the back-end storage array.


The first thing to consider is whether the system is a virtual machine or is running directly on hardware. If it is running as a virtual machine, additional clean-up will be required to fully reclaim the space, both at the hypervisor layer and at the storage array layer. The procedures for zeroing filesystems on UNIX and Windows are the same regardless of whether the system is virtualized.

Deduplication on the back-end storage array

Storage arrays that support deduplication, such as EMC XtremIO, will automatically reclaim and free space on thin LUNs as the zeroes are written to the newly freed space. No further action is required.

On storage arrays that do not support deduplication, VPLEX Data Mobility can be leveraged to move a zeroed thin LUN to a new thin LUN. Although VPLEX does not support SCSI UNMAP, it preserves the thinness of devices and transfers only the non-zero data to the new LUN, thereby re-thinning the device.

The following flowchart diagrams the basic procedures required to reclaim unused space on thin LUNs.

Figure 1 - Operating System Process Flow


Chapter 2 Thin Provisioning

This chapter presents the following topics:

VPLEX Thin Provisioning

VPLEX Rebuilds for Thin Devices

VPLEX Mobility to Reclaim Unused Space

VMware API for Array Integration (VAAI) Support

VNX2 Thin Provisioning

VMAX3 Thin Provisioning

XtremIO Thin Provisioning

Thin Provisioning Summary


VPLEX Thin Provisioning

Traditional (thick) provisioning anticipates future growth and thus allocates storage capacity beyond the immediate requirement. This implies that during a rebuild process all of the data will be copied from the source to the target.

With thin provisioning, storage capacity is allocated only as the application needs it, when it writes. This means that if a target is claimed as a thin device, VPLEX will read the storage volumes but will not write any unallocated blocks to the target, preserving the target's thin provisioning.

Benefits for VPLEX

Thinly provisioned volumes:

• Expand dynamically depending on the amount of data written to them.
• Do not consume physical space until written to.
• Optimize the use of the available storage space.

Figure 2 - VPLEX Virtualized Storage

Note: By default, VPLEX treats all storage volumes as if they were thickly provisioned. You can tell VPLEX which storage volumes are thinly provisioned by claiming them with the thin-rebuild attribute.

VPLEX Rebuilds for Thin Devices

When claiming back-end storage, VPLEX requires the user to specify thin provisioning for each back-end storage volume. Storage volumes that have been claimed as thin devices can be migrated onto thinly provisioned storage volumes while allocating only the amount of thin storage pool capacity that is actually consumed.

VPLEX preserves the unallocated thin pool space of the target storage volume by detecting zeroed data content prior to writing, and then skipping those unused blocks to prevent unnecessary allocation. If a storage volume is thinly provisioned,


the "thin-rebuild" attribute must be to "trrue" prior to the storage volumes being used for Data Mobility, Raid-1 or DR1.

If a thinly provisioned storage volume contains non-zero data before being connected to VPLEX, the performance of the migration or RAID 1 rebuild is adversely affected because every block must be copied.
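For reference, the thin-rebuild attribute can be set from the VPLEX CLI with the set command. The following is a minimal sketch; the storage-volume name is hypothetical, and the context path should be verified against the VPLEX CLI guide for your release:

VPlexcli:/> set /clusters/cluster-1/storage-elements/storage-volumes/VNX_Thin_LUN_01::thin-rebuild true
VPlexcli:/> ll /clusters/cluster-1/storage-elements/storage-volumes/VNX_Thin_LUN_01

The ll listing should then report thin-rebuild as true for the claimed volume.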

VPLEX Mobility to Reclaim Unused Space

Among the many use cases for VPLEX Mobility, one is to move from thick to thin, or thin to thin, devices (or extents) to reclaim unused space, because VPLEX is unable to leverage the SCSI UNMAP functions.

Note: In most cases, modern operating systems now offer methods of reclaiming unused (or deleted) space from mounted storage volumes. However, there are many older versions that do not offer SCSI UNMAP support, and VPLEX Mobility offers a great method of resolving this problem.

Extent Migrations

Extent migrations move data between extents in the same cluster. Use extent migrations to:

• Move extents from a "hot" storage volume shared by other busy extents
• Defragment a storage volume to create more contiguous free space
• Perform migrations where the source and target have the same number of volumes with identical capacities

Device Migrations

Device migrations move data between devices on the same cluster or between devices on different clusters.

Use device migrations to:

• Migrate data between dissimilar arrays
• Relocate a "hot" volume to a faster array
• Relocate devices to new arrays in a different cluster

VMware API for Array Integration (VAAI) Support

On VPLEX, VAAI is implemented using the following two SCSI commands:

• "Compare and Write" (CAW) offloads coordination of powering virtual machines (VMs) on/off and moving them between ESX servers.

• "WriteSame (16)" offloads copying data to and from the array through the hypervisor.

Compare and Write

The CompareAndWrite (CAW) SCSI command is used to coordinate VMware operations such as powering VMs on and off, and moving VMs from one ESXi server to another


without halting applications (vMotion), and Distributed Resource Scheduler (DRS) operations.

CAW is used by VMware ESXi servers to relieve storage contention, which may be caused by SCSI reservations in distributed VM environments. CAW assists storage hardware acceleration by allowing ESX servers to lock a region of the disk instead of the entire disk.

VPLEX allows CAW to be enabled or disabled at either the system level or the storage-view level. When CAW is disabled on VPLEX, VPLEX virtual volumes do not include CAW support information in their responses to inquiries from hosts.

Note: VM operations may experience significant performance degradation if CAW is not enabled.

WriteSame (16)

The WriteSame (16) SCSI command provides a mechanism to offload the initialization of virtual disks to VPLEX. WriteSame (16) requests that the server write blocks of data transferred by the application client multiple times to consecutive logical blocks.

WriteSame (16) is used to offload VM provisioning and snapshotting in vSphere to VPLEX, which enables the array to perform copy operations independently without using host cycles. The array can schedule and execute the copy function much more efficiently.

VNX2 Thin Provisioning

For native VMware environments, the Virtual Machine File System (VMFS) has many characteristics that are thin-friendly. First, a minimal number of thin extents are allocated from the pool when a VMware file system is created on thin LUNs. Also, a VMFS datastore reuses previously allocated blocks, which is beneficial to thin LUNs.

When using RDM volumes, the file system or device created on the guest OS dictates whether the RDM volume is thin-friendly.

When creating a VMware virtual disk, LUNs can be provisioned as:

• Thick Provision Lazy Zeroed
• Thick Provision Eager Zeroed
• Thin Provision

Thick Provision Lazy Zeroed is the default and recommended virtual disk type for thin LUNs. When using this method, the storage required for the virtual disk is reserved in the datastore, but the VMware kernel does not initialize all the blocks at creation.

As of vSphere 5, there is also the ability to perform thin LUN space reclamation at the storage system level. VMFS 5 uses the SCSI UNMAP command to return space to the storage pool when created on thin LUNs. SCSI UNMAP is used any time VMFS 5 deletes a file, such as during Storage vMotion, VM deletion, or snapshot deletion. Earlier


versions of VMFS would only return the capacity at the file system level. vSphere 5 greatly simplifies the process by conducting space reclamation automatically.
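For reference, ESXi 5.5 also allows an on-demand reclaim to be issued against a VMFS datastore from the ESXi shell. This is a hedged sketch; the datastore label below is hypothetical:

# esxcli storage vmfs unmap --volume-label=SuSE_OS_Datastore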

Note: When using Thin Provision, space required for the virtual disk is not allocated at creation. Instead, it is allocated and zeroed out on demand.

VMAX3 Thin Provisioning

All VMAX3 arrays are pre-configured with Virtual Provisioning (VP) to help reduce cost, improve capacity utilization, and simplify storage management. In fact, the VMAX3 only supports thin devices and no longer uses any thick devices.

VMware vStorage API for VMAX3

VMware vStorage APIs for Array Integration (VAAI) offload Virtual Machine (VM) operations to the VMAX3 array to optimize server performance. VAAI enables ESXi servers to free up server resources by offloading certain operations to the array. For VMAX3, these operations are:

• Full Copy - This operation offloads replication to VMAX3 to enable much faster deployment of VMs, snaps, clones, and Storage vMotion operations.

• Block Zero - This operation allows you to rapidly initialize file system blocks and virtual disk space.

• Hardware-Assisted Locking - This operation optimizes metadata updates and assists with virtual desktop deployments.

• UNMAP - This operation allows VMs to reclaim zeroed space within VMDK files and datastores, making more efficient use of disk space. This unused space is automatically returned to the thin pool where it originated.

• VMware vSphere Storage API for Storage Awareness (VASA)

XtremIO Thin Provisioning

XtremIO arrays are inherently thinly provisioned. When the host allocates a thick eager-zeroed virtual disk with VAAI block zeroing, the XtremIO array still thinly provisions the space, starting with no consumed SSD space at all. The preparation or initialization of such an EZT disk is extremely fast because writing zeroes results in metadata operations only. With every unique 4KB block written, exactly 4KB of space is incrementally consumed. So you get the best of both worlds: deduplication and thin provisioning benefits with no run-time overhead of lazy-zeroed or thin-format virtual disks on the ESX hosts.

XtremIO's Support for VAAI

When the ESX host issues an UNMAP command, the specific LBA-to-fingerprint mapping is erased from the metadata, and the reference count of the underlying block corresponding to that fingerprint is decremented. When a subsequent read arrives for that erased LBA, the XtremIO array returns a zero block (assuming the reference count was decremented to zero) because the entry no longer exists in the mapping metadata. There is no need to immediately erase the now-de-referenced 4K block on SSD, which avoids any erase overhead.


When a host writes a zero block to an XtremIO array at a certain LBA, the array immediately recognizes that this is a 4KB block filled with zeroes, because all zero blocks have the same unique content fingerprint, which is well known to the array. Upon identification of this fingerprint, the array immediately acknowledges the write to the host without doing anything internally.

XtremIO has global inline deduplication, which means that no matter how many times a specific 4KB data pattern is written to the array, there is only ever one copy of it stored on flash. For all of the logical 4KB zero blocks, there are mappings from their logical addresses (LBAs) to the same unique zero-block fingerprint, and that fingerprint is mapped to the single zero block stored on SSD.

Thin Provisioning Summary

In summary, it is important to note that even though VPLEX fully supports thin provisioning across dozens of heterogeneous back-end arrays, there is still some work to be done to facilitate SCSI UNMAP commands between VPLEX, back-end storage arrays, and host operating systems.

The VNX2, VMAX3, and XtremIO back-end arrays all natively support SCSI UNMAP commands and VAAI feature sets, but each of these back-end arrays handles space reclamation differently while being virtualized with VPLEX.

This is where VPLEX Mobility can help, by enabling the transparent movement of data between extents and/or devices to trim the zeroed space and return it to each respective thin pool.

Note: VPLEX Mobility jobs are all done online without the requirement to take the host offline for any reason. This ensures complete transparency to the user environments.


Chapter 3 VMware ESXi

This chapter presents the following topics:

VMware ESXi Reclaim

Virtual Machine Disks (VMDKs)

Raw Device Mappings (RDMs)

Datastores (VMFS)


VMware ESXi Reclaim

In the VMware ESXi environment, there are two layers of the storage stack that must be zeroed for storage reclamation to take place: the VM's filesystems, which are contained in a Virtual Machine Disk file (VMDK) at the virtual machine layer, and the datastore, which is created as a Virtual Machine File System (VMFS) at the ESXi layer. This section discusses procedures for each of these layers.

Figure 3 - VMware Storage Layers

Virtual Machine Disks (VMDKs)

If the space to be reclaimed is part of a VMDK file and is in use by a guest operating system, the guest operating system's filesystems must first be zero-written before continuing with the ESXi-specific procedures. This is covered in detail for each OS later in this document.

Note that if the VMDK files were allocated as thin VMDKs, the procedure to zero out the guest operating system's filesystem (creating a temporary file that inflates the filesystem to its maximum capacity, then deleting that temporary file) will also inflate the space allocation of the thinly provisioned VMDK. VMware has therefore provided a CLI tool to trim the zeroed space and "re-thin" the device after it has been temporarily inflated. This tool is "vmkfstools --punchzero". For more information see the VMware vSphere 5.5 Documentation Center: Using vmkfstools.

Note: The VMDK must be free of any locks prior to running vmkfstools --punchzero on it. As a result, the virtual machine that is using the VMDK must be powered off prior to running the --punchzero command.
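For example, from the ESXi shell, with the VM powered off (the VMDK path below is hypothetical):

# vmkfstools --punchzero /vmfs/volumes/SuSE_OS_Datastore/SuSE_VM/SuSE_VM.vmdk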

Raw Device Mappings (RDMs)


If the filesystem is located on a Raw Device Mapping (RDM) device, only follow the procedure for the VM's respective operating system. No further action is required, since RDMs are not under hypervisor control.

Datastores (VMFS)

Similar to deleting and reclaiming space for files located on a VMDK, deleting virtual machines and the associated files stored on a datastore will not automatically reclaim the storage on that datastore. This can be done by using the "dd" command.

Using the "dd" command

The procedure to write zeroes to the unused space is the same as on a UNIX filesystem: use the "dd" command to create a temporary file that writes zeroes until it fully consumes all available disk space, then immediately delete the temp file.

# dd if=/dev/zero of=/zeroes bs=102400
# rm /zeroes

Note: It is critical to understand that this procedure will temporarily fill the datastore filesystem completely. Any virtual machines with VMDKs on the datastore using thin LUNs may experience out-of-space errors during this time. It is recommended that all virtual machines associated with that datastore be powered down prior to zeroing the datastore filesystem.
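A minimal sketch of the same procedure from the ESXi shell, assuming a datastore named SuSE_OS_Datastore (hypothetical):

# dd if=/dev/zero of=/vmfs/volumes/SuSE_OS_Datastore/zeroes bs=1048576
# rm /vmfs/volumes/SuSE_OS_Datastore/zeroes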


Chapter 4 Generic UNIX / Linux

This chapter presents the following topics:

UNIX / Linux Filesystem Reclaim

The "dd" Command

The "mount -o discard" Command

The "fstrim" Command


UNIX / Linux Filesystem Reclaim

Deleting files on UNIX or Linux filesystems does not automatically zero out the data. Only the pointer in the filesystem header is removed, leaving the data intact on the disk. There are a few ways to resolve this issue:

1. Using the "dd" command
2. Using the "mount -o discard" option
3. Using a cron job to run "fstrim" at a scheduled interval

The "dd" Command

The procedure to write zeroes to unused space on UNIX is to use the "dd" command to create a zero-filled file that fully consumes all available disk space, then immediately delete the file.

# dd if=/dev/zero of=/zeroes bs=102400
# rm /zeroes

Note: It is critical to understand that this procedure will temporarily fill the filesystem completely. Any applications on the host that try to write to the filesystem may receive out-of-space errors during this time. It is recommended that all applications associated with that filesystem be shut down prior to zeroing the filesystem.

The “mount –o discard” Command The option allows you to automatically TRIM deleted file “mount –o discard” that were using the EXT4 file system. There is however a noticeable performance penalty in sending TRIM commands after every delete which can make deletion much slower than usual on some drives.

To enable automatic TRIM on a mount point, it must be mounted with the discard option in fstab. Follow these steps:

1) Back up your fstab, then open it for editing:

# cp /etc/fstab ~/fstab-

# vi /etc/fstab

2) Add discard to the fstab options for each drive or mount point:

/dev/sdb1  /app1  ext4  discard,errors=remount-ro  0  1

3) Save and exit fstab, then reboot. Automatic TRIM is now enabled.


The "fstrim" Command

The "fstrim" command is used on a mounted filesystem to discard (or "trim") blocks that are not in use by the filesystem. This is extremely useful for thinly provisioned storage where you need to discard all unused blocks in the filesystem.
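For a one-off reclaim, fstrim can be run directly against a mount point; the -v flag reports how many bytes were discarded. The mount point below is hypothetical:

# fstrim -v /opt/app1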

Scheduling "fstrim" for most storage volumes should start with a trimming frequency of once a week. Once a baseline of behavior has been established, increase or decrease the frequency to meet your needs. To schedule "fstrim", follow these steps:

1) Create a CRON job to run once a week:

vi /etc/cron.weekly/fstrim

2) Add the following to the fstrim file:

#! /bin/sh

# By default we assume only / is on a "Thin" device.
# You can add more "Thin" mount points, separated by spaces.
# Make sure all mount points are within the quotes.
# For example:
# THIN_MOUNT_POINTS='/ /boot /home /opt/app1 /opt/app2'

THIN_MOUNT_POINTS='/'

for mount_point in $THIN_MOUNT_POINTS
do
    fstrim $mount_point
done

3) Make the script executable:

sudo chmod +x /etc/cron.weekly/fstrim

4) Finally, run it:

sudo /etc/cron.weekly/fstrim

Note: TRIM has been defined as a non-queued command by the T13 subcommittee, and consequently it incurs a massive execution penalty if used after each filesystem delete command. The non-queued nature of the command requires the driver to first finish any outstanding operations, issue the TRIM command, and then resume normal commands. For this reason TRIM can take a long time to complete and may even trigger some garbage collection, depending on your back-end storage array.


Chapter 5 Microsoft Windows

This chapter presents the following topics:

Thin Provisioning LUN Identification

Storage Space Reclamation

Using the UNMAP Command

UNMAP Requests from Hyper-V

Using the sdelete.exe Command

Scripting with PowerShell


Thin Provisioning LUN Identification

With Microsoft Windows Server 2012, thin provisioning is an end-to-end storage provisioning solution. Thin provisioning features included with Windows Server 2012 include logical unit (LUN) identification, threshold notification, handles for resource exhaustion, and space reclamation.

Windows Server 2012 has adopted the T10 SCSI Block Command 3 (SBC3) standard specification for identifying thinly provisioned LUNs. During the initial target device enumeration, the Windows Server gathers the back-end storage device properties to determine the provisioning type and the UNMAP and TRIM capabilities.

Note: The storage device reports its provisioning type and UNMAP and TRIM capability according to the SBC3 specification.

Storage Space Reclamation

Space reclamation can be triggered by file deletion, a file system level trim, or a storage optimization operation. File system level trim is enabled for a storage device designed to perform "read return zero" after a trim or an unmap operation.

Using the UNMAP Command

When a large file is deleted from the file system or a file system level trim is triggered, Windows Server converts the file delete or trim notifications into a corresponding UNMAP request. The storage port driver stack translates the UNMAP request into a SCSI UNMAP command or an ATA TRIM command according to the protocol type of the storage device. During storage device enumeration, the Windows storage stack gathers information about whether the storage device supports UNMAP or TRIM commands. The UNMAP request is sent to the storage device only if the device has SCSI UNMAP or ATA TRIM capability.
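For reference, on Windows Server 2012 a file system level trim can also be triggered on demand with the ReTrim operation of the defrag utility; the drive letter below is hypothetical:

C:\> defrag D: /L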

Note: Windows Server does not adopt T10 SCSI WRITE SAME command sets.

UNMAP Requests from Hyper-V

During virtual machine (VM) creation, a Hyper-V host sends an inquiry about whether the storage device where the virtual hard disk (VHD) resides supports UNMAP or TRIM commands. When a large file is deleted from the file system of a VM guest operating system, the guest operating system sends a file delete request to the virtual machine's virtual hard disk (VHD) or VHD file. The VM's VHD or VHD file tunnels the SCSI UNMAP request to the class driver stack of the Windows Hyper-V host, as follows:

• If the VM has a VHD, the VHD converts SCSI UNMAP or ATA TRIM commands into a Data Set Management I/O control code (IOCTL DSM) TRIM request, and then sends the request to the host storage device.


• If the VM has a VHD file, the VHD file system converts SCSI UNMAP or ATA TRIM commands into file system-level trim requests, and then sends the requests to the host operating system.

Note: Windows Hyper-V also supports IOCTL DSM TRIM calls from the guest operating system.

Using the sdelete.exe Command

Deleting files on older versions of Microsoft Windows Server does not automatically zero out the data. Only the pointer in the filesystem header is removed, leaving the data intact on the disk.

The procedure to write zeroes to unused space on Microsoft Windows is to use the sdelete.exe command, which provides a '-z' flag to fill the unused filesystem space with zeroes.

Downloading sdelete.exe

Sdelete.exe is available from Microsoft TechNet as part of the Windows Sysinternals tools. It may be downloaded from:

https://technet.microsoft.com/en-us/sysinternals/bb897443.aspx

Using sdelete.exe

Usage: sdelete [-p passes] [-s] [-q] ...
       sdelete [-p passes] [-z|-c] [drive letter] ...

-a         Remove Read-Only attribute
-c         Clean free space
-p passes  Specifies number of overwrite passes (default = 1)
-q         Don't print errors (Quiet)
-s or -r   Recurse subdirectories
-z         Zero free space (good for virtual disk optimization)

The sdelete.exe command is used against a Windows filesystem as follows:

C:\sdelete.exe -z

Scripting with PowerShell

The following link is for a PowerShell script that creates a file called ThinSAN.tmp on the specified volume, then fills that volume with zeroes, leaving 5 percent free space. This allows a thinly provisioned storage array to mark that drive space as unused and reclaim the space on the physical disks. Here is the link:

http://blog.whatsupduck.net/2012/03/powershell-alternative-to-sdelete.html


Appendix A VMware ESXi UNMAP Examples

This appendix presents the following topics:

Space Reclamation with VMware ESXi


Space Reclamation with VMware ESXi

The environment used for this example consists of a SuSE Linux virtual machine running on ESXi 5.5. A 30 GB datastore (SuSE_OS_Datastore) has been created and the guest operating system installed on it. The datastore is a VPLEX LUN with XtremIO storage backing.

vmkfstools --punchzero

In this example, a large file was deleted in a guest VM. The operating system immediately makes the space available, but the datastore and array layers are not aware that this space is now available.

Note: The host commands demonstrated here are fully explained in the Generic UNIX / Linux chapter. The same applies for Microsoft Windows Servers and Hyper-V VMs.

Prior to deleting any files, the ESXi server reports that there is 11.5 GB used on the SuSE_OS_LUN volume. This volume is presented to the host as an RDM.

Figure 4 - LUN Utilization Prior to File Deletion

At this point we delete the desired file(s) on the virtual machine.

Figure 5 - Deleting Files on the Guest Host


After the deletion, notice that the ESXi Server still reports that 11.5 GB is being utilized. This means that no space has been reclaimed by the array.

Figure 6 - LUN Utilization After File Deletion

Since this VM is leveraging a version of the SuSE operating system that does not currently support the TRIM and UNMAP operations, we will use the "dd" command to write zeroes over the unclaimed disk space.

Figure 7 - Using "dd" to Fill the Free Disk Space with Zeroes

Since our back-end array is XtremIO, which automatically deduplicates zeroes, the array will automatically identify and optimize the freed space. In this example, we have cleared the 4.5 GB file and the XtremIO array has deduplicated the zeroes, immediately freeing up that space without any further action.

Figure 8 - LUN Utilization after Space Reclamation


At this point, the RDM disk has been zeroed and the space has been automatically deduplicated by the XtremIO back-end array. However, the VMDK file that is stored on the ESXi datastore has been inflated by both the original file and the temporary zeroes file that was created during the thinning process. To resolve this discrepancy we will also need to run the "vmkfstools --punchzero" command.

Figure 9 - Inflated VMDK Size prior to vmkfstools

This 16 GB includes the deleted file plus the zeroes that were written over the deleted file. Follow these steps:

1. Shut down the guest VM using the VMDK to release any file/device locks.
2. Run vmkfstools --punchzero on the VMDK.

Figure 10 - Example of Running “vmkfstools --punchzero"

3. Verify that the VMDK has been resized in vCenter.

Figure 11 - Deflated VMDK Size after running vmkfstools


Appendix B Windows RDM Example

This appendix presents the following topics:

Space Reclamation with Microsoft Windows



Space Reclamation with Microsoft Windows

The environment used for this example consists of a Windows Server virtual machine running on a 40 GB VMDK file located on the WINServ_Datastore. Our test consists of copying an ISO file to a 10 GB RDM allocated from VPLEX and backed by XtremIO.

Note: Windows Server 2012 performs space reclamation by default, and sdelete.exe is not needed to deflate the RDM devices.

sdelete.exe

In this example, a large file was copied to a Raw Device Mapping (RDM) volume attached to a guest VM. The operating system does not automatically make the space available, so the sdelete.exe command is required to free that unused space. The vmkfstools command is also needed on ESXi to deflate the inflated files on the datastore.

Prior to deleting any files, the Windows Server reports that there is 4.49 GB used on the WIN2012_RDM-1 volume.

Figure 12 - File Size Prior to Running sdelete.exe

As previously discussed, file deletion on older versions of Microsoft Windows Server does not automatically zero out the data. Only the pointer in the filesystem header is removed, leaving the data intact on the disk.

This is where the sdelete.exe command provides a quick and easy way to fill unused filesystem space with zeroes and ultimately facilitate space reclamation.
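As a hedged illustration of what the following figures show, assuming the RDM is presented as drive E: (hypothetical):

C:\> sdelete.exe -z e: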


Figure 13 - Example of running sdelete.exe

After deleting the file(s), the Windows Server reports that there is 145 MB used on the WIN2012_RDM-1 volume.

Figure 14 - File size after running sdelete.exe

At this point, the RDM disk has been zeroed and the space has been reclaimed by the back-end storage array. However, the VMDK file that is stored on the ESXi datastore has been inflated by both the original file and the temporary zeroes file that was created during the thinning process. To resolve this discrepancy we will also need to run the "vmkfstools --punchzero" command, as in the previous example in Appendix A.


Appendix C Linux with EMC VPLEX and VNX

This appendix presents the following topics:

Space Reclamation through VPLEX Mobility Jobs


Space Reclamation through VPLEX Mobility Jobs

How Data Mobility Works

VPLEX Data Mobility, along with VPLEX's thin awareness, can be leveraged to re-thin a device that is backed by a storage array that does not automatically deduplicate zeroes written to a thin disk. The environment in this example is a Linux virtual machine running on a datastore on a VPLEX virtual volume backed by an EMC VNX thin LUN.

In this example, a large file was deleted in the guest VM. The operating system immediately makes the space available, but the space is not returned to the thin pool in the storage array.

Note: This example demonstrates a thin-to-thin migration in order to re-thin a device. The same procedure can also be used thick-to-thin, to convert a thickly provisioned device to a thin device.

Prior to beginning, make note of the consumed space on the VNX LUN that provides storage to the VPLEX virtual volume. A 4.5 GB file has been copied to the filesystem and approximately 14.7 GB of capacity has been consumed.

Figure 15 - SuSE_OS_LUN_0 Consumed Capacity

The file is then deleted and the filesystem is zeroed per the Generic UNIX / Linux dd procedure.


Figure 16 - Deleting a file and zeroing the filesystem

After zeroing, no change in Consumed Capacity is observed on the VNX LUN. This is expected, because the VNX will not automatically deduplicate zeroes.

Figure 17 - SuSE_OS_LUN_0 Consumed Capacity unchanged

VPLEX Data Mobility to a new thin device will facilitate reclaiming the zeroed space. VPLEX's thin awareness will not write the zeroes to the target device, thereby freeing the space on the LUN.

VPLEX Data Mobility

In this example, a thin-to-thin VPLEX data mobility job will be completed, with the "Thin Rebuild" attribute having been set during the claiming process.

Figure 18 - Setting the Thin Rebuild Attribute

Under the Data Mobility tab, select Move Data within Cluster to set up the Data Mobility job.


Figure 19 - VPLEX Data Mobility

Then click on the Create Data Mobility Jobs button.

Figure 20 - Create Device Mobility Job

After selecting the local cluster on which to create the Data Mobility job, select the virtual volume that is being used by the operating system filesystem. In this case, it is the SuSE_OS_vol virtual volume.

Figure 21 - Select Virtual Volume

Then select the backing device and create the source-target mapping by identifying an unused device that is backed by a new thin LUN on the array.


Figure 22 - Create Source / Target Mobility mapping

Start the Data Mobility job and commit it upon completion. The data has been transferred to the new LUN, named SuSE_OS_LUN_1. The space that was previously consumed by the 4.5 GB file has been reclaimed and the new LUN is only consuming approximately 10.2 GB.
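For reference, the same device mobility job can be driven from the VPLEX CLI instead of the GUI. This is a sketch with hypothetical device names; verify the syntax against the VPLEX CLI guide for your release:

VPlexcli:/> dm migration start -n reclaim_suse -f device_SuSE_OS_0 -t device_SuSE_OS_1

When the migration reaches 100 percent, it is finalized with dm migration commit, dm migration clean, and dm migration remove, after which the original LUN can be retired as described below.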

Figure 23 - SuSE_OS_LUN_1 Consumed Capacity

At this point, the original LUN can be removed from VPLEX and deleted from the array. This returns all of its consumed space to the thin pool, making it available for future use.

VPLEX Data Mobility, in coordination with operating-system-based utilities, offers a seamless method for re-thinning a LUN after freeing space on the host operating system.
