Storage Protocol Comparison White Paper

TECHNICAL MARKETING DOCUMENTATION v1.0 / Updated April 2012

Table of Contents

Introduction
Storage Protocol Comparison Table
Conclusion
About the Author


Introduction

VMware is frequently asked for guidance on the best storage protocol to use with VMware vSphere®. vSphere supports many storage protocols, with no preference given to any one over another. However, many customers still want to know how these protocols stack up against each other and to understand their respective pros and cons.

This white paper looks at common storage protocols from a vSphere perspective. It is not intended to delve into performance comparisons, for the following two reasons:

• The Performance Engineering team at VMware already produces excellent storage performance white papers.

• Storage protocol performance can vary greatly, depending on the storage array vendor. It therefore does not make sense to compare, say, iSCSI and NFS from one vendor, because another vendor might implement one of those protocols far better.

If you are interested in viewing performance comparisons of storage protocols, the "Conclusion" section of this paper includes links to several such documents.


Storage Protocol Comparison Table

The comparison below covers four protocols, one category at a time: iSCSI, NFS, Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE).

Description

• iSCSI: iSCSI presents block devices to a VMware® ESXi™ host. Rather than accessing blocks from a local disk, I/O operations are carried out over a network using a block access protocol. In the case of iSCSI, remote blocks are accessed by encapsulating SCSI commands and data into TCP/IP packets. Support for iSCSI was introduced in VMware® ESX® 3.0 in 2006.

• NFS: NFS presents file devices over a network to an ESXi host for mounting. The NFS server/array makes its local file systems available to ESXi hosts. ESXi hosts access the metadata and files on the NFS array/server using an RPC-based protocol. VMware currently implements NFS version 3 over TCP/IP. Support for NFS was introduced in ESX 3.0 in 2006.

• Fibre Channel: Fibre Channel (FC) presents block devices similar to iSCSI. Again, I/O operations are carried out over a network using a block access protocol. In FC, remote blocks are accessed by encapsulating SCSI commands and data into FC frames. FC is commonly deployed in the majority of mission-critical environments and has been the only one of these four protocols supported on ESX since the beginning.

• FCoE: Fibre Channel over Ethernet (FCoE) also presents block devices, with I/O operations carried out over a network using a block access protocol. In this protocol, SCSI commands and data are encapsulated into Ethernet frames. FCoE has many of the same characteristics as FC, except that the transport is Ethernet. VMware introduced support for hardware FCoE in vSphere 4.x and for software FCoE in VMware vSphere® 5.0 in 2011.
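To make the key difference concrete, the short sketch below (illustrative Python, not part of any VMware product) lists the layering each protocol wraps around a SCSI command or file operation. The point to note is that iSCSI and NFS ride on TCP/IP, whereas FC and FCoE do not.

```python
# Conceptual sketch (not a protocol implementation): the layering each protocol
# wraps around a SCSI command or NFS file operation, from the outside in.
ENCAPSULATION = {
    "iSCSI": ["Ethernet frame", "IP packet", "TCP segment", "iSCSI PDU", "SCSI command/data"],
    "NFS":   ["Ethernet frame", "IP packet", "TCP segment", "RPC call", "NFS file operation"],
    "FC":    ["FC frame", "SCSI command/data (FCP)"],
    "FCoE":  ["Ethernet frame", "FCoE header", "FC frame", "SCSI command/data (FCP)"],
}

for protocol, layers in ENCAPSULATION.items():
    print(f"{protocol:5s}: " + " -> ".join(layers))
```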

Implementation Options

• iSCSI: Network adapter with iSCSI capabilities, using the software iSCSI initiator and accessed through a VMkernel (vmknic) port; or a dependent hardware iSCSI initiator; or an independent hardware iSCSI initiator.

• NFS: Standard network adapter, accessed through a VMkernel port (vmknic).

• Fibre Channel: Requires a dedicated host bus adapter (HBA), typically two for redundancy and multipathing.

• FCoE: Hardware converged network adapter (CNA); or a network adapter with FCoE capabilities, using the software FCoE initiator.



Performance Considerations

• iSCSI: iSCSI can run over a 1Gb or a 10Gb TCP/IP network. Multiple connections can be multiplexed into a single session, established between the initiator and the target. VMware supports jumbo frames for iSCSI traffic, which can improve performance. Jumbo frames send payloads larger than 1,500 bytes. Support for jumbo frames with IP storage was introduced in ESX 4, but not on all initiators. (See VMware knowledge base articles 1007654 and 1009473.) iSCSI can introduce overhead on a host's CPU (encapsulating SCSI data into TCP/IP packets).

• NFS: NFS can run over 1Gb or 10Gb TCP/IP networks. NFS also supports UDP, but the VMware implementation requires TCP. VMware supports jumbo frames for NFS traffic, which can improve performance in certain situations. Support for jumbo frames with IP storage was introduced in ESX 4. NFS can introduce overhead on a host's CPU (encapsulating file I/O into TCP/IP packets).

• Fibre Channel: FC can run on 1Gb/2Gb/4Gb/8Gb and 16Gb HBAs, but 16Gb HBAs must be throttled to run at 8Gb in vSphere 5.0. Buffer-to-buffer credits and end-to-end credits throttle throughput to ensure a lossless network. This protocol typically affects a host's CPU the least, because HBAs (required for FC) handle most of the processing (encapsulation of SCSI data into FC frames).

• FCoE: This protocol requires 10Gb Ethernet. With FCoE, there is no IP encapsulation of the data as there is with NFS and iSCSI, which reduces some of the overhead/latency. FCoE is SCSI over Ethernet, not IP. This protocol also requires jumbo frames, because FC payloads are 2.2K in size and cannot be fragmented.
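As a rough illustration of why jumbo frames matter for IP storage, the sketch below counts the frames and header bytes needed to move a 1 MiB I/O at the standard 1,500-byte MTU versus a 9,000-byte jumbo MTU. It assumes plain IPv4 and TCP headers only and ignores TCP options, iSCSI PDU headers and Ethernet framing, so the numbers are indicative rather than exact.

```python
# Back-of-the-envelope sketch: why jumbo frames reduce per-I/O overhead for
# IP storage. Assumes plain IPv4 (20 B) + TCP (20 B) headers and ignores
# TCP options, iSCSI PDU headers and Ethernet framing for simplicity.
def frames_for_io(io_bytes: int, mtu: int, ip_tcp_headers: int = 40):
    payload_per_frame = mtu - ip_tcp_headers
    frames = -(-io_bytes // payload_per_frame)          # ceiling division
    header_bytes = frames * ip_tcp_headers
    return frames, header_bytes

io_size = 1 << 20                                       # a 1 MiB I/O
for mtu in (1500, 9000):                                # standard vs jumbo frame
    frames, overhead = frames_for_io(io_size, mtu)
    print(f"MTU {mtu}: {frames} frames, {overhead} header bytes of overhead")
```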

Load Balancing

• iSCSI: The VMware Pluggable Storage Architecture (PSA) provides a round-robin (RR) path selection policy (PSP) that distributes load across multiple paths to an iSCSI target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.

• NFS: There is no load balancing per se on the current implementation of NFS, because there is only a single session. Aggregate bandwidth can be configured by creating multiple paths to the NAS array, accessing some datastores via one path and other datastores via another.

• Fibre Channel: The VMware Pluggable Storage Architecture (PSA) provides a round-robin (RR) path selection policy (PSP) that distributes load across multiple paths to an FC target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.

• FCoE: The VMware Pluggable Storage Architecture (PSA) provides a round-robin (RR) path selection policy (PSP) that distributes load across multiple paths to an FCoE target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.
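The sketch below illustrates the round-robin idea described above: each LUN cycles through its available paths, so driving several LUNs concurrently spreads I/O evenly across all paths. It is a conceptual model only, not VMware's PSP_RR implementation, and the path identifiers are hypothetical.

```python
# Conceptual sketch of a round-robin path selection policy: each LUN cycles
# through the available paths, so concurrent I/O to several LUNs spreads
# evenly across all paths. Illustration only, not VMware's PSP_RR.
from itertools import cycle
from collections import Counter

paths = ["vmhba1:C0:T0", "vmhba1:C0:T1", "vmhba2:C0:T0", "vmhba2:C0:T1"]  # hypothetical path identifiers
luns = {lun_id: cycle(paths) for lun_id in range(4)}    # one round-robin iterator per LUN

io_per_path = Counter()
for i in range(1000):                                   # simulate 1,000 I/Os
    lun_id = i % len(luns)                              # I/O spread across the 4 LUNs
    io_per_path[next(luns[lun_id])] += 1

print(io_per_path)                                      # each path ends up with an equal share
```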



Resilience

• iSCSI: VMware PSA implements failover via its Storage Array Type Plug-in (SATP) for all supported iSCSI arrays. The preferred method to do this for software iSCSI is with iSCSI binding implemented, but it can also be achieved by adding multiple targets on different subnets mapped to the iSCSI initiator.

• NFS: Network adapter teaming can be configured so that if one interface fails, another can take its place. However, this relies on a network failure and might not be able to handle error conditions occurring on the NFS array/server side.

• Fibre Channel: VMware PSA implements failover via its Storage Array Type Plug-in (SATP) for all supported FC arrays.

• FCoE: VMware PSA implements failover via its Storage Array Type Plug-in (SATP) for all supported FCoE arrays.

Error Checking

• iSCSI: iSCSI uses TCP, which resends dropped packets.

• NFS: NFS uses TCP, which resends dropped packets.

• Fibre Channel: FC is implemented as a lossless network. This is achieved by throttling throughput at times of congestion, using B2B and E2E credits.

• FCoE: FCoE requires a lossless network. This is achieved by the implementation of a pause frame mechanism at times of congestion.
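The following sketch models the credit-based flow control that keeps FC lossless: a sender may transmit only while it holds buffer-to-buffer credits, and each credit is returned (via R_RDY) when the receiver frees a buffer. This is a conceptual illustration of the mechanism, not a protocol implementation.

```python
# Conceptual sketch of buffer-to-buffer credit flow control: a sender may only
# transmit while it holds credits, and each credit is returned (R_RDY) once the
# receiver frees a buffer. This is how FC avoids dropping frames rather than
# retransmitting them as TCP does.
class CreditedLink:
    def __init__(self, credits: int):
        self.credits = credits                  # advertised receive buffers

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        assert self.can_send(), "sender must wait: no credits left"
        self.credits -= 1                       # one buffer consumed at the receiver

    def receive_r_rdy(self) -> None:
        self.credits += 1                       # receiver freed a buffer, credit returned

link = CreditedLink(credits=8)
sent = 0
while link.can_send():                          # burst until credits are exhausted
    link.send_frame()
    sent += 1
print(f"sent {sent} frames, now waiting for R_RDY before sending more")
```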

Security

• iSCSI: iSCSI implements the Challenge Handshake Authentication Protocol (CHAP) to ensure that initiators and targets trust each other. VLANs or private networks are highly recommended, to isolate the iSCSI traffic from other traffic types.

• NFS: VLANs or private networks are highly recommended, to isolate the NFS traffic from other traffic types.

• Fibre Channel: Some FC switches support the concept of a VSAN, to isolate parts of the storage infrastructure. VSANs are conceptually similar to VLANs. Zoning between hosts and FC targets also offers a degree of isolation.

• FCoE: Some FCoE switches support the concept of a VSAN, to isolate parts of the storage infrastructure. Zoning between hosts and FCoE targets also offers a degree of isolation.
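For reference, the CHAP exchange used by iSCSI works as sketched below (per RFC 1994): the target sends an identifier and a random challenge, and the initiator returns an MD5 digest computed over the identifier, the shared secret and the challenge, so the secret itself never crosses the wire. The secret and values shown are illustrative.

```python
# Sketch of the CHAP exchange used by iSCSI (per RFC 1994): the target sends an
# identifier and a random challenge, and the initiator proves knowledge of the
# shared secret by returning MD5(identifier + secret + challenge). The secret
# itself is never transmitted. Names and values here are illustrative only.
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-chap-secret"                    # configured on both initiator and target
identifier, challenge = 0x27, os.urandom(16)      # sent by the target in the clear

response = chap_response(identifier, secret, challenge)          # computed by the initiator
assert response == chap_response(identifier, secret, challenge)  # target verifies the same way
print(response.hex())
```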



VMware vSphere Storage APIs – Array Integration (VAAI) Primitives

• iSCSI: Although VMware vSphere® Storage APIs – Array Integration (VAAI) primitives can vary from array to array, iSCSI devices can benefit from the following full complement of block primitives: atomic test/set, full copy, block zero, thin provisioning and UNMAP. These primitives are built in to ESXi and require no additional software installed on the host.

• NFS: Again, these vary from array to array. The following VAAI primitives are available on NFS devices: full copy (but only with cold migration, not with VMware vSphere® Storage vMotion®), preallocated space (WRITE_ZEROs) and cloned offload using native snapshots. A plug-in from the storage array vendor is required for VAAI NAS.

• Fibre Channel: Although VAAI primitives can vary from array to array, FC devices can benefit from the following full complement of block primitives: atomic test/set, full copy, block zero, thin provisioning and UNMAP. These primitives are built in to ESXi and require no additional software installed on the host.

• FCoE: Although VAAI primitives can vary from array to array, FCoE devices can benefit from the following full complement of block primitives: atomic test/set, full copy, block zero, thin provisioning and UNMAP. These primitives are built in to ESXi and require no additional software installed on the host.
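The value of the full copy primitive can be seen with some simple arithmetic: without offload, every block of a clone is read into the host and written back out, so all of the data crosses the storage network twice; with offload, the host issues a single copy command and the array moves the blocks internally. The sketch below, with an assumed 1 MiB block size and a hypothetical 40 GiB virtual disk, makes the difference explicit.

```python
# Conceptual sketch of the VAAI "full copy" primitive: without offload the host
# reads every block and writes it back, so all data crosses the storage network
# twice; with offload the host issues one copy command and the array moves the
# blocks internally. Block size and disk size are assumptions for illustration.
BLOCK_SIZE = 1 << 20                            # assume 1 MiB blocks

def clone_without_vaai(blocks: int) -> int:
    host_bytes = 0
    for _ in range(blocks):
        host_bytes += BLOCK_SIZE                # read travels array -> host
        host_bytes += BLOCK_SIZE                # write travels host -> array
    return host_bytes

def clone_with_vaai(blocks: int) -> int:
    return 0                                    # one offloaded copy command; data stays on the array

blocks = 40 * 1024                              # a hypothetical 40 GiB virtual disk
print(f"without VAAI: {clone_without_vaai(blocks) >> 30} GiB through the host")
print(f"with VAAI full copy: {clone_with_vaai(blocks)} bytes through the host")
```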

ESXi Boot from SAN

• iSCSI: Yes
• NFS: No
• Fibre Channel: Yes
• FCoE: Software FCoE – No; hardware FCoE (CNA) – Yes

RDM Support

• iSCSI: Yes
• NFS: No
• Fibre Channel: Yes
• FCoE: Yes

Maximum Device Size

• iSCSI: 64TB
• NFS: Refer to the NAS array vendor or NAS server vendor for the maximum supported datastore size. The theoretical size is much larger than 64TB but requires the NAS vendor to support it.
• Fibre Channel: 64TB
• FCoE: 64TB

Maximum Number of Devices

• iSCSI: 256
• NFS: Default: 8; maximum: 256
• Fibre Channel: 256
• FCoE: 256

Protocol Direct to Virtual Machine

• iSCSI: Yes, via an in-guest iSCSI initiator.
• NFS: Yes, via an in-guest NFS client.
• Fibre Channel: No, but FC devices can be mapped directly to the virtual machine with NPIV. This still requires a prior RDM mapping to the virtual machine, and the hardware (FC switch, HBA) must support NPIV.
• FCoE: No

Storage vMotion Support

• iSCSI: Yes
• NFS: Yes
• Fibre Channel: Yes
• FCoE: Yes



Storage DRS Support

• iSCSI: Yes
• NFS: Yes
• Fibre Channel: Yes
• FCoE: Yes

Storage I/O Control Support

• iSCSI: Yes, since vSphere 4.1
• NFS: Yes, since vSphere 5.0
• Fibre Channel: Yes, since vSphere 4.1
• FCoE: Yes, since vSphere 4.1

Virtualized MSCS Support

• iSCSI: No. VMware does not support MSCS nodes built on virtual machines residing on iSCSI storage. However, the use of software iSCSI initiators within guest operating systems configured with MSCS, in any configuration supported by Microsoft, is transparent to ESXi hosts. There is no need for explicit support statements from VMware.

• NFS: No. VMware does not support MSCS nodes built on virtual machines residing on NFS storage.

• Fibre Channel: Yes. VMware supports MSCS nodes built on virtual machines residing on FC storage.

• FCoE: No. VMware does not support MSCS nodes built on virtual machines residing on FCoE storage.

Ease of Configuration

• iSCSI: Medium – Setting up the iSCSI initiator requires the FQDN or IP address of the target, plus some configuration for initiator maps and LUN presentation on the array side. After the target has been discovered through a scan of the SAN, LUNs are available for datastores or RDMs.

• NFS: Easy – This requires only the IP or FQDN of the target, plus the mount point. Datastores appear immediately after the host has been granted access from the NFS array/server side.

• Fibre Channel: Difficult – This involves zoning at the FC switch level and LUN masking at the array level after the zoning is complete. It is more complex to configure than IP storage. After the target has been discovered through a scan of the SAN, LUNs are available for datastores or RDMs.

• FCoE: Difficult – This involves zoning at the FCoE switch level and LUN masking at the array level after the zoning is complete. It is more complex to configure than IP storage. After the target has been discovered through a scan of the SAN, LUNs are available for datastores or RDMs.
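As an example of the "Easy" NFS case, the sketch below mounts an NFS export as a datastore through the vSphere API. It assumes the pyVmomi Python SDK and uses hypothetical host names and credentials; property and method names should be verified against the vSphere API reference for your release.

```python
# Rough sketch of the "Easy" NFS case using the vSphere API via the pyVmomi SDK
# (an assumption of this example; the paper itself does not prescribe a tool).
# Only the NFS server address, the export path and a datastore name are needed.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")  # hypothetical credentials
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
esxi_host = view.view[0]                                  # pick the first ESXi host

spec = vim.host.NasVolume.Specification(
    remoteHost="nfs-array.example.com",                   # IP or FQDN of the NFS server (hypothetical)
    remotePath="/exports/datastore01",                    # exported mount point (hypothetical)
    localPath="nfs-datastore01",                          # datastore name as seen by vSphere
    accessMode="readWrite",
)
esxi_host.configManager.datastoreSystem.CreateNasDatastore(spec)
```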



Advantages

• iSCSI: No additional hardware is necessary. It can use existing networking hardware components and the iSCSI driver from VMware, so it is inexpensive to implement. It is a well-known and well-understood protocol and is quite mature at this stage. Administrators with network skills should be able to implement it, and generic network tools such as Wireshark can be used for troubleshooting.

• NFS: No additional hardware is necessary. It can use existing networking hardware components, so it is inexpensive to implement. It is a well-known and well-understood protocol and is also very mature. Administrators with network skills should be able to implement it, and generic network tools such as Wireshark can be used for troubleshooting.

• Fibre Channel: A well-known and well-understood protocol. Very mature and trusted, and found in the majority of mission-critical environments.

• FCoE: Enables consolidation of storage and other traffic onto the same network via a converged network adapter (CNA). Using the Data Center Bridging Exchange (DCBX) protocol, FCoE has been made lossless even though it runs over Ethernet. DCBX does other things as well, such as enabling different traffic classes to run on the same network, but that is beyond the scope of this discussion.

Disadvantages

• iSCSI: Inability to route with iSCSI binding implemented. Possible security issues, because there is no built-in encryption, so care must be taken to isolate traffic (e.g., with VLANs). Software iSCSI can cause additional CPU overhead on the ESX host. TCP can introduce latency for iSCSI.

• NFS: Because there is only a single session per connection, configuring for maximum bandwidth across multiple paths requires some care and attention. There is no PSA multipathing. The same security concerns apply as with iSCSI, because everything is transferred in clear text, so care must be taken to isolate traffic (e.g., with VLANs). The VMware implementation is still NFS version 3, which does not have the multipathing or security features of NFS v4 or NFS v4.1. NFS can cause additional CPU overhead on the ESX host. TCP can introduce latency for NFS.

• Fibre Channel: Still runs only at 8Gb, which is slower than other networks (16Gb is throttled to run at 8Gb in vSphere 5.0). Requires a dedicated HBA, FC switch and FC-capable storage array, which makes an FC implementation somewhat more expensive. Additional management overhead (e.g., switch zoning) is required. Might prove harder to troubleshoot than other protocols.

• FCoE: Somewhat new and currently not quite as mature as other protocols. Requires a 10Gb lossless network infrastructure, which can be expensive. Cannot route between initiator and targets using native IP routing; instead, it must use protocols such as FIP (FCoE Initialization Protocol). Might prove complex to troubleshoot and isolate issues, with network and storage traffic using the same pipe.


Conclusion

The objective of this white paper is to provide information on storage protocols and how they interoperate with VMware vSphere and related features. Not all supported storage protocols are discussed. Some notable exceptions are ATA over Ethernet (AoE) and shared/switched SAS. However, the protocols that are included in this paper are the ones that VMware is most frequently asked to compare.

As mentioned in the introduction, we have intentionally avoided comparing the performance of each of the protocols. The following VMware white papers already have examined the performance of the protocols from a vSphere perspective:

• Achieving a Million I/O Operations per Second from a Single VMware vSphere 5.0 Host
http://www.vmware.com/files/pdf/1M-iops-perf-vsphere5.pdf

• Comparison of Storage Protocol Performance in VMware vSphere 4
http://www.vmware.com/files/pdf/perf_vsphere_storage_protocols.pdf

• Comparison of Storage Protocol Performance
http://www.vmware.com/files/pdf/storage_protocol_perf.pdf

In addition, ease of configuration is highly subjective. In this paper, the author simply shares his own experiences in configuring these protocols from a vSphere perspective.

About the Author

Cormac Hogan is a senior technical marketing manager with the Cloud Infrastructure Product Marketing group at VMware. He is responsible for storage in general, with a focus on core VMware vSphere storage technologies and virtual storage, including the VMware vSphere® Storage Appliance. He was one of the first VMware employees at the EMEA headquarters in Cork, Ireland, in April 2005. He spent a number of years as the technical support escalation engineer for storage before moving into a support readiness training role, where he developed training materials and delivered training to technical support and VMware support partners. He has been in technical marketing since 2011.

• Follow Cormac's blogs at http://blogs.vmware.com/vsphere/storage.

• Follow Cormac on Twitter: @VMwareStorage.

VMware, Inc. 3401 Hillview Avenue, Palo Alto, CA 94304 USA. Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

Copyright © 2012 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW-WP-STR-PROTOCOL-A4-101