Converged networks with Fibre Channel over Ethernet and Data Center Bridging

Technology brief, 3rd edition

Contents
• Introduction
• Traditional data center topology
• Early attempts at converged networks
• 10 Gigabit Ethernet
• Network convergence with FCoE
• Data Center Bridging
• Fibre Channel over Ethernet
• Industry transition to converged fabrics
• Practical strategies for moving to FCoE
• For more information

Introduction

Using separate, single-purpose networks for data, management, and storage can be more complex and costly than an IT organization or infrastructure deployment requires. Network convergence is a more economical approach: it simplifies data center infrastructure by consolidating block-based storage traffic and traditional IP-based data communications onto a single converged Ethernet network. Network convergence promises to reduce the cost of qualifying, buying, powering, cooling, provisioning, maintaining, and managing network-related equipment. The challenge is determining the best adoption strategy for your business. This technology brief discusses these aspects of converged infrastructure:
• Current data center topology
• Limitations of previous attempts to create converged networks
• Fibre Channel over Ethernet (FCoE) technology
• How converged network topologies and converged network adapters (CNAs) work together to tie multiple networks into a single, converged infrastructure

Traditional data center topology

Traditional data center designs include separate, heterogeneous network devices for different types of data. Many data centers support three or more types of networks that serve these purposes:
• Block storage data management
• Remote management
• Business-centric data communications

Each network and device adds to the complexity, cost, and management overhead. Converged networks can simplify typical topologies by reducing the number of physical components. This convergence leads to simplified management and improvements in quality of service (QoS).

Early attempts at converged networks

There have been many attempts to create converged networks over the past decade.

Fibre Channel Protocol (FCP) is a lightweight mapping of SCSI onto the Fibre Channel (FC) layer 1 and layer 2 transport (Figure 1, yellow shaded oval). Fibre Channel can carry not only FCP traffic but also IP traffic, creating a converged network. However, the cost of FC and the acceptance of Ethernet as the standard for LAN communications kept FC from spreading beyond data center SANs in enterprise businesses.

InfiniBand (IB) technology provides a converged network capability by transporting inter-processor communication, LAN, and storage protocols. The two most common storage protocols for IB are the SCSI Remote Direct Memory Access Protocol (SRP) and iSCSI Extensions for RDMA (iSER). Both use the RDMA capabilities of IB: SRP builds a direct SCSI-to-RDMA mapping layer and protocol, and iSER copies data directly to the SCSI I/O buffers without intermediate data copies (Figure 1, green shaded oval). These protocols are lightweight, but not as streamlined as FC. Widespread deployment proved impractical because of the perceived high cost of IB and the complex gateways and routers needed to translate from these IB-centric protocols and networks to native FC storage devices. Today, SRP and iSER are used mainly in High Performance Computing environments that have adopted IB as their standard transport network.


Figure 1: The various attempts at converged infrastructure produced multiple protocol stacks (Fibre Channel, InfiniBand, and FCoE/DCB shown).

Internet SCSI (iSCSI) was an attempt to bring a direct SCSI-to-TCP/IP mapping layer and protocol to the mass Ethernet market. Proponents of iSCSI wanted to drive down cost and deploy SANs over existing Ethernet LAN infrastructure. iSCSI technology (Figure 1, blue shaded oval) was very appealing to the small and medium business market because of its low-cost software initiators and its ability to use any existing Ethernet LAN. However, iSCSI typically requires new iSCSI storage devices that lack the features of devices using FC interfaces. Also, iSCSI-to-FC gateways and routers are complex and expensive, and they do not scale cost-effectively for the enterprise. Most enterprise businesses have avoided iSCSI or have used it for lower-tier storage applications or for departmental use.

FC over IP (FCIP) and the Internet FC Protocol (iFCP) map FCP and FC characteristics to LANs, MANs, and WANs. Both protocols layer FC framing on top of the TCP/IP protocol stack (Figure 1, red shaded oval). FCIP is a SAN extension protocol that bridges FC SANs across large geographical areas; it is not for host-server or target-storage attachment. The iFCP protocol lets Ethernet-based hosts attach to FC SANs through iFCP-to-FC SAN gateways. These gateways and protocols were not widely adopted, except for SAN extension, because of their complexity, lack of scalability, and cost.
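To make the layering comparison in Figure 1 concrete, the following sketch records each stack as a simple data structure, listed from the SCSI layer down to the physical transport. It is only an illustrative summary of the protocol stacks described above, not a definition from any standard.

```python
# Illustrative summary of the protocol stacks compared in Figure 1,
# listed from the SCSI layer down to the underlying transport.
PROTOCOL_STACKS = {
    "Fibre Channel": ["SCSI", "FCP", "FC (layers 1-2)"],
    "InfiniBand":    ["SCSI", "SRP or iSER (RDMA)", "InfiniBand"],
    "iSCSI":         ["SCSI", "iSCSI", "TCP/IP", "Ethernet"],
    "FCIP / iFCP":   ["SCSI", "FCP", "FCIP or iFCP", "TCP/IP", "Ethernet"],
    "FCoE/DCB":      ["SCSI", "FCP", "FCoE", "DCB-enabled Ethernet"],
}

if __name__ == "__main__":
    for name, layers in PROTOCOL_STACKS.items():
        print(f"{name:14s} {' -> '.join(layers)}")
```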

10 Gigabit Ethernet

One obstacle to using Ethernet for converged networks has been its limited bandwidth. As 10 Gigabit Ethernet (10 GbE) technology becomes more widely used, 10 GbE network components will fulfill the combined data and storage communication needs of many applications. As Ethernet bandwidth increases, fewer physical links can carry more data (Figure 2).


Figure 2: Multiple traffic types can share the same link using a multifunction adapter.
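As a rough illustration of the consolidation argument, the sketch below adds up the ports and bandwidth of a server attached to separate networks and compares them with a pair of 10 GbE converged links. The adapter counts and link speeds are assumptions chosen only to show the arithmetic, not a reference configuration.

```python
# Hypothetical per-server link inventory: (name, port count, Gb/s per port).
# Counts and speeds are illustrative assumptions, not a reference design.
traditional_links = [
    ("1 GbE NIC (LAN)",        4, 1.0),
    ("1 GbE NIC (management)", 1, 1.0),
    ("4 Gb FC HBA (SAN)",      2, 4.0),
]

converged_links = [
    ("10 GbE converged (CNA)", 2, 10.0),
]

def totals(links):
    """Return (total ports, aggregate bandwidth in Gb/s) for a link list."""
    ports = sum(count for _, count, _ in links)
    bandwidth = sum(count * speed for _, count, speed in links)
    return ports, bandwidth

trad_ports, trad_bw = totals(traditional_links)
conv_ports, conv_bw = totals(converged_links)

print(f"Traditional: {trad_ports} ports, {trad_bw:.0f} Gb/s aggregate")
print(f"Converged:   {conv_ports} ports, {conv_bw:.0f} Gb/s aggregate")
# Traditional: 7 ports, 13 Gb/s aggregate
# Converged:   2 ports, 20 Gb/s aggregate
```

In this hypothetical case, two converged links replace seven single-purpose ports while offering more aggregate bandwidth, which is the essence of the consolidation shown in Figure 2.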

Network convergence with FCoE

Now that 10 GbE is becoming more widespread, FCoE is the next attempt to converge block storage protocols onto Ethernet. FCoE takes advantage of 10 GbE performance while remaining compatible with existing Fibre Channel protocols. It relies on an Ethernet infrastructure that implements the IEEE Data Center Bridging (DCB) standards. The DCB standards can apply to any IEEE 802 network, but the term DCB most often refers to enhanced Ethernet. In this brief, we use the term DCB to refer to an enhanced Ethernet infrastructure that implements at least the minimum set of DCB standards needed to carry FCoE protocols.

Data Center Bridging

An informal consortium of network vendors originally defined a set of Ethernet enhancements to provide better traffic management and lossless operation. The consortium's proposals have become standards from the Data Center Bridging (DCB) task group within the IEEE 802.1 Working Group. The DCB standards define four new technologies:
• Priority-based Flow Control (PFC), IEEE 802.1Qbb, allows the network to pause individual traffic classes.
• Enhanced Transmission Selection (ETS), IEEE 802.1Qaz, defines the scheduling behavior of multiple traffic classes, including strict priority and minimum guaranteed bandwidth. This should enable fair sharing of the link, better performance, and metering (see the sketch after this list).
• Quantized Congestion Notification (QCN), IEEE 802.1Qau, supports end-to-end flow control in a switched LAN infrastructure and helps eliminate sustained, heavy congestion in an Ethernet fabric. Before the network can use QCN, every component in the DCB data path (CNAs, switches, and so on) must implement it. QCN networks must also use PFC to avoid dropping packets and to ensure a lossless environment.
• Data Center Bridging Exchange Protocol (DCBX), IEEE 802.1Qaz, supports discovery and configuration of network devices that support PFC, ETS, and QCN.
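The sketch below illustrates only the minimum-guaranteed-bandwidth behavior that ETS describes: each traffic class receives at least its guaranteed share of a 10 GbE link, and capacity left idle by quiet classes can be handed to busy ones. The class names, shares, offered loads, and single-pass redistribution rule are illustrative assumptions, not values or algorithms taken from IEEE 802.1Qaz.

```python
# Minimal sketch of ETS-style minimum-guaranteed bandwidth sharing on one
# 10 GbE link. Class names, shares, and offered loads are assumptions.
LINK_GBPS = 10.0

# Traffic class -> (guaranteed share of the link, offered load in Gb/s)
classes = {
    "LAN":        (0.40, 6.0),   # bursty LAN traffic wants more than its guarantee
    "FCoE (SAN)": (0.40, 3.0),   # storage traffic running below its guarantee
    "Management": (0.20, 0.5),   # mostly idle
}

def ets_allocate(classes, link_gbps):
    """Give each class min(guarantee, demand), then hand leftover capacity
    to classes with unmet demand (single pass; a real scheduler iterates)."""
    alloc = {name: min(share * link_gbps, demand)
             for name, (share, demand) in classes.items()}
    leftover = link_gbps - sum(alloc.values())
    unmet = {name: demand - alloc[name]
             for name, (_, demand) in classes.items() if demand > alloc[name]}
    total_unmet = sum(unmet.values())
    for name, gap in unmet.items():
        alloc[name] += min(gap, leftover * gap / total_unmet)
    return alloc

for name, gbps in ets_allocate(classes, LINK_GBPS).items():
    print(f"{name:12s} {gbps:.1f} Gb/s")
# LAN          6.0 Gb/s
# FCoE (SAN)   3.0 Gb/s
# Management   0.5 Gb/s
```

In this example the LAN class temporarily uses bandwidth the storage and management classes are not consuming, yet every class keeps its guaranteed minimum if it needs it.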

Fibre Channel over Ethernet

In legacy Ethernet networks, frames are dropped under collision or congestion conditions. These networks rely on upper-layer protocols such as TCP to provide end-to-end data recovery. FCoE is a lightweight encapsulation protocol that lacks the reliable data transport of the TCP layer. Therefore, FCoE must operate on DCB-enabled Ethernet and use lossless traffic classes to prevent frame loss under congested network conditions.

FCoE on a DCB network mimics the lightweight nature of native FC protocols and media. It does not incorporate the TCP or even the IP protocols, which means that FCoE is a layer 2 (non-routable) protocol, just like FC, and is intended only for short-haul communication within a data center. The main advantage of FCoE is that switch vendors can easily implement the logic for converting FCoE on a DCB network (FCoE/DCB) to native FC in high-performance switch silicon. FCoE encapsulates FC frames inside Ethernet frames (Figure 3).

Figure 3: The FCoE protocol embeds FC frames within Ethernet frames.
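The following sketch assembles a simplified FCoE frame in the spirit of Figure 3: an Ethernet header carrying the FCoE EtherType (0x8906), a small encapsulation header, the embedded FC frame, and a trailing end-of-frame marker. The MAC addresses, marker values, and field layout are simplified placeholders for illustration, not a faithful FC-BB-5 implementation.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE traffic

def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Build a simplified FCoE frame: Ethernet header + FCoE encapsulation
    header + embedded FC frame + end-of-frame trailer. The header and trailer
    below are simplified placeholders (version byte, reserved padding, and
    start/end-of-frame markers); the Ethernet FCS is omitted because the
    adapter normally appends it."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes([0x00]) + bytes(12) + bytes([0x2E])  # version, reserved, SOF marker
    eof_trailer = bytes([0x41]) + bytes(3)                   # EOF marker, reserved
    return eth_header + fcoe_header + fc_frame + eof_trailer

# Usage with hypothetical MAC addresses and a dummy FC frame payload:
frame = build_fcoe_frame(
    dst_mac=bytes.fromhex("0efc00010203"),  # hypothetical address for illustration
    src_mac=bytes.fromhex("0efc00a0b0c0"),
    fc_frame=b"\x00" * 36,                  # placeholder for an encapsulated FC frame
)
print(f"FCoE frame length: {len(frame)} bytes, EtherType 0x{FCOE_ETHERTYPE:04x}")
```

The point of the sketch is the layering itself: the FC frame travels untouched inside the Ethernet payload, which is why an FCF can strip the Ethernet wrapper and forward the native FC frame into the SAN.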

The traditional data center model uses multiple HBAs and NICs in each server to communicate with various networks. In a converged network, CNAs in servers can handle both FC and traditional LAN-based communication traffic (Figure 4). That significantly reduces the amount of NIC, HBA, and cable infrastructure.

Figure 4: In converged network architecture, CNAs support both FC and traditional LAN communication traffic.

In a single-hop FCoE/DCB network architecture, a gateway device known as a Fibre Channel Forwarder (FCF) passes encapsulated FC frames between a server's CNA and the Fibre Channel SANs where the FC storage targets are connected. An FCF is typically an Ethernet switch with DCB, legacy Ethernet, and legacy FC ports. Examples of FCFs include HP Virtual Connect FlexFabric modules and HP Networking 5820X top-of-rack access switches with FC option modules.

FCoE has several advantages:
• Uses existing OS device drivers (because the same vendors make the devices used on CNAs and native FC HBAs, they share a common FC/FCoE driver architecture)
• Uses the existing Fibre Channel security and management model
• Makes storage targets that are provisioned and managed on a native FC SAN transparently accessible through an FCoE FCF

However, there are also some challenges with FCoE:
• Must be deployed on a DCB-enabled Ethernet network
• Requires CNAs and new DCB-enabled Ethernet switches between the servers and the FCFs (to accommodate DCB)
• Is a non-routable protocol used only within the data center (the same is true of native FC protocols today)
• Requires an FCF device to connect the DCB network to the legacy FC SANs and storage
• Requires validating a new fabric infrastructure that converges LAN communications and FC traffic over DCB-enabled Ethernet. Validating the network ensures that you have applied traffic class parameters that meet your IT organization's business objectives and service level agreements (a minimal validation sketch follows this list).
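As a starting point for that validation work, the sketch below checks a hypothetical per-port DCB configuration: the priority carrying FCoE must have PFC enabled, and the ETS bandwidth shares must add up to 100 percent. The configuration structure and field names are assumptions made for illustration; real switches expose these settings through their own DCBX and management interfaces.

```python
# Hypothetical DCB port configuration; the structure and field names are
# illustrative, not a real switch or DCBX data model.
port_config = {
    "fcoe_priority": 3,
    "pfc_enabled_priorities": {3},
    "ets_bandwidth_percent": {"lan": 50, "fcoe": 40, "management": 10},
}

def validate_dcb_config(cfg):
    """Return a list of problems found in a simplified DCB port configuration."""
    problems = []
    if cfg["fcoe_priority"] not in cfg["pfc_enabled_priorities"]:
        problems.append("PFC is not enabled on the FCoE priority; "
                        "FCoE traffic would not be lossless.")
    total = sum(cfg["ets_bandwidth_percent"].values())
    if total != 100:
        problems.append(f"ETS bandwidth shares sum to {total}%, expected 100%.")
    if cfg["ets_bandwidth_percent"].get("fcoe", 0) == 0:
        problems.append("No ETS bandwidth is guaranteed to the FCoE traffic class.")
    return problems

issues = validate_dcb_config(port_config)
print("Configuration OK" if not issues else "\n".join(issues))
```

Checks like these are easy to extend with site-specific rules, such as the minimum bandwidth share your service level agreements require for storage traffic.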

Industry transition to converged fabrics

In a one-hop architecture, converged traffic goes from a server to a switch that splits it into Ethernet and Fibre Channel. In a two-hop architecture, converged traffic passes through a second switch before the split. The more switch hops there are in a DCB-enabled network, the more difficult it is to keep the network operating at peak efficiency while minimizing congestion. Figure 5 shows the expected industry path to convergence.

Figure 5: The expected industry path to convergence leads to multi-hop architectures in 2013.


In the first phase of migration to converged fabrics, CNAs will connect to converged fabric access switches that support DCB-enabled Ethernet, legacy Ethernet, and legacy FC. The CNAs will provide converged connectivity between servers and the first-hop switch before splitting the traffic to the legacy LAN and SAN infrastructure. Figure 6 compares traditional deployment to the first phase of converged network deployment.

Figure 6: In phase 1 of the migration to converged fabrics, CNAs connect to converged fabric access switches.

Figure 7 shows how the next phases of deployment may occur as you update existing data centers or build new ones. Eventually a server will require only a single pair of redundant CNAs (or a single dual-port CNA). Converged network switches will replace separate FC, 10 GbE, and IB switches.


Figure 7: Phases 2 and 3 of converged network deployment reduce the required hardware.

Practical strategies for moving to FCoE

You can make the transition to FCoE gracefully, with little disruption to existing network infrastructure: deploy FCoE first at the server-to-network edge and migrate it further into the network (aggregation/core network layers and storage devices) over time. You can also start by implementing FCoE only on the servers that require access to FC SAN targets. In general, more of a data center's servers use only LAN attach rather than both LAN and SAN attach, so use CNAs only with the servers that actually benefit from them. Don't needlessly change the entire infrastructure.

ProLiant c-Class BladeSystem G7 and later blade servers come with HP FlexFabric adapters (HP CNAs) as the standard LAN-on-Motherboard (LOM) devices. By offering CNAs as standard on blade servers, we lead the way to very cost-effective adoption of FCoE technology. You can use HP VC FlexFabric modules now to get a converged networking solution with FCoE and 10 GbE. VC FlexFabric modules eliminate up to 95% of network sprawl at the server edge: one device converges traffic inside enclosures and connects directly to LANs and SANs.

Transitioning the server-to-network edge first to accommodate FCoE/DCB maintains the existing network architecture, management roles, and the existing SAN and LAN topologies. Updating the server-to-network edge offers the greatest benefit and simplification without disrupting the data center architecture.


For more information

HP Multifunction Networking Products: http://h18004.www1.hp.com/products/servers/proliant-advantage/networking.html

HP ProLiant networking Ethernet network adapters: http://h18004.www1.hp.com/products/servers/networking/index-nic.html

"HP FlexFabric and Flex-10 technology" technology brief: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01608922/c01608922.pdf

HP Virtual Connect Technology web page: www.hp.com/go/virtualconnect

Send comments about this paper to [email protected]

Follow us on Twitter: http://twitter.com/ISSGeekatHP

© Copyright 2010, 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

TC0000757, October 2011