Converged Networks with Fibre Channel Over Ethernet and Data Center Bridging

Technology brief, 3rd edition

Contents

• Introduction
• Traditional data center topology
• Early attempts at converged networks
• 10 Gigabit Ethernet
• Network convergence with FCoE
  • Data Center Bridging
  • Fibre Channel over Ethernet
• Industry transition to converged fabrics
• Practical strategies for moving to FCoE
• For more information

Introduction

Using separate, single-purpose networks for data, management, and storage can be more complex and costly than an IT organization or infrastructure deployment requires. Network convergence is a more economical solution: it simplifies data center infrastructure by consolidating block-based storage and traditional IP-based data communications onto a single converged Ethernet network. Network convergence promises to reduce the cost of qualifying, buying, powering, cooling, provisioning, maintaining, and managing network-related equipment. The challenge is determining the best adoption strategy for your business.

This technology brief discusses these aspects of converged infrastructure:

• Current data center topology
• Limitations of previous attempts to create converged networks
• Fibre Channel over Ethernet (FCoE) technology
• How converged network topologies and converged network adapters (CNAs) work together to tie multiple networks into a single, converged infrastructure

Traditional data center topology

Traditional data center designs include separate, heterogeneous network devices for different types of data. Many data centers support three or more types of networks that serve these purposes:

• Block storage data management
• Remote management
• Business-centric data communications

Each network and device adds complexity, cost, and management overhead. Converged networks simplify typical topologies by reducing the number of physical components, which leads to simpler management and improvements in quality of service (QoS).

Early attempts at converged networks

There have been many attempts to create converged networks over the past decade. Fibre Channel Protocol (FCP) is a lightweight mapping of SCSI onto the Fibre Channel (FC) layers 1 and 2 transport protocol (Figure 1, yellow shaded oval). Fibre Channel carries not only FCP traffic but also IP traffic, creating a converged network.
The cost of FC and the acceptance of Ethernet as the standard for LAN communications prevented widespread FC use outside of data center SANs for enterprise businesses.

InfiniBand (IB) technology provides a converged network capability by transporting inter-processor communication, LAN, and storage protocols. The two most common storage protocols for IB are SCSI Remote Direct Memory Access Protocol (SRP) and iSCSI Extensions for RDMA (iSER). Both use the RDMA capabilities of IB: SRP builds a direct SCSI-to-RDMA mapping layer and protocol, and iSER copies data directly to the SCSI I/O buffers without intermediate data copies (Figure 1, green shaded oval). These protocols are lightweight but not as streamlined as FC. Widespread deployment was impractical because of the perceived high cost of IB and the complex gateways and routers needed to translate from these IB-centric protocols and networks to native FC storage devices. High Performance Computing environments that have adopted IB as the standard transport network use the SRP and iSER protocols.

Figure 1: The various attempts at converged infrastructure produced multiple protocol stacks. [The figure compares the Fibre Channel, InfiniBand, and FCoE/DCB protocol stacks.]

Internet SCSI (iSCSI) was an attempt to bring a direct SCSI-to-TCP/IP mapping layer and protocol to the mass Ethernet market. Proponents of iSCSI wanted to drive down cost and to deploy SANs over existing Ethernet LAN infrastructure. iSCSI technology (Figure 1, blue shaded oval) was very appealing to the small and medium business market because of its low-cost software initiators and its ability to use any existing Ethernet LAN. However, iSCSI typically requires new iSCSI storage devices that lack the features of devices using FC interfaces. Also, iSCSI-to-FC gateways and routers are complex and expensive, and they do not scale cost-effectively for the enterprise. Most enterprise businesses have avoided iSCSI or have used it for lower-tier storage applications or for departmental use.

FC over IP (FCIP) and Internet FC Protocol (iFCP) map FCP and FC characteristics to LANs, MANs, and WANs. Both protocols place FC framing on top of the TCP/IP protocol stack (Figure 1, red shaded oval). FCIP is a SAN-extension protocol that bridges FC SANs across large geographical areas; it is not for host-server or target-storage attachment. The iFCP protocol lets Ethernet-based hosts attach to FC SANs through iFCP-to-FC SAN gateways. Because of their complexity, lack of scalability, and cost, these gateways and protocols were not widely adopted except for SAN extension.

10 Gigabit Ethernet

One obstacle to using Ethernet for converged networks has been its limited bandwidth. As 10 Gigabit Ethernet (10 GbE) technology becomes more widely used, 10 GbE network components will fulfill the combined data and storage communication needs of many applications. As Ethernet bandwidth increases, fewer physical links can carry more data (Figure 2).

Figure 2: Multiple traffic types can share the same link using a multifunction adapter.

Network convergence with FCoE

Now that 10 GbE is becoming more widespread, FCoE is the next attempt to converge block storage protocols onto Ethernet. FCoE takes advantage of 10 GbE performance and of compatibility with existing Fibre Channel protocols. It relies on an Ethernet infrastructure that uses the IEEE Data Center Bridging (DCB) standards. The DCB standards can apply to any IEEE 802 network, but the term DCB most often refers to enhanced Ethernet. In this brief, we use DCB to mean an enhanced Ethernet infrastructure that implements at least the minimum set of DCB standards needed to carry FCoE protocols.

Data Center Bridging

An informal consortium of network vendors originally defined a set of enhancements to Ethernet to provide better traffic management and lossless operation. The consortium's proposals have become standards from the Data Center Bridging (DCB) task group within the IEEE 802.1 Working Group. The DCB standards define four new technologies:

• Priority-based Flow Control (PFC), 802.1Qbb, allows the network to pause individual traffic classes (see the frame sketch after this list).
• Enhanced Transmission Selection (ETS), 802.1Qaz, defines the scheduling behavior of multiple traffic classes, including strict priority and minimum guaranteed bandwidth capabilities. This enables fair sharing of the link, better performance, and metering (a simplified model of this sharing appears at the end of this brief).
• Quantized Congestion Notification (QCN), 802.1Qau, supports end-to-end flow control in a switched LAN infrastructure and helps eliminate sustained, heavy congestion in an Ethernet fabric. Before the network can use QCN, you must implement QCN in every component in the DCB data path (CNAs, switches, and so on). QCN networks must also use PFC to avoid dropping packets and to ensure a lossless environment.
• Data Center Bridging Exchange Protocol (DCBX), 802.1Qaz, supports discovery and configuration of network devices that support PFC, ETS, and QCN.
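To make PFC concrete, here is a minimal sketch in Python that builds an 802.1Qbb per-priority pause frame. The MAC Control EtherType (0x8808), the PFC opcode (0x0101), and the reserved multicast destination address come from the IEEE 802.3/802.1Qbb standards; the build_pfc_frame helper and the example values are our own illustration, not part of this brief.

    import struct

    MAC_CONTROL_ETHERTYPE = 0x8808              # IEEE 802.3 MAC Control
    PFC_OPCODE = 0x0101                         # 802.1Qbb per-priority PAUSE
    PFC_DEST = bytes.fromhex("0180c2000001")    # reserved control multicast address

    def build_pfc_frame(src_mac, pause_quanta):
        """Build a PFC frame. pause_quanta maps priority (0-7) to a pause
        time expressed in 512-bit-time quanta (0xFFFF = maximum)."""
        enable_vector = 0
        times = [0] * 8
        for prio, quanta in pause_quanta.items():
            enable_vector |= 1 << prio          # mark this priority as paused
            times[prio] = quanta
        payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
        frame = PFC_DEST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
        return frame.ljust(60, b"\x00")         # pad to minimum size (FCS added by NIC)

    # Pause only priority 3 (a common choice for the FCoE class) at maximum time:
    frame = build_pfc_frame(bytes(6), {3: 0xFFFF})

Unlike the original 802.3x PAUSE, which stops all traffic on a link, the per-priority enable vector lets a switch pause only the storage traffic class while LAN traffic keeps flowing.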
Fibre Channel over Ethernet

Legacy Ethernet networks drop frames during collisions or congestion and rely on upper-layer protocols such as TCP to provide end-to-end data recovery. FCoE is a lightweight encapsulation protocol that lacks the reliable data transport of the TCP layer. Therefore, FCoE must operate on DCB-enabled Ethernet and use lossless traffic classes to prevent Ethernet frame loss under congested network conditions.

FCoE on a DCB network mimics the lightweight nature of native FC protocols and media. It does not incorporate TCP or even IP, which makes FCoE a layer 2 (non-routable) protocol, just like FC, suited only for short-haul communication within a data center. The main advantage of FCoE is that switch vendors can easily implement the logic for converting FCoE on a DCB network (FCoE/DCB) to native FC in high-performance switch silicon. FCoE encapsulates FC frames inside Ethernet frames (Figure 3).

Figure 3: The FCoE protocol embeds FC frames within Ethernet frames.

The traditional data center model uses multiple HBAs and NICs in each server to communicate with various networks.
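To make Figure 3 concrete, here is a minimal sketch, in Python, of the FC-BB-5 frame layout: an Ethernet header with the FCoE EtherType (0x8906), a 14-byte FCoE header carrying a version field and an encoded start-of-frame (SOF) delimiter, the complete FC frame with its own CRC, and an end-of-frame (EOF) trailer. The EtherType is the assigned value; the specific SOF/EOF code points and the fcoe_encapsulate helper are illustrative assumptions.

    import struct

    FCOE_ETHERTYPE = 0x8906     # EtherType assigned to FCoE
    SOF_I3 = 0x2E               # encoded SOFi3 delimiter (assumed code point)
    EOF_T = 0x42                # encoded EOFt delimiter (assumed code point)

    def fcoe_encapsulate(dst_mac, src_mac, fc_frame):
        """Wrap one complete FC frame (FC header + payload + FC CRC) in an
        FCoE Ethernet frame. The Ethernet FCS is appended by the NIC."""
        # 14-byte FCoE header: 4-bit version (0) plus reserved bits, then SOF.
        fcoe_header = bytes(13) + bytes([SOF_I3])
        # EOF delimiter followed by reserved padding bytes.
        trailer = bytes([EOF_T]) + bytes(3)
        return (dst_mac + src_mac
                + struct.pack("!H", FCOE_ETHERTYPE)
                + fcoe_header + fc_frame + trailer)

Because the FC frame travels intact inside the Ethernet payload, an FCoE/DCB switch can strip the wrapper and hand the inner frame to a native FC SAN without altering the FC CRC.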

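When those separate adapters collapse onto a single converged link, ETS (802.1Qaz, described above) determines how the traffic classes share bandwidth. The toy model below captures only ETS's minimum-guarantee semantics, in which a class receives at least its configured share and may borrow whatever other classes leave idle; it is not the 802.1Qaz transmission-selection algorithm itself, and the class names and numbers are invented for illustration.

    def ets_share(link_gbps, guarantees_pct, offered_gbps):
        """Toy ETS model: each class is guaranteed its configured percentage
        of the link; bandwidth a class leaves idle is redistributed, in
        proportion to the configured guarantees, to classes with demand."""
        alloc = {c: min(offered_gbps[c], guarantees_pct[c] / 100 * link_gbps)
                 for c in guarantees_pct}
        spare = link_gbps - sum(alloc.values())
        while spare > 1e-9:
            hungry = [c for c in alloc if offered_gbps[c] - alloc[c] > 1e-9]
            if not hungry:
                break                    # all demand satisfied; link stays idle
            weight = sum(guarantees_pct[c] for c in hungry)
            given = 0.0
            for c in hungry:
                extra = min(spare * guarantees_pct[c] / weight,
                            offered_gbps[c] - alloc[c])
                alloc[c] += extra
                given += extra
            spare -= given
        return alloc

    # A 10 GbE link split 60/40 between LAN and FCoE classes: FCoE offers
    # only 2 Gb/s, so the LAN class expands into the idle FCoE bandwidth.
    print(ets_share(10.0, {"lan": 60, "fcoe": 40}, {"lan": 9.0, "fcoe": 2.0}))
    # -> {'lan': 8.0, 'fcoe': 2.0}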