Future of Ethernet in Data Centers
Gopal Hegde, Cisco
Pramod Srivatsa, Cisco
www.openfabrics.org

Agenda
- Converged Fabric Overview – Gopal / Pramod
- IEEE Data Center Bridging Update – Manoj Wadekar / Ilango Ganga
- UD Extensions for iWARP Verbs – Terry Hulett

Snapshot – March 2009: Server / Access
- 10GbE NIC/CNA vendors: Broadcom, Chelsio, Emulex, Intel, Mellanox, Myricom, QLogic, Neterion, NetXen, ServerEngines, Sun, and others
- 10GbE switch vendors: Arista, BNT, Cisco, Extreme, Force10, Foundry, Fujitsu, HP, Juniper, Woven, and others
- Tier-1 server vendors are introducing 10GbE landed on the motherboard
- 40GbE / 100GbE is in the IEEE standardization process (IEEE 802.3ba, in Working Group Ballot; on target for standardization in June 2010)

Behind the 10GbE Traction at the Server / Access
- Performance: multi-core systems, PCIe Gen2, DDR3 memory, etc. Bandwidth and latency requirements are driven by increased reliance on numerical methods in the enterprise, an increased number of VMs, and faster storage access (e.g. solid state).
- Unified fabric: total cost of ownership savings
- Virtualization: treat VMs like physical servers from a networking perspective

Data Center Bridging Performance Benefits
- Priority-based Flow Control (PFC), IEEE 802.1Qbb: lossless service; reduces latency and latency jitter; a high-priority VL provides the ability to transport various traffic types (e.g. storage, RDMA)
- CoS-based bandwidth management (ETS), IEEE 802.1Qaz: increases throughput and reduces the latency and latency jitter of high-priority traffic when non-uniform traffic demands exist
- Congestion Management (QCN), IEEE 802.1Qau: reduces latency and increases throughput by preventing packet discards
- L2 multipath for unicast and multicast, IETF TRILL WG: builds very large L2 fabrics; increases throughput by utilizing full bisectional bandwidth with ECMP
- DCB Capability Exchange protocol (DCBX), IEEE 802.1Qaz: auto-negotiation of Data Center Bridging capabilities

Fibre Channel over Ethernet: Delivering Unified I/O
- Standards: T11 FC-BB-5 WG letter ballot completed; on track for standardization on or before June 2009
- Maps FC frames over an Ethernet transport
- Enables existing HBA stacks to run over a lossless Ethernet medium
- Open-FC: an open-source implementation effort for FCoE
- A single adapter means less device proliferation and lower power consumption
- No gateways required
- Frame format: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS

Enabling the Converged Network: DCE/FCoE Support in the Linux Kernel
- Support for negotiating DCB links with the DCBX protocol daemon
- Multi-queue transmit and receive for multiple VLs, including configurable traffic classification for steering packets
- The existing FC stack works seamlessly over FCoE Converged Network Adapters (CNAs)
- A software FCoE initiator that works with any Ethernet NIC

Thank you!
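The PFC feature in the list above is carried on the wire as an IEEE 802.1Qbb MAC Control frame: Ethertype 0x8808, opcode 0x0101, a per-priority enable vector, and eight pause timers in 512-bit-time quanta. The sketch below builds such a frame with Python's `struct` module; it is an illustrative reconstruction of the frame layout, not a driver implementation, and the helper name `build_pfc_frame` is my own.

```python
import struct

PFC_DEST_MAC = bytes.fromhex("0180c2000001")  # reserved multicast used for MAC Control frames
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101  # Priority-based Flow Control (IEEE 802.1Qbb)

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build a PFC MAC Control frame pausing the given priorities.

    pause_quanta maps priority (0-7) -> pause time in 512-bit-time quanta.
    """
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        if not 0 <= prio <= 7:
            raise ValueError("priority must be 0-7")
        enable_vector |= 1 << prio       # bit per priority being paused
        times[prio] = quanta
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    frame = PFC_DEST_MAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    # Pad to the 60-byte minimum frame size (the MAC appends the 4-byte FCS)
    return frame.ljust(60, b"\x00")

# Pause priority 3 (e.g. the storage class) for the maximum time
frame = build_pfc_frame(bytes.fromhex("001122334455"), {3: 0xFFFF})
```

Because the pause is per priority, storage traffic on one VL can be throttled losslessly while other classes keep flowing — the "lossless service" benefit in the table.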
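The ETS (CoS-based bandwidth management) row above allocates link bandwidth shares to traffic classes. One common way such per-class shares are realized is deficit weighted round robin; the sketch below is a minimal simulation of that idea under my own assumptions (per-round byte credits proportional to the ETS shares), not the normative 802.1Qaz algorithm.

```python
from collections import deque

def dwrr_schedule(queues, weights, rounds):
    """Deficit weighted round robin over per-class packet queues.

    queues:  list of deques of packet lengths in bytes, one per traffic class
    weights: byte credit granted to each class per round (ETS-style shares)
    Returns the list of (class, length) transmissions in order.
    """
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for cls, q in enumerate(queues):
            if not q:
                deficits[cls] = 0  # an idle class accumulates no credit
                continue
            deficits[cls] += weights[cls]
            while q and q[0] <= deficits[cls]:
                length = q.popleft()
                deficits[cls] -= length
                sent.append((cls, length))
    return sent

# Two classes with a 60/40 bandwidth split, all packets 100 bytes
q = [deque([100] * 6), deque([100] * 6)]
tx = dwrr_schedule(q, weights=[600, 400], rounds=1)
```

With non-uniform demand, an unused share is simply not consumed in that round, which is how ETS lets high-priority traffic keep low latency without starving the rest.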
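The FCoE frame-format line above (Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS) can be sketched in code. Per FC-BB-5 the Ethertype is 0x8906 and the FCoE header is 14 bytes (a 4-bit version plus reserved bits, then a 1-byte SOF), followed by the unmodified FC frame and a trailer with the EOF byte; the specific SOF/EOF code points below are assumptions from my reading of FC-BB-5, and the function name is hypothetical.

```python
import struct

FCOE_ETHERTYPE = 0x8906
SOF_I3 = 0x2E  # start-of-frame code point (assumed: class 3, initiate)
EOF_T = 0x42   # end-of-frame code point (assumed: terminate)

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw FC frame (FC header + payload + FC CRC) in an FCoE frame."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: 4-bit version (0) plus reserved bits (13 bytes), then SOF
    fcoe_header = bytes(13) + bytes([SOF_I3])
    # Trailer: EOF byte plus 3 reserved bytes; the Ethernet FCS is added by the MAC
    trailer = bytes([EOF_T]) + bytes(3)
    return eth_header + fcoe_header + fc_frame + trailer

# Stand-in FC frame: 24-byte FC header, payload, 4-byte FC CRC
fc = bytes(24) + b"payload" + bytes(4)
frame = encapsulate_fc_frame(bytes(6), bytes(6), fc)
```

Because the FC frame rides inside the Ethernet payload untouched, existing HBA stacks see ordinary FC frames — which is why no gateways are required, as the slide notes.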