InfiniBand Technology Overview

Dror Goldenberg

SNIA Legal Notice

The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in presentations and literature under the following conditions: Any slide or slides used must be reproduced without modification. The SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee.

Abstract

The InfiniBand architecture brings fabric consolidation to the data center. Storage networking can run concurrently with clustering, communication and management fabrics over the same infrastructure, preserving the behavior of multiple fabrics. The tutorial provides an overview of the InfiniBand architecture, including discussion of high speed and low latency, channel I/O, QoS scheduling, partitioning, high availability and protocol offload. InfiniBand based storage protocols, iSER (iSCSI Extensions for RDMA), NFS over RDMA and SCSI RDMA Protocol (SRP), are introduced and compared with alternative storage protocols such as iSCSI and FCP. The tutorial further enumerates value-add features that InfiniBand brings to clustered storage, such as atomic operations and end-to-end data integrity.

Learning Objectives: Understand the InfiniBand architecture and feature set. Understand the benefits of InfiniBand for networked storage. Understand the standard InfiniBand storage protocols.

Agenda

Motivation and General Overview
Protocol Stack Layers
Storage Protocols over InfiniBand
Benefits

The Need for Better I/O

Datacenter trends: multi-core CPUs, bladed architectures, virtualization and consolidation, increasing storage demand.

Better I/O is required: high capacity, efficient I/O, low latency, CPU offload, scalability, virtualization friendliness, high availability, low power, TCO reduction.

[Diagram: compute nodes with multi-core CPUs and multiple OS instances attached to a consolidated compute fabric]

InfiniBand Storage Solutions

The InfiniBand standard is a strong alternative for both SAN and NAS storage
Superior performance: 20Gb/s host/target ports (moving to 40Gb/s in 2H08), 60Gb/s switch to switch (moving to 120Gb/s in 2H08), sub-1µs end-to-end latencies
Unified fabric for the data center: storage, networking and clustering over a single wire
Cost effective: compelling price/performance advantage over alternative technologies
Low power consumption (Green IT): less than 5W per 20Gb/s port
Mission critical: highly reliable fabric, multi-pathing, automatic failover, highest level of data integrity

Fabric Technologies Comparison

Feature: Fibre Channel | Standard 10 GbE | InfiniBand
Line Rate (GBaud): 4.25 (4GFC), 8.5 (8GFC), 10.51875 (10GFC) | 10.3125* | 20 (4x DDR), 40 (4x QDR)
Unidirectional Throughput (MB/s**): 400 (4GFC), 800 (8GFC), 1,200 (10GFC) | 1,250 | 2,000*** (4x DDR), 4,000 (4x QDR)
Reliable Service: Practically no | No | Yes
Fabric Consolidation: Practically no | In the future (FCoE) | Yes
Copper Distance: 15m | 10GBase-CX4 15m, 10GBase-T 100m | Passive SDR 20m / DDR 10m, Active DDR 25m
Optical Distance****: 100m | 10GBase-SR 300m, 10GBase-LRM 220m | 300m (SDR), 150m (DDR)

* Serial signaling rate   ** MB/s = 10^6 bytes/sec   *** 1,940 MB/s measured   **** Datacenter oriented media

InfiniBand Topologies

[Diagram: example topologies – Back to Back, 2-Level Fat Tree, Dual Star, Hybrid, 3D Torus]

These are commonly used example topologies; the architecture does not limit the topology. Modular switches are internally based on a fat tree architecture.
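As a back-of-the-envelope illustration of the fat tree point (a general property of 2-level fat trees, not material from the slides): a non-blocking 2-level fat tree built from radix-k switch elements uses k/2 uplinks per leaf and supports k*k/2 end nodes.

```c
/* Illustrative sketch only: node count of a non-blocking 2-level fat tree
 * built from radix-k switch elements (radix 24 chosen as an example). */
#include <stdio.h>

int main(void)
{
    int radix  = 24;                   /* ports per switch element (example) */
    int leaves = radix;                /* maximum number of leaf switches    */
    int spines = radix / 2;            /* spine (core) switches              */
    int nodes  = leaves * (radix / 2); /* end nodes: k*k/2 = 288 for k = 24  */

    printf("2-level fat tree, radix %d: %d leaves, %d spines, %d end nodes\n",
           radix, leaves, spines, nodes);
    return 0;
}
```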

InfiniBand Protocol Layers

[Diagram: two InfiniBand nodes communicating across the fabric. Each node's stack: Application, ULP, Transport Layer, Network Layer, Link Layer, Physical Layer (PHY). An InfiniBand router relays packets at the network layer; an InfiniBand switch relays packets at the link layer.]

Physical Layer

Width (1X, 4X, 8X, 12X) including auto-negotiation
Speed (SDR/DDR/QDR) including auto-negotiation
4X DDR HCAs and switches are currently shipping
Power management: polling / sleeping
Connectors: fixed MicroGiGaCN, pluggable QSFP
8/10 encoding: maintains DC balance, limited run length of 0's or 1's, control symbols (Kxx.x), lane de-skew, auto-negotiation, training, clock tolerance, framing

Link Speed (10^9 bit/sec):
Link Width | SDR (2.5GHz) | DDR (5GHz) | QDR (10GHz)
1X | 2.5 | 5 | 10
4X | 10 | 20 | 40
8X | 20 | 40 | 80
12X | 30 | 60 | 120
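As a quick sanity check tying the link-speed table to the throughput figures quoted elsewhere in the deck, the sketch below (illustrative only) derives the usable data rate of a 4X DDR link from the lane rate, link width and 8b/10b encoding overhead.

```c
/* Illustrative calculation: 4X DDR = 5 GHz x 4 lanes = 20 Gbaud raw,
 * 8b/10b encoding leaves 80% as data, i.e. 16 Gbit/s or ~2,000 MB/s,
 * matching the unidirectional figure in the comparison table. */
#include <stdio.h>

int main(void)
{
    double lane_gbaud = 5.0;   /* DDR lane signaling rate */
    int    width      = 4;     /* 4X link */

    double raw_gbaud  = lane_gbaud * width;
    double data_gbps  = raw_gbaud * 8.0 / 10.0;       /* 8b/10b overhead   */
    double data_mbs   = data_gbps * 1e9 / 8.0 / 1e6;  /* 10^6 bytes/sec    */

    printf("4X DDR: %.0f Gbaud raw, %.0f Gbit/s data, %.0f MB/s\n",
           raw_gbaud, data_gbps, data_mbs);
    return 0;
}
```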

* MicroGiGaCN is a trademark of Fujitsu Components Limited

Physical Layer – Cont'd

Copper Cables* (Width | Speed | Connector | Min Reach | Type / Power**):
4X – 4X | SDR/DDR | MicroGiGaCN | 20m (SDR) / 10m (DDR) | Passive
4X – 4X | DDR | MicroGiGaCN | 15-25m | Active, 0.5-0.75W
12X – 12X (24 pair) | SDR/DDR | 24-pin MicroGiGaCN | 20m (SDR) / 10m (DDR) | Passive

Fiber Optics* (Width | Speed | Connector | Type | Min Reach | Power** | Media):
4X | SDR/DDR | MicroGiGaCN | Media converter (MPO) | 300m (SDR) / 150m (DDR) | 0.8-1W | 12-strand MPO fiber
4X | DDR | MicroGiGaCN | Attached optical cable | 100m | 1W | 12-strand fiber

* currently deployed   ** per end

Link Layer

Addressing and Switching

Local Identifier (LID) addressing: unicast LIDs – 48K addresses; multicast LIDs – up to 16K addresses; efficient linear lookup; multi-pathing support through LMC
Cut-through switching supported
Independent Virtual Lanes (VLs): flow control (lossless fabric); service levels; VL arbitration for QoS using high/low priority Weighted Round Robin (WRR)
Congestion control: Forward / Backward Explicit Congestion Notification (FECN/BECN); per QP/VL threshold; injection rate control
Data integrity: Invariant CRC, Variant CRC

[Diagram: WRR VL arbitration selecting the next packet to transmit from high and low priority VLs; efficient FECN/BECN based congestion control signaled between HCAs through a switch]
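To make the LMC multi-pathing bullet concrete, here is a minimal sketch (standard LID Mask Control behavior, not code from the tutorial) of how a destination LID for one of the 2^LMC paths is formed from a port's base LID.

```c
/* Sketch: with LMC set to lmc bits, a port answers to 2^lmc consecutive
 * LIDs starting at its base LID (the SM assigns base LIDs with the low
 * lmc bits zero), and each LID can be routed over a different path. */
#include <stdint.h>

uint16_t path_lid(uint16_t base_lid, unsigned lmc, unsigned path_index)
{
    unsigned num_paths = 1u << lmc;                  /* up to 2^LMC paths */
    return base_lid | (uint16_t)(path_index % num_paths);
}
```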

Network Layer

Global Identifier (GID) addressing: based on the IPv6 addressing scheme; GID = {64 bit GID prefix, 64 bit GUID}; GUID = Globally Unique Identifier (64 bit EUI-64); GUID 0 – assigned by the manufacturer; GUIDs 1..(N-1) – assigned by the Subnet Manager
Optional for local subnet access
Used for multicast distribution within end nodes
Enables routing between IB subnets – still under definition in the IBTA; will leverage IPv6 routing algorithms

[Diagram: an IB router connecting Subnet A and Subnet B]
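As an illustration of the GID composition above, the sketch below builds a GID from a subnet prefix and a port GUID; it assumes the OpenFabrics libibverbs stack (union ibv_gid) and is not part of the original slides.

```c
/* Sketch: composing GID = {64-bit subnet prefix, 64-bit GUID}, both
 * stored big-endian on the wire, using the libibverbs GID type. */
#include <infiniband/verbs.h>
#include <endian.h>

union ibv_gid make_gid(uint64_t subnet_prefix, uint64_t guid)
{
    union ibv_gid gid;
    gid.global.subnet_prefix = htobe64(subnet_prefix); /* upper 64 bits          */
    gid.global.interface_id  = htobe64(guid);          /* lower 64 bits (EUI-64) */
    return gid;
}
```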

Transport – Host Channel Adapter (HCA) Model

Asynchronous interface: the consumer posts work requests, the HCA processes them, and the consumer polls for completions
Transport executed by the HCA
I/O channel exposed to the application
Transport services: Reliable / Unreliable, Connected / Datagram

[Diagram: consumers post WQEs to the send and receive queues of a QP and poll CQEs from the Completion Queue; the HCA's transport and RDMA offload engine feeds the VLs of each port, attached over PCIe to the CPUs and memory]
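A minimal sketch of the asynchronous completion model described above, assuming the OpenFabrics libibverbs API (the slides do not prescribe an API): the consumer reaps CQEs from the Completion Queue that the HCA fills as work requests finish.

```c
/* Sketch: drain the Completion Queue and check each work completion. */
#include <infiniband/verbs.h>
#include <stdio.h>

int drain_completions(struct ibv_cq *cq)
{
    struct ibv_wc wc;
    int n;

    /* ibv_poll_cq() returns the number of completions reaped (0 if empty). */
    while ((n = ibv_poll_cq(cq, 1, &wc)) > 0) {
        if (wc.status != IBV_WC_SUCCESS) {
            fprintf(stderr, "WR %llu failed: %s\n",
                    (unsigned long long)wc.wr_id,
                    ibv_wc_status_str(wc.status));
            return -1;
        }
        /* wc.opcode identifies the completed operation (send, RDMA write, recv, ...) */
    }
    return n; /* 0 when drained, negative on CQ error */
}
```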

Transport Layer

Queue Pair (QP) – transport endpoint
Asynchronous interface: Send Queue, Receive Queue, Completion Queue
Full transport offload: segmentation, reassembly, timers, retransmission, etc.
Operations supported: Send/Receive – messaging semantics; RDMA Read/Write – enable zero-copy operations; Atomics – remote Compare & Swap, Fetch & Add; memory management – Bind / Fast Register / Invalidate
Kernel bypass: enables low latency and CPU offload
Exposure of application buffers to the network: direct data placement / zero copy
Enabled through QPs, Completion Queues (CQs), Protection Domains (PDs), Memory Regions (MRs)
Polling and interrupt models supported
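To illustrate the RDMA Write operation and zero-copy semantics listed above, here is a hedged libibverbs sketch (an assumption about the software stack, not slide material); the QP, the registered local buffer/lkey and the peer's remote address/rkey are assumed to have been set up during connection establishment.

```c
/* Sketch: post a zero-copy RDMA Write work request on a connected QP. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp, void *buf, uint32_t len, uint32_t lkey,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,   /* local buffer, already registered */
        .length = len,
        .lkey   = lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;                 /* returned in the completion      */
    wr.opcode              = IBV_WR_RDMA_WRITE; /* direct data placement, no remote CPU */
    wr.send_flags          = IBV_SEND_SIGNALED; /* generate a CQE when done        */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.wr.rdma.remote_addr = remote_addr;       /* exposed by the peer             */
    wr.wr.rdma.rkey        = rkey;

    return ibv_post_send(qp, &wr, &bad_wr);     /* 0 on success */
}
```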

Partitions

Logically divide the fabric into isolated domains
Partial and full membership per partition
Partition filtering at switches
Similar to FC zoning and 802.1Q VLANs

[Diagram: example fabric with four partitions – an inter-host partition shared by Host A and Host B, I/O partitions private to Host A and to Host B, and a shared I/O partition]
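The partial/full membership bullet can be made concrete with P_Keys (a background fact about InfiniBand partitioning, not spelled out on the slide): bit 15 of the 16-bit P_Key marks full membership and the low 15 bits carry the partition number; two ports may communicate only if the partition numbers match and at least one side is a full member.

```c
/* Sketch: P_Key membership checks. */
#include <stdbool.h>
#include <stdint.h>

static inline bool     pkey_full_member(uint16_t pkey) { return (pkey & 0x8000) != 0; }
static inline uint16_t pkey_partition(uint16_t pkey)   { return pkey & 0x7FFF; }

bool pkeys_can_communicate(uint16_t a, uint16_t b)
{
    return pkey_partition(a) == pkey_partition(b) &&
           (pkey_full_member(a) || pkey_full_member(b));
}
```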

InfiniBand Data Integrity

Hop by hop: VCRC – 16 bit CRC (CRC16, polynomial 0x100B), checked on every link
End to end: ICRC – 32 bit CRC (CRC32, polynomial 0x04C11DB7), the same CRC as Ethernet
Application level: T10/DIF Logical Block Guard – per-block 16 bit CRC (polynomial 0x8BB7)

[Diagram: VCRC verified hop by hop across InfiniBand switches and a gateway; ICRC carried end to end across the InfiniBand fabric; T10/DIF protecting each block from the application across the Fibre Channel SAN to FC block storage]
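For illustration, a generic MSB-first CRC routine parameterized by the polynomials named above; note that the real ICRC/VCRC computations also define init values, bit ordering and masked header fields, so this sketch demonstrates the polynomials rather than reproducing wire-exact checksums.

```c
/* Sketch: generic MSB-first CRC over a buffer, parameterized by polynomial,
 * width and init value. Not a wire-exact InfiniBand or T10/DIF implementation. */
#include <stddef.h>
#include <stdint.h>

uint32_t crc_msb_first(const uint8_t *buf, size_t len,
                       uint32_t poly, unsigned width, uint32_t init)
{
    uint32_t topbit = 1u << (width - 1);
    uint32_t mask   = (width == 32) ? 0xFFFFFFFFu : ((1u << width) - 1);
    uint32_t crc    = init & mask;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)buf[i] << (width - 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & topbit) ? ((crc << 1) ^ poly) & mask : (crc << 1) & mask;
    }
    return crc;
}

/* e.g. crc_msb_first(pkt, n, 0x04C11DB7, 32, ~0u)  - ICRC polynomial
 *      crc_msb_first(pkt, n, 0x100B,     16, ~0u)  - VCRC polynomial
 *      crc_msb_first(blk, n, 0x8BB7,     16, 0)    - T10/DIF guard polynomial */
```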

Management Model

Subnet Manager (SM): configures/administers the fabric topology; implemented at an end-node or a switch; active/passive model when more than one SM is present; talks with SM Agents in nodes/switches
Subnet Administration: provides path records; QoS management
Communication Management: connection establishment processing
General services (General Service Interface, QP1): SNMP Tunneling Agent, Application-Specific Agent, Vendor-Specific Agent, Device Mgt Agent, Performance Mgt Agent, Communication Mgt Agent, Baseboard Mgt Agent, Subnet Administration
Subnet management (Subnet Management Interface, QP0, uses VL15): Subnet Manager, Subnet Mgt Agent

Upper Layer Protocols

ULPs connect InfiniBand to common interfaces and are supported on mainstream operating systems
Clustering: MPI (Message Passing Interface), RDS (Reliable Datagram Socket)
Sockets / networking: IPoIB (IP over InfiniBand), SDP (Socket Direct Protocol)
Storage (file/block): SRP (SCSI RDMA Protocol), iSER (iSCSI Extensions for RDMA), NFSoRDMA (NFS over RDMA)

[Diagram: HPC clustering, socket-based and storage applications in user space; ULPs such as MPI, SDP, RDS, IPoIB, SRP, iSER and NFS over RDMA layered over the InfiniBand core services and device driver in the kernel, over the HCA hardware; some storage ULPs bypass the kernel]

InfiniBand Block Storage Protocols

SRP - SCSI RDMA Protocol Defined by T10

iSER – iSCSI Extensions for RDMA Defined by IETF IP Storage WG

InfiniBand specifics (e.g. CM) defined by the IBTA; leverages the iSCSI management infrastructure
Protocol offload: uses the IB Reliable Connected transport and RDMA for zero-copy data transfer

[Diagram: protocol stacks compared under a common SAM-3 SCSI application layer – SRP over InfiniBand; iSCSI with iSER over InfiniBand or iWARP; FCP-3 (FC-4 mapping) over the FC-3 (FC-FS, FC-LS), FC-2 (FC-FS), FC-1 (FC-FS) and FC-0 (FC-PI) Fibre Channel layers]

SRP - Data Transfer Operations

Send/Receive: commands, responses, task management
RDMA – zero copy path: Data-In, Data-Out
iSER uses the same principles; immediate/unsolicited data is allowed through Send/Receive
iSER and SRP are part of the mainline Linux kernel

[Diagram: IO Read – the initiator sends SRP_CMD, the target returns data with RDMA Write (Data-In) and completes with SRP_RSP; IO Write – the initiator sends SRP_CMD, the target fetches data with RDMA Read / RDMA Read Response (Data-Out) and completes with SRP_RSP]

Data Transfer Summary

Operation: SRP | iSER | iSCSI | FCP
Request: SRP_CMD (SEND) | SCSI-Command (SEND) | SCSI-Command | FCP_CMND
Response: SRP_RSP (SEND) | SCSI-Response (SEND) | SCSI-Response (or piggybacked on Data-In PDU) | FCP_RSP
Data-In Delivery: RDMA Write | RDMA Write | Data-In | FCP_DATA
Data-Out Delivery: RDMA Read, RDMA Read Resp. | RDMA Read, RDMA Read Resp. | R2T, Data-Out | FCP_XFER_RDY, FCP_DATA
Unsolicited Data-Out Delivery: – | Part of SCSI-Command (SEND), Data-Out (SEND) | Part of SCSI-Command, Data-Out | FCP_DATA
Task Management: SRP_TSK_MGMT (SEND) | Task Management Function Request/Response (SEND) | Task Management Function Request/Response | FCP_CMND

SRP Discovery

Discovery methods: persistent information {Node_GUID:IOC_GUID}; Subnet Administrator (identify all ports with CapabilityMask.IsDM); Configuration Manager (CFM)*; locating the device through a Service Record; Boot Manager*; Boot Information Service*
Identifiers: per-LUN WWN (through INQUIRY VPD); SRP Target Port ID {IdentifierExt[63:0], IOC GUID[63:0]}; Service Name – SRP.T10.{PortID ASCII}; Service ID – locally assigned by the IOC/IOU

[Diagram: InfiniBand I/O model – an I/O Unit containing I/O Controllers]

* Not deployed today
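As an illustration of the identifier scheme above, the sketch below composes the SRP service name from the IdentifierExt and IOC GUID; the exact hex formatting is an assumption made here for illustration, the SRP specification is normative.

```c
/* Sketch (formatting assumed): service name = "SRP.T10." + port ID in ASCII hex,
 * where the port ID is {IdentifierExt[63:0], IOC GUID[63:0]}. */
#include <stdint.h>
#include <stdio.h>

void srp_service_name(char out[48], uint64_t id_ext, uint64_t ioc_guid)
{
    /* 8 prefix chars + 16 + 16 hex digits + NUL = 41 bytes */
    snprintf(out, 48, "SRP.T10.%016llx%016llx",
             (unsigned long long)id_ext, (unsigned long long)ioc_guid);
}
```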

iSER Discovery

Leverages all of the iSCSI infrastructure
Uses IP over InfiniBand
Same iSCSI mechanisms for discovery (RFC 3721): static configuration {IP, port, target name}; SendTargets {IP, port}; SLP; iSNS
Same target naming (RFC 3721/3980): iSCSI Qualified Names (iqn.), IEEE EUI-64 (eui.), T11 Network Address Authority (naa.)

NFS over RDMA

Defined by the IETF: ONC-RPC extensions for RDMA plus an NFS mapping
RPC Call/Reply: Send/Receive if small; via an RDMA Read chunk list if big
Data transfer: RDMA Read/Write – described by a chunk list in the XDR message; Send – inline in the XDR message
Uses an InfiniBand Reliable Connected QP
Uses IP extensions to the CM: connection based on IP address and TCP port
Zero copy data transfers
NFSoRDMA is part of the mainline Linux kernel

[Diagram: NFS READ – client RPC call, server returns data with RDMA Write, then RPC reply; NFS WRITE – client RPC call, server fetches data with RDMA Read / RDMA Read Response, then RPC reply]
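As a rough illustration of what a chunk-list entry has to carry so the peer can move the payload with RDMA instead of copying it (the field names below are illustrative, not the normative XDR wire encoding):

```c
/* Sketch: the steering information one read-chunk entry conveys. */
#include <stdint.h>

struct rdma_segment {
    uint32_t handle;   /* R_key of the registered client buffer */
    uint32_t length;   /* number of bytes exposed               */
    uint64_t offset;   /* remote virtual address                */
};

struct read_chunk_entry {
    uint32_t position;          /* byte offset in the XDR stream          */
    struct rdma_segment target; /* where the server fetches the data from */
};
```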

I/O Consolidation

Without convergence, each server carries separate adapters and fabrics for its storage, networking and management applications: slower I/O, different fabrics for the different service needs, no flexibility, more ports to manage, more power, more space, higher TCO.

Converging with InfiniBand: one wire out of the server; a high pipe for capacity provisioning
Dedicated I/O channels enable convergence while preserving application compatibility
QoS differentiates the different traffic types
Partitions provide logical fabrics and isolation
Gateways share remote Fibre Channel and Ethernet ports; design is based on the average load across multiple servers; scale incrementally by adding Ethernet/FC/server blades; scale independently

[Diagram: servers with separate FC HCAs and GbE NICs converged onto a single IB HCA per server]

I/O Consolidation – VLs and Scheduling Example

Physical: a single InfiniBand fabric. VLs and scheduling can be dynamically configured and adjusted to match application performance requirements.

Logical:

Low Latency VL for clustering; Mainstream VL – day: at least 40% BW, night: at least 20% BW; Storage VL – day: at least 20% BW, night: at least 70% BW

High Availability and Redundancy

Multi-port HCAs – covers link failure
Redundant fabric topologies – covers link failure
Link layer multi-pathing (LMC)
Automatic Path Migration (APM)
ULP high availability – covers HCA failure and link failure: application level multi-pathing (SRP/iSER); teaming/bonding (IPoIB)

Performance Metrics

IB Verbs (single DDR port, depends on PCIe 2.5-5GT/s):
Latency – RDMA Write 0.99µs, RDMA Read 1.87µs (roundtrip)
Bandwidth – 1.5-1.9GB/s unidirectional, 2.9-3.7GB/s bidirectional

Block Storage (SRP), 1MB I/O, no RAID:
I/O Read 1.7GB/s, I/O Write 1.6GB/s

Clustering (MPI):
Latency 1.2µs, message rate 30M msg/sec

File Storage (NFSoRDMA), single DDR port:
Read 1.6GB/s, Write 0.59GB/s

InfiniBand Storage Opportunities & Benefits

InfiniBand storage deployment alternatives:
Gateways to Fibre Channel – the clustering port can also connect to storage; shared remote FC ports give scalability; one wire out of the server
Native IB file storage (NFS over RDMA) and native IB block storage (SRP/iSER) – direct attach or fabric attached
Clustered/parallel storage with a native IB backend fabric – parallel/clustered file systems and parallel NFS servers in front of OSD/block storage targets and JBODs

InfiniBand benefits: high bandwidth; fabric consolidation (QoS, partitioning); efficiency – full offload and zero copy; combines with the clustering infrastructure; efficient object/block transfer; atomic operations; ultra low latency

Summary

Datacenter developments require better I/O: increasing compute power per host, server virtualization, increasing storage demand
InfiniBand I/O is a great fit for the datacenter: layered implementation; brings fabric consolidation; enables efficient SAN, network, IPC and management traffic; price/performance; gateways provide scalable connectivity to existing fabrics
Existing storage opportunities with InfiniBand: connectivity to HPC clusters, where IB is the dominant fabric

Other SNIA Tutorials

Check out these SNIA tutorials:
Comparing Server I/O Consolidation Solutions: iSCSI, InfiniBand and FCoE
Fibre Channel over Ethernet

Q&A / Feedback

Please send any questions or comments on this presentation to SNIA: [email protected]

Many thanks to the following individuals for their contributions to this tutorial. - SNIA Education Committee

Bill Lee Graham Smith Howard Goldstein Ron Emerick Sujal Das Walter Dey

Backup

Interconnect: A Competitive Advantage

End-user markets served across servers and blades, switches, storage, and embedded computing and storage aggregation:
Enterprise Data Centers – clustered database, eCommerce and retail, financial, supply chain management, web services
High-Performance Computing – biosciences and geosciences, automated engineering, digital content creation, electronic design automation, government and defense
Embedded – communications, industrial, medical, military

Applicable Markets for InfiniBand

Data Centers – clustered database, data warehousing, shorter …, I/O consolidation, power savings, virtualization, SOA, XTP
Financial – real-time risk assessment, grid computing and I/O consolidation
Electronic Design Automation (EDA) and Computer Automated Design (CAD) – I/O is the bottleneck to shorter job run times
High Performance Computing – high throughput I/O to handle expanding datasets
Graphics and Video Editing – HD file sizes exploding, shorter backups, real-time production

InfiniBand Technology Overview 36 © 2008 Storage Networking Industry Association. All Rights Reserved. Interconnect Trends – Top500 Growth rate from Nov 06 to Nov 07 (year) Top500 Interconnect Trends InfiniBand: +52% 280 260 271 All Proprietary: -70% 240 220 GigE: +26% 200 180 Efficiency 160 75% 140 InfiniBand GigE 120 70% 100 125 70% 80 65% 60 40 60% Number of Clusters of Number 20 28 55% 0 InfiniBand All Proprietary High Speed GigE 50% 52% Nov-05 Nov-06 Nov 07 45% Average Cluster Efficiency InfiniBand - the only growing high speed interconnect 52% growth from Nov 2006 Source: http://www.top500.org/list/2007/06/ The TOP500 project was started in 1993 to provide a reliable basis for tracking and detecting trends in high-performance computing. InfiniBand Technology Overview 37 © 2008 Storage Networking Industry Association. All Rights Reserved. The InfiniBand Architecture

Industry standard defined by the InfiniBand Trade Association
Defines a System Area Network architecture
Comprehensive specification: from physical to applications
Specification revisions: 1.0 (2000), 1.0a (2001), 1.1 (2002), 1.2 (2004), 1.2.1 (2007)
InfiniBand Architecture supports: Host Channel Adapters (HCA), Target Channel Adapters (TCA), switches, routers
Facilitated HW design for: low latency / high bandwidth, transport offload

[Diagram: a subnet of processor nodes with HCAs connected through switches, managed by a Subnet Manager with consoles, and TCA-based gateways to Fibre Channel (RAID storage subsystem) and Ethernet]

InfiniBand Packet Format

InfiniBand data packet layout: LRH (8B) | GRH (40B, optional) | BTH (12B) | Ext HDRs (variable) | Payload (0..4096B) | ICRC (4B) | VCRC (2B)

LRH fields: VL, LVer, SL, rsvd, LNH, DLID, rsvd, Packet Length, SLID
BTH fields: Opcode, SE, M, Pad, TVer, Partition Key, rsvd, Destination QP, A, rsvd, PSN
GRH (optional) fields: IPVer, TClass, Flow Label, Payload Len, Next Header, Hop Lim, SGID[127:0], DGID[127:0]

Extended headers: Reliable Datagram ETH (4B), Datagram ETH (8B), RDMA ETH (16B), Atomic ETH (28B), ACK ETH (4B), Atomic ACK ETH (8B), Immediate Data ETH (4B), Invalidate ETH (4B)
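For illustration, the 12-byte Base Transport Header can be summarized as the struct below; fields are big-endian on the wire, so this documents the field sizes from the list above rather than giving a parse-ready layout.

```c
/* Sketch: field sizes of the 12-byte BTH (documentation only; a portable
 * parser would extract fields byte-by-byte in network byte order). */
#include <stdint.h>

struct ib_bth {
    uint8_t  opcode;        /* operation (SEND, RDMA WRITE, ACK, ...)      */
    uint8_t  se_m_pad_tver; /* SE (1b), M (1b), PadCnt (2b), TVer (4b)     */
    uint16_t pkey;          /* partition key                               */
    uint32_t rsvd_destqp;   /* reserved (8b) + destination QP (24b)        */
    uint32_t a_rsvd_psn;    /* AckReq (1b) + reserved (7b) + PSN (24b)     */
};                          /* 12 bytes total */
```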

InfiniBand Resources

InfiniBand software is developed as open source under the OpenFabrics Alliance: http://www.openfabrics.org/index.html

The InfiniBand standard is developed by the InfiniBand® Trade Association: http://www.infinibandta.org/home

Reference

InfiniBand Architecture Specification, Volumes 1-2, Release 1.2.1 – www.infinibandta.org
IP over InfiniBand – RFCs 4391, 4392, 4390, 4755 (www.ietf.org)
NFS Direct Data Placement – http://www.ietf.org/html.charters/nfsv4-charter.html
iSCSI Extensions for RDMA (iSER) Specification – http://www.ietf.org/html.charters/ips-charter.html
SCSI RDMA Protocol (SRP), DIF – www.t10.org

Glossary

APM - Automatic Path Migration
BECN - Backward Explicit Congestion Notification
BTH - Base Transport Header
CFM - Configuration Manager
CQ - Completion Queue
CQE - Completion Queue Element
CRC - Cyclic Redundancy Check
DDR - Double Data Rate
DIF - Data Integrity Field
FC - Fibre Channel
FECN - Forward Explicit Congestion Notification
GbE - Gigabit Ethernet
GID - Global IDentifier
GRH - Global Routing Header
GUID - Globally Unique IDentifier
HCA - Host Channel Adapter
IB - InfiniBand
IBTA - InfiniBand Trade Association
ICRC - Invariant CRC
IPoIB - IP over InfiniBand
IPv6 - Internet Protocol Version 6
iSER - iSCSI Extensions for RDMA
LID - Local IDentifier
LMC - LID Mask Control
LRH - Local Routing Header
LUN - Logical Unit Number
MPI - Message Passing Interface
MR - Memory Region
NFSoRDMA - NFS over RDMA
OS - Operating System
OSD - Object based Storage Device
PCIe - PCI Express
PD - Protection Domain
QDR - Quadruple Data Rate
QoS - Quality of Service
QP - Queue Pair
RDMA - Remote DMA
RDS - Reliable Datagram Socket
RPC - Remote Procedure Call
SAN - Storage Area Network
SDP - Sockets Direct Protocol
SDR - Single Data Rate
SL - Service Level
SM - Subnet Manager
SRP - SCSI RDMA Protocol
TCA - Target Channel Adapter
ULP - Upper Layer Protocol
VCRC - Variant CRC
VL - Virtual Lane
WQE - Work Queue Element
WRR - Weighted Round Robin
