Architecture of Parallel Computers CSC / ECE 506

OpenFabrics Alliance Lecture 18

7/17/2006

Dr. Steve Hunter

Outline

• Infiniband and Ethernet Review
• DDP and RDMA
• OpenFabrics Alliance
  – IP over Infiniband (IPoIB)
  – Sockets Direct Protocol (SDP)
  – Network File System (NFS)
  – SCSI RDMA Protocol (SRP)
  – iSCSI Extensions for RDMA (iSER)
  – Reliable Datagram Sockets (RDS)

Infiniband Goals - Review

• Interconnect for server I/O and efficient interprocess communications
• Standard across the industry – backed by all the major players
  » 200+ companies
• With an architecture able to match future systems:
  – Low overhead
  – Scalable bandwidth, up and down
  – Scalable fanout, few to thousands
  – Low cost, excellent price/performance
  – Robust reliability, availability, and serviceability
  – Leverages Internet Protocol suite and paradigms

The Basic Unit: an IB Subnet - Review

• Basic whole IB system is a subnet
• Elements:
  – Endnodes
  – Links
  – Switches
• What it does: Communicate
  – endnodes with endnodes,
  – via message queues,
  – which process messages over several transport types,
  – and are SARed into packets,
  – which are placed on links,
  – and routed by switches.

[Figure: an IB subnet – endnodes connected by links through switches]

End Node Attachment to IB - Review

• End nodes attach to IB via Channel Adapters:
  – Host CAs (HCAs)
    » O/S API/KPIs not specified
    » Queues and memory accessible via verbs
    » QP, CQ, and RDMA engines
    » Must support three IB Transports
    » Can include:
      • Dual ports – load balancing, availability (path migration)
        – Attach to same or different subnets
      • Partitioning
      • Atomics, …
  – Target CAs (TCAs)
    » Queue access method is vendor unique
    » QP and CQ engines
    » Need only support Unreliable Datagram
    » ULP can be standard or proprietary
    » In other words… a smaller subset of required functions

[Figure: a host (CPUs, memory controller) attached through verbs to an HCA (memory tables, QPs, CQs, IB layers), and an I/O controller attached through a TCA (QPs, CQs, IB layers)]

Infiniband Summary

• InfiniBand architecture is a very high performance, low latency interconnect technology based on an industry-standard approach to Remote Direct Memory Access (RDMA)
  – An InfiniBand fabric is built from hardware and software that are configured, monitored, and operated to deliver a variety of services to users and applications
• Characteristics of the technology that differentiate it from comparable interconnects such as traditional Ethernet include:
  – End-to-end reliable delivery

– Scalable bandwidths from 10 to 60 Gbps available today, moving to 120 Gbps in the near future

– Scalability without performance degradation

– Low latency between devices

– Greatly reduced server CPU utilization for protocol processing

– Efficient I/O channel architecture for network and storage virtualizations

Advanced Ethernet - Review

[Figure: the iSER / RNIC model (shown with a SCSI application) alongside the TCP/IP model. RNIC-side layers, top to bottom: SCSI application, Internet SCSI (iSCSI), iSCSI Extensions for RDMA (iSER), Remote Direct Memory Access Protocol (RDMAP), Direct Data Placement (DDP), Markers with PDU Alignment (MPA), Transmission Control Protocol (TCP), Internet Protocol (IP), Media Access Control (MAC), Physical (copper, optical); the lower layers are implemented in the RDMA NIC (RNIC). TCP/IP model equivalents: application services (e.g., HTTP, SMTP, FTP), transport (TCP, UDP), network (IP), link (Ethernet MAC), physical.]

• It’s expected that the OpenFabrics effort (i.e., the OpenIB / OpenRDMA merger) will bring even more advanced functions into NIC technology

Advanced Ethernet Summary

• The iWARP technology, implemented as an RDMA Network Interface Card (RNIC), achieves zero-copy, RDMA, and protocol offload over existing TCP/IP networks
  – It has been demonstrated that a 10GbE-based RNIC can reduce CPU processing overhead from 80-90% to less than 10% compared to its host-stack equivalent
  – Additionally, its achievable end-to-end latency is now 5 microseconds or less
• iWARP together with the emerging low latency (low hundreds of nanoseconds) 10 GbE switches can also provide a powerful infrastructure for clustered computing, server-to-server processing, visualization, and file system access
  – The advantages of the iWARP technology include its ability to leverage the widely deployed TCP/IP infrastructure, its broad knowledge base, and mature management and monitoring capabilities

– In addition, an iWARP infrastructure is a routable infrastructure, thereby eliminating the need for gateways to connect to the LAN or WAN internet.

DDP and RDMA

• IETF RFC 4296: http://rfc.net/rfc4296.html

• The central idea of general-purpose DDP is that a data sender will supplement the data it sends with placement information that allows the receiver's network interface to place the data directly at its final destination without any copying.
  – DDP can be used to steer received data to its final destination, without requiring layer-specific behavior for each different layer.
  – Data sent with such DDP information is said to be "tagged".
• The central components of the DDP architecture are the "buffer", which is an object with beginning and ending addresses, and a method (set()), which sets the value of an octet at an address.
  – In many cases, a buffer corresponds directly to a portion of host user memory. However, DDP does not depend on this; a buffer could be a disk file, or anything else that can be viewed as an addressable collection of octets.
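To make the buffer model concrete, here is a minimal C sketch of that abstraction; the names (ddp_buffer, ddp_set) are ours for illustration only, since the RFC defines semantics, not an API.

    #include <stdint.h>

    /* Illustrative rendering of the RFC 4296 buffer model: an object with
     * beginning and ending addresses, plus a set() method that writes one
     * octet at an address.  A real DDP implementation could back this with
     * host memory, a disk file, or any addressable collection of octets. */
    typedef struct ddp_buffer {
        uint64_t begin;       /* first valid address                */
        uint64_t end;         /* one past the last valid address    */
        void    *backing;     /* e.g., a region of host user memory */
    } ddp_buffer;

    /* set(): place a single octet at 'addr' inside the buffer.
     * Returns 0 on success, -1 if the address is outside [begin, end). */
    static int ddp_set(ddp_buffer *b, uint64_t addr, uint8_t value)
    {
        if (addr < b->begin || addr >= b->end)
            return -1;
        ((uint8_t *)b->backing)[addr - b->begin] = value;
        return 0;
    }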

DDP and RDMA

• Remote Direct Memory Access (RDMA) extends the capabilities of DDP with two primary functions.
  – It adds the ability to read from buffers registered to a socket (RDMA Read).
    » This allows a client protocol to perform arbitrary, bidirectional data movement without involving the remote client.
    » When RDMA is implemented in hardware, arbitrary data movement can be performed without involving the remote host CPU at all.
• RDMA specifies a transport-independent untagged message service (Send) with characteristics that are both very efficient to implement in hardware and convenient for client protocols.
  – The RDMA architecture is patterned after the traditional model for device programming, where the client requests an operation using Send-like actions (programmed I/O), the server performs the necessary data transfers for the operation (DMA reads and writes), and notifies the client of completion.
    » The programmed I/O + DMA model efficiently supports a high degree of concurrency and flexibility for both the client and server, even when operations have a wide range of intrinsic latencies.
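As an illustration of this programmed-I/O request style, the sketch below posts a hardware RDMA Read using the libibverbs API from the OpenFabrics stack; it assumes the queue pair, completion queue, registered memory region, and the peer's remote address/rkey have already been set up and exchanged out of band.

    #include <stdint.h>
    #include <infiniband/verbs.h>

    /* Post a one-segment RDMA Read: pull 'len' bytes from the peer's buffer
     * (remote_addr/remote_rkey) into our registered local buffer, then wait
     * for the completion.  qp, cq, and mr are assumed to exist already. */
    static int rdma_read_example(struct ibv_qp *qp, struct ibv_cq *cq,
                                 struct ibv_mr *mr, void *local_buf,
                                 uint32_t len, uint64_t remote_addr,
                                 uint32_t remote_rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,
            .length = len,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_READ,
            .send_flags = IBV_SEND_SIGNALED,
        };
        struct ibv_send_wr *bad_wr = NULL;
        struct ibv_wc wc;

        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = remote_rkey;

        if (ibv_post_send(qp, &wr, &bad_wr))   /* programmed I/O: queue the request */
            return -1;

        while (ibv_poll_cq(cq, 1, &wc) == 0)   /* the adapter DMAs the data; we only
                                                  wait for the completion entry      */
            ;
        return (wc.status == IBV_WC_SUCCESS) ? 0 : -1;
    }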

OpenFabrics Alliance

• The OpenFabrics Alliance is an international organization of industry, academic, and research groups that have developed a unified core of open-source software stacks (OpenSTAC) leveraging RDMA architectures for both the Linux and Windows operating systems, over both InfiniBand and Ethernet.
  – RDMA is a communications technique allowing data to be transmitted from the memory of one computer to the memory of another computer without passing through either device's CPU, without needing extensive buffering, and without calls into the operating system kernel
• The core OpenSTAC software supports all the well-known standard upper-layer protocols such as MPI, IP, SDP, NFS, SRP, iSER, and RDS on top of Ethernet and InfiniBand (IB) infrastructures
  – The OpenFabrics software and supporting services enable low-latency InfiniBand and 10 GbE to deliver clustered computing, server-to-server processing, visualization, and file system access
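One concrete piece of that kernel-bypass model is memory registration: the application registers (pins) a buffer once through the kernel, after which the adapter can move data in and out of it with no further kernel involvement. A minimal libibverbs sketch, assuming a protection domain pd already exists:

    #include <stdlib.h>
    #include <infiniband/verbs.h>

    /* Register a 1 MB buffer for local and remote access.  After this call
     * the HCA can DMA directly to/from 'buf'; mr->lkey names the region in
     * local work requests and mr->rkey is handed to the remote peer. */
    static struct ibv_mr *register_buffer(struct ibv_pd *pd, void **buf_out)
    {
        size_t size = 1 << 20;
        void *buf = malloc(size);
        if (!buf)
            return NULL;

        struct ibv_mr *mr = ibv_reg_mr(pd, buf, size,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            free(buf);
            return NULL;
        }
        *buf_out = buf;
        return mr;
    }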

OpenFabrics Software Stack

[Figure: OpenFabrics software stack. User space holds the applications (IP-based apps, sockets-based apps, block storage access, clustered DB access such as Oracle 10g RAC and IBM DB2, file-system access, various MPIs), diagnostic tools, OpenSM, the user-level MAD API, the user-level verbs/API, and UDAPL. Kernel space holds the upper-layer protocols (IPoIB, SDP, SRP, iSER, RDS, NFS-RDMA RPC, cluster file systems) over a mid-layer containing the Connection Manager Abstraction (CMA), SA client, connection managers, MAD, and SMA. Hardware-specific provider drivers sit below, driving InfiniBand HCAs and iWARP R-NICs; the common OF stack provides the key access methods for applications using either fabric.]

Key to abbreviations: SA – Subnet Administrator; MAD – Management Datagram; SMA – Subnet Manager Agent; PMA – Performance Manager Agent; IPoIB – IP over InfiniBand; SDP – Sockets Direct Protocol; SRP – SCSI RDMA Protocol (Initiator); iSER – iSCSI RDMA Protocol (Initiator); RDS – Reliable Datagram Service; UDAPL – User Direct Access Programming Lib; HCA – Host Channel Adapter; R-NIC – RDMA NIC.

IP over IB (IPoIB)

• IETF standard for mapping Internet protocols to Infiniband
  – IETF IPoIB Working Group
• Covers
  – Fabric initialization
  – Multicast/Broadcast
  – Address resolution (IPv4/IPv6)
  – IP datagram encapsulation (IPv4/IPv6)
  – MIBs

IP over IB (IPoIB)

• Communication Parameters
  – Obtained from the Subnet Manager (SM)
    » P_Key (Partition Key)
    » SL (Service Level)
    » Path Rate
    » Link MTU (for IPv6, can be reduced with router advertisement)
    » GRH parameters – TClass, Flow Label, HopLimit
  – Obtained from address resolution
    » Data Link Layer Address (GID)
      • Persistent data link layer address necessary
      • Enables IB routers to be deployed eventually
    » QPN (Queue Pair Number)

IP over IB (IPoIB)

• Address Resolution
  – IPv4
    » ARP request is sent on the broadcast MGID
    » ARP reply is unicast back and contains the GID and QPN
  – IPv6
    » Neighbor discovery using the all-IP-hosts multicast address
    » Existing RFCs
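For reference, the 20-octet IPoIB link-layer address carried in that ARP reply packs the flags and 24-bit QPN into the first four octets, followed by the 16-byte GID; a rough C rendering (the struct and helper names are ours):

    #include <stdint.h>

    /* Rough layout of the 20-octet IPoIB hardware address: flags plus a
     * 24-bit QPN in the first four octets, then the 16-byte GID that
     * identifies the destination port. */
    struct ipoib_hw_addr {
        uint8_t qpn_and_flags[4];   /* flag bits high, QPN in the low 24 bits */
        uint8_t gid[16];            /* destination port GID */
    };

    /* Extract the 24-bit QPN from the first four octets. */
    static inline uint32_t ipoib_qpn(const struct ipoib_hw_addr *a)
    {
        return ((uint32_t)a->qpn_and_flags[1] << 16) |
               ((uint32_t)a->qpn_and_flags[2] << 8)  |
                (uint32_t)a->qpn_and_flags[3];
    }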

• Summary
  – Feels like Ethernet with a 2KB MTU
  – Doesn’t utilize most of Infiniband’s custom hardware
    » e.g., SAR, Reliable Transport, Zero Copy, RDMA Reads/Writes, Kernel Bypass
    » SDP is the enhanced version

Sockets Direct Protocol (SDP)

• Based on Microsoft’s Winsock Direct Protocol
• SDP Feature Summary
  – Maps sockets SOCK_STREAM to RDMA semantics (see the sketch below)
  – Optimizations for transaction-oriented protocols
  – Optimizations for mixing of small and large messages
• Uses advanced Infiniband features
  – Reliable Connected (RC) service
  – Uses RDMA Writes, Reads, and Sends
  – Supports Automatic Path Migration
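In the OpenFabrics Linux implementation an application can get this mapping transparently (by preloading libsdp to intercept stream sockets) or explicitly by asking for the SDP address family. A hedged sketch of the explicit form, assuming the AF_INET_SDP convention used by that implementation (the value 27 is the one commonly defined on Linux and is an assumption here):

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #ifndef AF_INET_SDP
    #define AF_INET_SDP 27   /* assumed value used by the Linux OpenFabrics SDP stack */
    #endif

    /* Open a stream socket over SDP instead of TCP and connect to a peer.
     * Addressing stays ordinary IPv4; the transport underneath is IB RC + RDMA. */
    static int sdp_connect_example(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET_SDP, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;          /* address format remains AF_INET */
        sa.sin_port   = htons(port);
        inet_pton(AF_INET, ip, &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* read()/write()/send()/recv() now run over SDP */
    }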

SDP Terminology

• Data Source – the side of the connection that is sourcing the ULP data to be transferred
• Data Sink – the side of the connection that is receiving (sinking) the ULP data
• Data Transfer Mechanism – used to move ULP data from Data Source to Data Sink (e.g., Bcopy, Receiver Initiated Zcopy, Read Zcopy)
• Flow Control Mode – the state that the half connection is currently in (Combined, Pipelined, Buffered)
• Bcopy Threshold – if the message length is under the threshold, use the Bcopy mechanism; the threshold is locally defined

SDP Modes

• Flow Control Modes restrict data transfer mechanisms

• Buffered Mode
  – Used when the receiver wishes to force all transfers to use the Bcopy mechanism
• Combined Mode
  – Used when the receiver is not pre-posting buffers and uses the peek/select interface (Bcopy or Read Zcopy, only one outstanding)
• Pipelined Mode
  – Highly optimized transfer mode: multiple write or read buffers outstanding; can use all data transfer mechanisms (Bcopy, Read Zcopy, Receive Initiated Write Zcopy)

SDP Terminology

[Figure: SDP data paths between a Data Source and a Data Sink over an Infiniband Reliable Connection (RC). The buffer-copy path stages user data through a fixed-size SDP private buffer pool; the zero-copy path moves data directly between the user buffers through the Channel Adapters (CAs).]

• Enables buffer-copy when
  – Transfer is short
  – Application needs buffering
• Enables zero-copy when
  – Transfer is long (see the threshold sketch below)

Network File System (NFS)

• Network File System (NFS) is a protocol originally developed by Sun Microsystems in 1984 and defined in RFCs 1094, 1813, and 3530 (which obsoletes 3010), as a distributed file system that allows a computer to access files over a network as easily as if they were on its local disks.
  – NFS is one of many protocols built on the Open Network Computing Remote Procedure Call system (ONC RPC)
• Version 2 of the protocol
  – Originally operated entirely over UDP and was meant to keep the protocol stateless, with locking (for example) implemented outside of the core protocol
• Version 3 added:
  – Support for 64-bit file sizes and offsets, to handle files larger than 4GB
  – Support for asynchronous writes on the server, to improve write performance
  – Additional file attributes in many replies, to avoid the need to refetch them
  – A READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory
  – Assorted other improvements

Network File System (NFS)

• Version 4 (RFC 3530) – Influenced by AFS and CIFS, includes performance improvements, mandates strong security, and introduces a stateful protocol. Version 4 was the first version developed with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over the development of the NFS protocols.

• Various side-band protocols have been added to NFS, including:
  – The byte-range advisory Network Lock Manager (NLM) protocol, which was added to support System V UNIX file-locking APIs
  – The remote quota reporting (RQUOTAD) protocol, which allows NFS users to view their data storage quotas on NFS servers

• WebNFS is an extension to Version 2 and Version 3 which allows NFS to be more easily integrated into Web browsers and to enable operation through firewalls.

SCSI RDMA Protocol (SRP)

• SRP defines a SCSI protocol mapping onto the InfiniBand Architecture and/or functionally similar cluster protocols
• The RDMA Consortium voted to create iSER instead of porting SRP to IP
  – SRP doesn’t have a wide following
  – SRP doesn’t have a discovery or management protocol
  – Version 2 of SRP hasn’t been updated for 1.5 years

iSCSI Extensions for RDMA (iSER)

• iSER combines SRP and iSCSI with new RDMA capabilities
• iSER is maintained as part of iSCSI in the IETF
  – Recently extended to IB by IBM, Voltaire, HP, EMC, and others
• Benefits of adding iSER to IB
  – Keeps (almost) the same storage protocol across all RDMA networks
    » Easier to train staff
    » Bridging products are more straightforward
    » Moves the storage community toward an iSCSI/iSER mentality and may help with acceptance on IP
  – Desire for a common Discovery and Management protocol across iSCSI, iSER/iWARP, and IP
    » i.e., the same management and discovery process and software to handle IP networks and IB networks

iSCSI Extensions for RDMA (iSER)

• iSCSI’s main performance deficiencies stem from TCP/IP
  – TCP is a complex protocol requiring significant processing
  – Stream based, making it hard to separate data and headers
  – Requires copies that increase latency and CPU overhead
  – Uses checksums that require additional CRCs in the ULP
• iSER eliminates these bottlenecks through:
  – Zero copy using RDMA
  – CRC calculated by hardware
  – Working with message boundaries instead of streams
  – Transport protocol implemented in hardware (minimal CPU cycles per I/O)

iSCSI Extensions for RDMA (iSER)

• iSER leverages iSCSI management, discovery, and RAS
  – Zero-configuration, discovery, and a global storage name server (SLP, iSNS)
  – Change notifications and active monitoring of devices and initiators
  – High availability, and 3 levels of automated recovery
  – Multi-pathing and storage aggregation
  – Industry-standard management interfaces (MIB)
  – 3rd-party storage managers
  – Security: partitioning, authentication, central login control, etc.
• Working with iSER over IB doesn’t require any changes
  – Focused effort from both communities
• More advanced than SRP

iSCSI Extensions for RDMA (iSER)

• iSCSI specification
  – http://www.ietf.org/rfc/rfc3720.txt
• iSER and DA Introduction
  – http://www.rdmaconsortium.org/home/iSER_DA_intro.pdf
• iSER specification
  – http://www.ietf.org/internet-drafts/draft-ietf-ips-iser-05.txt
• iSER over IB Overview
  – http://www.haifa.il.ibm.com/satran/ips/iSER-in-an-IB-network-V9.pdf

Reliable Datagram Sockets (RDS)

• Goals
  – Provide a reliable datagram service
    » Performance
    » Scalability
    » High availability
    » Simplified application code
  – Maintain the sockets API (see the sketch below)
    » Application code portability
    » Faster time-to-market
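A hedged sketch of what "reliable datagrams behind the plain sockets API" looks like from the application side, using the PF_RDS address family as it later appeared in upstream Linux; the family constant and header names in the 2006-era OpenIB tree may differ:

    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>

    #ifndef PF_RDS
    #define PF_RDS 21    /* value used by later upstream Linux kernels; an assumption here */
    #endif

    /* Send one reliable datagram.  The socket is connectionless at the API
     * level: every sendmsg() names its destination, and RDS supplies the
     * reliable node-to-node connection underneath. */
    static ssize_t rds_send_example(int fd, struct sockaddr_in *dest,
                                    const void *data, size_t len)
    {
        struct iovec iov = { .iov_base = (void *)data, .iov_len = len };
        struct msghdr msg = { 0 };

        msg.msg_name    = dest;            /* destination IP + port, like UDP */
        msg.msg_namelen = sizeof(*dest);
        msg.msg_iov     = &iov;
        msg.msg_iovlen  = 1;

        return sendmsg(fd, &msg, 0);
    }

    /* Usage: fd = socket(PF_RDS, SOCK_SEQPACKET, 0); bind(fd, ...local addr...);
     * then rds_send_example(fd, &dest, buf, n). */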

Reliable Datagram Sockets (RDS)

Stack Overview

[Figure: RDS stack overview. User space: socket applications, Oracle 10g, and UDP applications. Kernel space: TCP, UDP, SDP, and RDS, with TCP/UDP running over IP and IPoIB, all layered over the OpenIB Access Layer and the Host Channel Adapter.]

Reliable Datagram Sockets (RDS)

• Application connectionless
  – RDS maintains node-to-node connections
  – IP addressing
  – Uses the CMA
  – On-demand connection setup
    » Connect on first sendmsg() or data recv
    » Disconnect on error or policy such as inactivity
  – Connection setup/teardown transparent to applications

Reliable Datagram Sockets (RDS)

• Data and Control Channel
  – Uses RC QP for node-level connections
  – Data and Control QPs per session
  – Selectable MTU
  – b-copy send/recv
  – H/W flow control

The End

RDS - Send

• Connection established on first send

• sendmsg() – allows send pipelining

• ENOBUFS returned if there are insufficient send buffers; the application retries (see the sketch below)
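A small sketch of that retry pattern, assuming the errno is the standard ENOBUFS and using an illustrative backoff:

    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Keep retrying an RDS sendmsg() while the send-buffer pool is exhausted.
     * Backing off briefly lets completions drain and buffers return. */
    static ssize_t rds_send_retry(int fd, struct msghdr *msg)
    {
        for (;;) {
            ssize_t n = sendmsg(fd, msg, 0);
            if (n >= 0 || errno != ENOBUFS)
                return n;          /* success, or a real error */
            usleep(1000);          /* crude backoff; a real app might poll/select */
        }
    }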

RDS - Receive

• Identical to UDP recvmsg() – similar blocking/non-blocking behavior

• “Slow” receiver ports are stalled at the sender side
  – A combination of activity (LRU) and memory utilization is used to detect slow receivers
  – sendmsg() to a stalled destination port returns EWOULDBLOCK; the application can retry
    » A blocking socket can wait for the unblock
  – recvmsg() on a stalled port un-stalls it (see the sketch below)
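The receive side looks just like UDP; a sketch (buffer handling is illustrative):

    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>

    /* Receive one RDS datagram, capturing the sender's address as UDP does.
     * On a non-blocking socket this returns -1/EWOULDBLOCK when nothing is
     * queued; calling it on a stalled port also allows the sender to resume. */
    static ssize_t rds_recv_example(int fd, void *buf, size_t len,
                                    struct sockaddr_in *from)
    {
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        struct msghdr msg = { 0 };

        msg.msg_name    = from;
        msg.msg_namelen = sizeof(*from);
        msg.msg_iov     = &iov;
        msg.msg_iovlen  = 1;

        return recvmsg(fd, &msg, 0);
    }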

RDS - High Availability (failover)

• Use of RC and on-demand connection setup allows HA
  – Connection setup/teardown transparent to applications
  – Every sendmsg() could “potentially” result in a connection setup
  – If a path fails, the connection is torn down; the next send can connect on an alternate path (different port or different HCA)

Preliminary performance RDS on OpenIB

netperf (UDP_STREAM)

[Chart: throughput (Mbits/sec, up to ~4000) vs. message size (2k–64K bytes) for UDP over GbE, UDP over IPoIB (send and recv), and RDS (send = recv). Test system: dual 2.4 GHz Xeon, 2 GB memory, 4x PCI-X HCA. For reference, SDP reaches ~3700 Mb/sec on TCP_STREAM.]

Preliminary performance RDS on OpenIB

netperf (UDP_STREAM)

[Chart: throughput (Mbits/sec) vs. message size (2k–64K bytes) for UDP over GbE, UDP over IPoIB (recv), and RDS (send = recv), on the same test system (dual 2.4 GHz Xeon, 2 GB memory, 4x PCI-X HCA); SDP reference ~3700 Mb/sec on TCP_STREAM.]

Preliminary performance RDS on OpenIB

Latency

[Chart: latency (usec) vs. message size (bytes) for UDP over GigE, UDP over IPoIB, and RDS.]
