Implementation and Evaluation of iSCSI over RDMA
Ethan Burns and Robert Russell

16 pages, PDF, 1020 KB

{eaburns,rdr}@iol.unh.edu
University of New Hampshire InterOperability Laboratory
121 Technology Drive, Suite 2, Durham, NH 03824-4716

Goal
- Create an iSCSI implementation that makes use of Remote Direct Memory Access (iWARP) with the iSER extensions.
- Evaluate the performance of the implementation.

SCSI
- Small Computer System Interface
- Architecture for connecting peripheral devices to computers
- Client/server: the Initiator (client) and the Target (server)
- Traditionally an internal parallel SCSI bus
- Limitations on the number of devices and on cable length

iSCSI
- Internet Small Computer System Interface (RFC 3720, 2004)
- A solution to the scalability issues of traditional SCSI
- A transport for SCSI commands and data over TCP/IP
- Two phases:
  - Login Phase, for negotiating connection parameters
  - Full Feature Phase, for data transfer

RDMA
- Remote Direct Memory Access
- With 10GigE, the CPU typically becomes the bottleneck: data copying, network interrupts, packet processing
- Zero-copy data transfers
- Offloads network processing
- Makes full utilization of a 10GigE link
- The iWARP protocol suite provides RDMA over TCP/IP: RFC 5040, RFC 5041, RFC 5044 (all 2007), and others

iSER
- iSCSI Extensions for RDMA (RFC 5046, 2007)
- Allows iSCSI to use RDMA hardware
- Translates and encapsulates iSCSI over RDMA
- Transitions from streaming TCP to RDMA-enabled operation:
  - Negotiate the use of iSER during the iSCSI login (negotiation) phase
  - Transition to RDMA mode before the iSCSI data transfer phase

Implementation
- Extend UNH-iSCSI to support the iSER extensions
  - A set of Linux kernel modules
  - Created and supported at UNH
- Use the OpenFabrics Alliance stack
  - Access to RDMA hardware
  - Included in the Linux kernel
  - Provides a user-space API
- Create both a kernel-space and a user-space solution

Issues Uncovered
- Current RDMA hardware does not support TCP stream transitioning:
  - The connection must be brought up in RDMA mode
  - There is no run-time selection between iSER and traditional iSCSI
  - Additional iSER operational primitives are needed for connection establishment
- The standard iSER header for iWARP does not contain fields for all data required by current hardware; we added additional iSER header fields to advertise the missing information

Evaluation
- Test setup: MEMORYIO mode (on the target); four 2.6GHz Intel 64-bit cores; 4GB main memory; Chelsio R310E-CXA 10 Gigabit Ethernet iWARP adapters
- [Figures: throughput in Megabits/second versus transfer size from 0.1 to 10 Megabytes, for kernel-space iSCSI reads, kernel-space iSCSI writes, and user-space iSCSI writes over 10 Gigabit Ethernet, each comparing iSER-assisted iSCSI over iWARP/TCP against traditional (unassisted) iSCSI over TCP; the theoretical maximum RDMA throughput is 9363 Megabits/sec]

Future Work
- Further performance evaluation: response time, CPU utilization
- Further comparisons: InfiniBand, TCP offloading, iSCSI offloading
- iSCSI parameters: immediate/unsolicited data, multiple outstanding commands, multiple connections

Questions
- Source available at: http://sourceforge.net/projects/unh-iscsi
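The iSER negotiation happens through the standard iSCSI login-phase key=value text exchange. A minimal sketch of that handshake follows; the helper and the second key are illustrative (not the UNH-iSCSI API), while RDMAExtensions is the boolean key RFC 5046 defines for enabling iSER:

```python
def negotiate(initiator_keys, target_caps):
    """Toy model of iSCSI login-phase key negotiation (RFC 3720 style).

    Both sides exchange key=value text pairs; a transition to iSER/RDMA
    mode happens only if both sides offer and accept it before the
    Full Feature Phase begins.
    """
    agreed = {}
    for key, offered in initiator_keys.items():
        accepted = target_caps.get(key)
        if accepted is None:
            agreed[key] = "Reject"
        elif key == "RDMAExtensions":  # iSER enable key from RFC 5046
            # A boolean key resolves to Yes only if both sides say Yes.
            agreed[key] = "Yes" if offered == accepted == "Yes" else "No"
        else:
            agreed[key] = accepted
    return agreed

result = negotiate({"RDMAExtensions": "Yes", "MaxRecvDataSegmentLength": "8192"},
                   {"RDMAExtensions": "Yes", "MaxRecvDataSegmentLength": "8192"})
```

Because current iWARP hardware cannot transition a live TCP stream, the implementation described above must decide on RDMA mode at connection setup rather than after this exchange completes.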
Recommended publications
  • Break Through the TCP/IP Bottleneck with iWARP
    Sweta Bhatt and Prashant Patel. ED Online ID #21970, October 22, 2009. Copyright © Penton Media, Inc. Using a high-bandwidth, low-latency network solution in the same network infrastructure provides good insurance for next-generation networks. The online economy, particularly e-business, entertainment, and collaboration, continues to dramatically and rapidly increase the amount of Internet traffic to and from enterprise servers. Most of this data goes through the transmission control protocol/Internet protocol (TCP/IP) stack and Ethernet controllers. As a result, Ethernet controllers are experiencing heavy network traffic, which requires more system resources to process network packets. The CPU load increases linearly as a function of network packets processed, diminishing the CPU's availability for other applications. Because TCP/IP consumes a significant amount of the host CPU's processing cycles, a heavy TCP/IP load may leave few system resources available for other applications. Techniques for reducing the demand on the CPU and lowering the system bottleneck, though, are available. iWARP solutions: although researchers have proposed many mechanisms and theories for parallel systems, only a few have resulted in working computing platforms. One of the latest to enter the scene is the Internet Wide Area RDMA Protocol, or iWARP, a joint project by Carnegie Mellon University Corp.
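The article's point that CPU load scales linearly with packets processed can be made concrete with a back-of-the-envelope model; the per-packet cycle cost and clock rate below are illustrative assumptions, not figures from the article:

```python
def cpu_utilization(link_gbps, pkt_bytes, cycles_per_pkt, cpu_hz):
    """Fraction of one CPU consumed by per-packet protocol processing.

    Assumes the link is saturated and every packet costs a fixed number
    of cycles (interrupt handling, TCP/IP processing, copying), so load
    grows linearly with the packet rate.
    """
    pkts_per_sec = (link_gbps * 1e9 / 8) / pkt_bytes
    return pkts_per_sec * cycles_per_pkt / cpu_hz

# Illustrative: 10GigE, 1500-byte frames, 5000 cycles/packet, 2.6GHz core.
util = cpu_utilization(10, 1500, 5000, 2.6e9)
```

Under these assumed numbers the result exceeds 1.0, i.e. a single core cannot keep up with a saturated 10GigE link, which is exactly the bottleneck iWARP offload targets.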
  • iSER: Frequently Asked Questions
    The iSCSI Extensions for RDMA, or iSER, protocol is an iSCSI translation layer for operation over RDMA transports, such as InfiniBand or iWARP/Ethernet. The most unexpected fact about iSER is that, despite its name, it is not compatible with iSCSI and does not interoperate with the large iSCSI installed base. This FAQ clarifies this and other questions associated with this protocol. Is iSER compatible with iSCSI? No: iSER is effectively an emulation layer that translates iSCSI to RDMA transactions that cannot be understood by non-RDMA peers. iSER transports iSCSI control messages in RDMA SEND/RECV commands and iSCSI payload in RDMA READ/WRITE operations, altering the wire formats in the process. Figure 1 shows the protocol layering for iSCSI vs. iSER, revealing the number of additional layers introduced by the latter and the resulting incompatibility on the wire. (Figure 1: iSCSI and iSER protocol layers.) Does iSER interoperate with iSCSI peers? No: an iSER client or server can only communicate with other iSER end nodes. Interoperating in this case means disabling the iSER extensions, and losing offload support and all performance benefits. Does iSER work with software peers? No: iSER requires RDMA-enabled hardware on both ends of the communication. In contrast, iSCSI hardware implementations are not only fully interoperable with software peers, but also preserve all the local performance benefits regardless of the peer type. Does iSER provide good performance? Maybe: iSER can provide improved efficiencies at large I/O sizes, but benchmarks show degraded performance at small I/O sizes due to the additional overheads introduced.
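The translation the FAQ describes, with control messages on RDMA SEND/RECV and payload on RDMA READ/WRITE, can be sketched as a simple dispatch. The PDU names and the routing function are illustrative simplifications, not the wire protocol itself:

```python
# Toy model of how an iSER layer routes iSCSI traffic onto RDMA
# primitives: control messages ride SEND/RECV, bulk data rides
# one-sided READ/WRITE driven by the target.
CONTROL_PDUS = {"login_request", "scsi_command", "task_mgmt", "logout_request"}

def route(pdu_type, direction):
    """Return the RDMA operation an iSER layer would use for this PDU.

    direction: 'read' for target-to-initiator data,
               'write' for initiator-to-target data.
    """
    if pdu_type in CONTROL_PDUS:
        return "RDMA_SEND"  # paired with a posted RECV on the peer
    if pdu_type == "data":
        # The target moves SCSI data with one-sided operations into or
        # out of buffers the initiator advertised.
        return "RDMA_WRITE" if direction == "read" else "RDMA_READ"
    raise ValueError(f"unknown PDU type: {pdu_type}")
```

This dispatch is also why iSER cannot interoperate with plain iSCSI peers: the bytes on the wire are RDMA operations, not iSCSI PDUs in a TCP stream.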
  • Designing High-Performance and Scalable Clustered Network Attached Storage with InfiniBand
    Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University, by Ranjit Noronha, MS, 2008. Dissertation committee: Dhabaleswar K. Panda (adviser), Ponnuswammy Sadayappan, Feng Qin. Graduate Program in Computer Science and Engineering. Abstract: The Internet age has exponentially increased the volume of digital media that is being shared and distributed. Broadband Internet has made technologies such as high-quality streaming video on demand possible. Large-scale supercomputers also consume and create huge quantities of data. This media and data must be stored, cataloged, and retrieved with high performance. Researching high-performance storage subsystems to meet the I/O demands of applications in modern scenarios is crucial. Advances in microprocessor technology have given rise to relatively cheap off-the-shelf hardware that may be put together as personal computers as well as servers. The servers may be connected together by networking technology to create farms or clusters of workstations (COWs). The evolution of COWs has significantly reduced the cost of ownership of high-performance clusters and has allowed users to build fairly large-scale machines based on commodity server hardware. As COWs have evolved, networking technologies like InfiniBand and 10 Gigabit Ethernet have also evolved. These networking technologies not only give lower end-to-end latencies, but also allow for better messaging throughput between the nodes. This allows us to connect the clusters with high-performance interconnects at a relatively lower cost.
  • How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics
    John Kim, Mellanox; David Fair, Intel. January 26, 2016. Presenters: John F. Kim, Director, Storage Marketing, Mellanox Technologies; David Fair, Chair, SNIA-ESF, Ethernet Networking Marketing Manager, Intel Corp. SNIA legal notice: the material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced in their entirety without modification, and the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney, and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion, please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK. Agenda: how RDMA fabrics fit into NVMe over Fabrics; RDMA explained and how it benefits NVMe/F; verbs, the lingua franca
  • 100G iSCSI - A Bright Future for Ethernet Storage
    Tom Reu, Consulting Application Engineer, Chelsio Communications. 2016 Data Storage Innovation Conference. © Chelsio Communications. All Rights Reserved. Presentation outline: company overview, iSCSI overview, iSCSI and iSER innovations, summary. iSCSI timeline: RFC 3720 in 2004; the latest RFC, 7143, in April 2014. iSCSI was designed for Ethernet-based Storage Area Networks, providing data protection, performance, low latency, and flow control, and is the leading Ethernet-based SAN technology, with in-boxed initiators and plug-and-play operation. It closely tracks Ethernet speeds: 10GbE (IEEE 802.3ae, 2002) was followed by the first 10Gbps hardware iSCSI in 2004 (Chelsio); 40/100GbE (IEEE 802.3ba, 2010) by the first 40Gbps hardware iSCSI in 2014 (Chelsio); and the first 100Gbps hardware is available in Q3/Q4 2016, giving increasingly high bandwidth. iSCSI trends: iSCSI growth, with FC in secular decline and FCoE struggling with limitations; hardware-offloaded 40Gb/s (soon 50Gb/s and 100Gb/s) aligns with the migration from spindles to NVRAM and unlocks the potential of new low-latency, high-speed SSDs; Ethernet flexibility allows iSCSI on both front- and back-end networks; convergence brings block-level and file-level access in one device using a single Ethernet controller; native iSCSI initiator support in all major OS/hypervisors simplifies storage virtualization; converged adapters with RDMA over Ethernet and iSCSI consolidate front- and back-end storage fabrics. iSCSI overview: high performance through zero-copy DMA on both ends, hardware TCP/IP offload, and hardware iSCSI processing; data protection through CRC-32 for the header and CRC-32 for the payload, with no overhead when offloaded in hardware. Why use TCP? It is a reliable protection protocol: retransmit of lost/corrupted packets, guaranteed in-order delivery, congestion control, and automatic acknowledgment. 2016 Data Storage Innovation Conference.
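The data-protection bullets refer to iSCSI's optional header and data digests, which RFC 3720 specifies as CRC32C (the Castagnoli polynomial). A bit-at-a-time reference implementation, far slower than the hardware offload the presentation advocates but producing the same value:

```python
def crc32c(data: bytes) -> int:
    """Bit-at-a-time CRC32C (Castagnoli), reflected polynomial 0x82F63B78.

    This is the checksum iSCSI uses for its optional header and data
    digests; a hardware-offloaded HBA computes the same value at line
    rate with no CPU overhead.
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check input for CRC algorithms: the nine ASCII digits.
checksum = crc32c(b"123456789")
```

The check value for b"123456789" under CRC-32C is the well-known constant 0xE3069283, which makes it easy to validate any faster table-driven or hardware implementation against this one.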
  • Persistent Memory over Fabrics: An Application-Centric View
    Paul Grun, Cray Inc., OpenFabrics Alliance Vice Chair. © 2017 SNIA Persistent Memory Summit. All Rights Reserved. Agenda: OpenFabrics Alliance intro; OpenFabrics software; introducing OFI, the OpenFabrics Interfaces project; OFI framework overview (framework, providers); delving into data storage and data access; three use cases; a look at persistent memory. The OpenFabrics Alliance (OFA) is an open source-based organization that develops, tests, licenses, supports, and distributes OpenFabrics Software (OFS). The Alliance's mission is to develop and promote software that enables maximum application efficiency by delivering wire-speed messaging, ultra-low latencies, and maximum bandwidth directly to applications with minimal CPU overhead. https://openfabrics.org/index.php/organization.html Selected statistics: founded in 2004. Leadership: Susan Coulter, LANL (Chair); Paul Grun, Cray Inc. (Vice Chair); Bill Lee, Mellanox Inc. (Treasurer); Chris Beggio, Sandia (Secretary, acting); 14 active directors/promoters (Intel, IBM, HPE, NetApp, Oracle, Unisys, national labs, and others). Major activities: 1. develop and support open source network stacks for high-performance networking (OpenFabrics Software, OFS); 2. an interoperability program (in concert with the University of New Hampshire InterOperability Lab); 3. an annual workshop, March 27-31, Austin, TX. Technical working groups: OFIWG,
  • User's Guide: Converged Network Adapters (41000 Series)
    Marvell® Converged Network Adapters, 41000 Series Adapters, User's Guide. Third-party information brought to you courtesy of Dell. Doc. No. AH0054602-00 Rev. X, January 29, 2021. THIS DOCUMENT AND THE INFORMATION FURNISHED IN THIS DOCUMENT ARE PROVIDED "AS IS" WITHOUT ANY WARRANTY. MARVELL AND ITS AFFILIATES EXPRESSLY DISCLAIM AND MAKE NO WARRANTIES OR GUARANTEES, WHETHER EXPRESS, ORAL, IMPLIED, STATUTORY, ARISING BY OPERATION OF LAW, OR AS A RESULT OF USAGE OF TRADE, COURSE OF DEALING, OR COURSE OF PERFORMANCE, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. This document, including any software or firmware referenced in this document, is owned by Marvell or Marvell's licensors and is protected by intellectual property laws. No license, express or implied, to any Marvell intellectual property rights is granted by this document. The information furnished in this document is provided for reference purposes only for use with Marvell products. It is the user's own responsibility to design or build products with this information. Marvell products are not authorized for use as critical components in medical devices, military systems, life or critical support devices, or related systems. Marvell is not liable, in whole or in part, and the user will indemnify and hold Marvell harmless, for any claim, damage, or other liability related to any such use of Marvell products. Marvell assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use.
  • Porting LibRIPC to iWARP
    Bachelor's thesis by cand. B.Sc. Felix Fereydun Pepinghege, Faculty of Informatics, Karlsruhe Institute of Technology (KIT). First reviewer: Prof. Dr. Frank Bellosa; second reviewer: Prof. Dr. Hartmut Prautzsch; supervising staff member: Dr. Jan Stoess. Working period: 25 May 2012 to 24 September 2012. Declaration: I hereby declare that I have written this thesis independently, that I have used no sources or aids other than those indicated, and that I have observed the rules of the Karlsruhe Institute of Technology (KIT) for ensuring good scientific practice. Karlsruhe, 24 September 2012. Abstract: Cloud computing has become a major economic factor in the recent development of computer systems. Companies tend to draw computational power for their data processing from the cloud, instead of hosting their own servers, in order to save costs. The providers of cloud systems run huge and cost-efficient data centers, profiting from economies of scale. In order to further reduce costs, these data centers are currently moving to more power-efficient systems. Many applications that are now used in the cloud were not created with a cloud environment in mind, but have been "moved" to the cloud. These applications usually use TCP/IP for their intercommunication mechanisms, which is the de facto standard for current applications in the Internet. Unfortunately, these TCP/IP-based implementations rely on the Berkeley socket API, which does not match the demands of power-efficient systems. Sockets introduce much CPU involvement, taking away precious computational time from the real applications. Several specialized network architectures, such as InfiniBand, overcome this issue.
  • An Overview of RDMA over IP
    Allyn Romanow, Cisco Systems, San Jose, CA 95134 USA; Stephen Bailey, Sandburst Corporation, Andover, MA 01810 USA. Abstract: This paper gives an overview of Remote Direct Memory Access (RDMA) over IP. The first part of the paper is taken from an internet draft [RMTB02] that describes the problem of high system costs due to network I/O copying in end-hosts at high speeds. A review of experience and research over the last ten or so years shows that the problem is due to limits on available memory bandwidth and the prohibitive cost of additional memory bandwidth, and that it can be substantially improved using copy avoidance. The second part of the paper considers an RDMA over IP solution, giving background and current status. An architecture is described that is currently being adopted by the RDMA Consortium and the RDDP WG in the IETF. Outstanding issues, some of a research nature, are considered. 1 Introduction: This paper considers RDMA over IP. Much of the paper is taken from the internet draft, draft-rddp-problem-statement-00.txt [RMTB02], by the authors, Jeff Mogul, and Tom Talpey. While the internet draft describes the problem and reviews the literature, it also motivates why the topic should be in the IETF, and the draft is constrained as a document of an IETF working group. This workshop paper covers the same ground with respect to describing the problem and reviewing relevant literature; in fact, we have changed the IETF draft text very little.
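The memory-bandwidth limit the abstract describes can be illustrated with simple arithmetic. The three-traversal accounting below is a common simplification of a receive path with one copy, used here purely as an illustration:

```python
def memory_traffic(link_gbps, traversals):
    """Memory-bus traffic in GB/s generated by receiving at link rate.

    A conventional receive path touches each byte several times: a NIC
    DMA write into a kernel buffer, a CPU read of that buffer, and a
    CPU write into the application buffer (traversals=3). Copy
    avoidance via RDMA reduces this to the single DMA write
    (traversals=1).
    """
    return (link_gbps / 8) * traversals

copied = memory_traffic(10, 3)  # conventional stack, one copy: 3.75 GB/s
rdma = memory_traffic(10, 1)    # zero-copy direct placement: 1.25 GB/s
```

At 10Gb/s the copying path triples the memory traffic of the zero-copy path, which is why the paper argues the bottleneck is memory bandwidth rather than raw CPU speed.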
  • RoCE vs. iWARP: A Great Storage Debate
    Live webcast, August 22, 2018, 10:00 am PT. Presenters: John Kim, SNIA ESF Chair, Mellanox; Tim Lustig, Mellanox; Fred Zhang, Intel. © 2018 Storage Networking Industry Association. All Rights Reserved. The presentation carries the standard SNIA legal notice on use of its copyrighted material. Agenda: introductions (John Kim, moderator); what is RDMA?; technology introductions (RoCE: Tim Lustig, Mellanox Technologies; iWARP: Fred Zhang, Intel Corporation); similarities and differences; use cases; challenge topics (performance, manageability, security, cost, etc.)
  • A Study of iSCSI Extensions for RDMA (iSER)
    Mallikarjun Chadalapaka, Hewlett-Packard Company; Uri Elzur, Broadcom; Michael Ko, IBM; Hemal Shah, Intel Corporation; Patricia Thaler, Agilent Technologies. Abstract: The iSCSI protocol is the IETF standard that maps the SCSI family of application protocols onto TCP/IP, enabling convergence of storage traffic onto standard TCP/IP fabrics. The ability to efficiently transfer and place the data on TCP/IP networks is crucial for this convergence of storage traffic. The iWARP protocol suite provides Remote Direct Memory Access (RDMA) semantics over TCP/IP networks and enables efficient memory-to-memory data transfers over an IP fabric. The iSCSI Extensions for RDMA (iSER) is a protocol that maps the iSCSI protocol over the iWARP protocol suite. This paper analyzes some of the key challenges faced in designing and integrating iSER into the iWARP framework while meeting the expectations of the iSCSI protocol. As part of this, the paper discusses the key tradeoffs and design choices in the iSER wire protocol, and the role the design of the iSER protocol played in evolving the RNIC (RDMA-enabled Network Interface Controller) architecture and the functionality of the iWARP protocol suite. The organization of the rest of the paper is as follows: Section 2 provides an overview of the iSCSI protocol and the iWARP protocol suite.
  • TR-4684: Implementing and Configuring Modern SANs with NVMe/FC
    Technical report by Michael Peppers and Martin George, NetApp, June 2021. TR-4684, Version 6. Abstract: NVM Express (NVMe) is a data storage protocol that delivers the fastest response times for business-critical enterprise applications. However, NVMe is more than a storage specification; the broader NVMe over Fabrics (NVMe-oF) protocol encompasses the entire data path, from server to network to storage system. This technical report focuses on building modern SANs on NVMe over Fabrics, especially NVMe over Fibre Channel (NVMe/FC, also known as FC-NVMe). The report describes the industry-first NVMe/FC implementation in NetApp® ONTAP® data management software, includes guidelines for SAN design and implementation, and provides links to testing and qualification recommendations.