NVMe™ and NVMe™ over Fabrics (NVMe-oF™). J Metz, Ph.D., R&D Engineer, Advanced Storage; Board of Directors, NVM Express, Inc.

BRKDCN-2494: Deep Dive into NVMe™ and NVMe™ over Fabrics (NVMe-oF™)
J Metz, Ph.D., R&D Engineer, Advanced Storage; Board of Directors, NVM Express, Inc.; Board of Directors, SNIA; Board of Directors, FCIA; @drjmetz

Agenda
• Introduction
• Who is NVM Express?
• Storage Concepts
• What is NVMe™
• NVMe Operations
• The Anatomy of NVM Subsystems
• Queuing and Queue Pairs
• NVMe over Fabrics (NVMe-oF™)
• How NVMe-oF Works
• NVMe-oF™/RDMA
• NVMe-oF™/FC
• NVMe-oF™/TCP (Future)
• What's New in NVMe 1.3
• Additional Resources

What This Presentation Is… and Is Not
• What it is:
  • A technology conversation
  • A deep dive (We're going in, Jim!)
• What it is not:
  • A product conversation
  • Comprehensive and exhaustive

Goals
At the end of this presentation you should know:
• What NVMe is and why it is important
• How NVMe is extended for remote access over a network (i.e., "Fabrics")
• The different types of fabrics and their differences
• Some of the differences between traditional SCSI-based storage solutions and NVMe-based solutions
• What's new in NVMe and NVMe over Fabrics

Prerequisites
• You really should know:
  • Basics of block-based storage
  • Basic terminology (initiator, target)
• Helpful to know:
  • Basic PCIe semantics
  • Some storage networking

Note: Screenshot Warning!
• Get your screenshots ready when you see this symbol; it flags useful extra information about a topic, URLs, etc.

What We Will Not Cover (In Detail)
• NVMe-MI (Management)
• NVMe-KV (Key Value)
• Protocol data protection features
• Advances in NVMe features
• RDMA verbs
• Fibre Channel exchanges
• New form factor designs

Who is NVM Express?
NVM Express, Inc. is the organization of 125+ companies defining NVMe together.
• Board of Directors: 13 elected companies, stewards of the technology and driving processes. Chair: Amber Huffman.
• Marketing Subcommittee: NVMexpress.org, webcasts, tradeshows, social media, and press. Co-Chairs: Janene Ellefson and Jonmichael Hands.
• Technical Workgroup: base specification and NVMe over Fabrics. Chair: Amber Huffman.
• Management I/F Workgroup: out-of-band management over PCIe® VDM and SMBus. Chair: Peter Onufryk; Vice-Chair: Austin Bolen.
• Interop (ICC) Workgroup: interop and conformance testing in collaboration with UNH-IOL. Chair: Ryan Holmqvist.

About NVM Express (The Technology)
• NVM Express (NVMe™) is an open collection of standards and information to fully expose the benefits of non-volatile memory in all types of computing environments, from mobile to data center.
• NVMe™ is designed from the ground up to deliver high-bandwidth, low-latency storage access for current and future NVM technologies.
• NVM Express Base Specification: the register interface and command set for PCI Express attached storage, with industry-standard software available for numerous operating systems. NVMe™ is widely considered the de facto industry standard for PCIe SSDs. (A partial sketch of that register interface follows below.)
• NVM Express Management Interface (NVMe-MI™) Specification: the command set and architecture for out-of-band management of NVM Express storage (i.e., discovering, monitoring, and updating NVMe™ devices using a BMC).
• NVM Express over Fabrics (NVMe-oF™) Specification: the extension to NVM Express that enables tunneling the NVM Express command set over additional transports beyond PCIe. NVMe over Fabrics™ extends the benefits of efficient storage architecture at scale in the world's largest data centers by allowing the same protocol to extend over various networked interfaces. (A capsule-encapsulation sketch appears later in this section.)
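To make the Base Specification's "register interface" a little more concrete, here is a simplified, partial sketch of the PCIe BAR-mapped controller registers. The offsets track the NVMe base specification, but the struct itself, its field names, and the omission of most registers are illustrative assumptions for this write-up rather than material from the presentation.

```c
/* Simplified, partial sketch of the NVMe controller register map.
 * Offsets follow the NVMe base specification; many registers are omitted. */
#include <stdint.h>

struct nvme_bar_registers {
    uint64_t cap;      /* 0x00: Controller Capabilities                 */
    uint32_t vs;       /* 0x08: Version                                 */
    uint32_t intms;    /* 0x0C: Interrupt Mask Set                      */
    uint32_t intmc;    /* 0x10: Interrupt Mask Clear                    */
    uint32_t cc;       /* 0x14: Controller Configuration (enable, etc.) */
    uint32_t rsvd;     /* 0x18: reserved                                */
    uint32_t csts;     /* 0x1C: Controller Status (ready bit, etc.)     */
    uint32_t nssr;     /* 0x20: NVM Subsystem Reset (optional)          */
    uint32_t aqa;      /* 0x24: Admin Queue Attributes (queue sizes)    */
    uint64_t asq;      /* 0x28: Admin Submission Queue base address     */
    uint64_t acq;      /* 0x30: Admin Completion Queue base address     */
    /* ... submission/completion queue doorbells begin at offset 0x1000 ... */
};
```

A host driver maps this region from the device's PCIe BAR, sets up the admin queue addresses, enables the controller via CC, and then drives all I/O through the doorbell registers; the queueing sketch later in this section shows that side of the picture.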
NVMe™ Adoption – Industry
• NVMe™ is displacing SAS and SATA SSDs in server/PC markets
• PCIe NAND flash SSDs sit primarily inside servers
• Lower-latency Storage Class Memory (e.g., 3D XPoint™) SSDs are NVMe™-only
• Extensive client (e.g., laptop, tablet) use of smaller form factor SSDs: M.2 and BGA
• The NVMe™ ecosystem and recognition are growing quickly
• Many servers offer NVMe™ slots, in different server configurations and form factors
• Startups are already shipping NVMe™ and NVMe over Fabrics™ (NVMe-oF™) solutions
• Storage-class NVMe™ SSDs are emerging, enabling high availability (HA) in storage arrays
• NVMe-oF™ is emerging as a solution to the limited scale of PCIe as a fabric
• The ecosystem is expanding (e.g., analyzers, NVMe-oF™ adapters)

The Basics of Storage and Memory

The Anatomy of Storage
• There is a "sweet spot" for storage
• It depends on the workload and application type
• There is no "one size fits all"
• Understanding "where" the solution fits is critical to understanding "how" to put it together
• There are trade-offs between 3 specific forces: "you get, at best, 2 out of 3"
• NVMe goes here (pointing to its spot on the slide's trade-off graphic)

Storage Solutions - Where Do NVMe and NVMe-oF Fit?
• Different types of storage apply in different places
• NVMe is PCIe-based: local storage, in-server
• PCIe extensions (HBAs and switches) extend NVMe outside the server
• NVMe-oF inherits the characteristics of its underlying transport
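The NVMe over Fabrics specification introduced earlier is essentially about encapsulation: the same NVMe command travels inside a "capsule" that the chosen fabric (RDMA, Fibre Channel, or, in the future, TCP) delivers to the remote subsystem, which is why NVMe-oF inherits the characteristics of its underlying transport. The sketch below is a simplified illustration of that idea; the struct names, the flexible-array layout, and the comments are assumptions for this write-up, not the exact wire format defined by the specification.

```c
/* Simplified, illustrative view of NVMe-oF encapsulation; not the exact wire
 * format. The key idea: the ordinary 64-byte NVMe submission queue entry is
 * carried unmodified inside a command capsule, which the fabric transport
 * delivers to the remote NVM subsystem. (Over fabrics, the data pointers
 * inside the SQE are SGLs rather than PRPs.) */
#include <stdint.h>

#define NVME_SQE_BYTES 64
#define NVME_CQE_BYTES 16

struct nvmeof_command_capsule {
    uint8_t sqe[NVME_SQE_BYTES];   /* the same 64-byte NVMe command used over PCIe      */
    uint8_t in_capsule_data[];     /* optional data; size is negotiated at connect time */
};

struct nvmeof_response_capsule {
    uint8_t cqe[NVME_CQE_BYTES];   /* the normal 16-byte NVMe completion entry          */
    uint8_t in_capsule_data[];     /* optional data for transports that allow it        */
};
```

Because both directions carry unmodified NVMe entries, no translation to another protocol (such as SCSI) is needed at either end, which is the efficiency argument the rest of the presentation builds on.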
But First… SCSI
• SCSI is the command set used in traditional storage
• It is the basis for most storage used in the data center
• It is the obvious starting point for working with flash storage
• These commands are transported via:
  • Fibre Channel
  • InfiniBand
  • iSCSI (duh!)
  • SAS, SATA
• It works great for data that can't be accessed in parallel (like disk drives that rotate)
• Any latency in protocol acknowledgement is far less than rotational head seek time

Evolution from Disk Drives to SSDs

The Flash Conundrum
• Flash:
  • Requires far fewer commands than SCSI provides
  • Does not rotate (no rotational latency, which exposes the latency of a one-command/one-queue system)
  • Thrives on random (non-linear) access, for both reads and writes
• Nearly all flash storage systems use SCSI for access
• But they don't have to!

So, NVMe…
• A specification for SSD access via PCI Express (PCIe), initially for flash media
• Designed to scale to any type of non-volatile memory, including Storage Class Memory
• Design target: high parallelism and low-latency SSD access
• Does not rely on SCSI (SAS/FC) or ATA (SATA) interfaces: new host drivers and I/O stacks
• Common interface for enterprise and client drives/systems: reuse and leverage engineering investments
• New, modern command set:
  • 64-byte commands (vs. the typical 16 bytes for SCSI; sketched in code below)
  • Administrative vs. I/O command separation (control path vs. data path)
  • Small set of commands: small, fast host and storage implementations
• Standards development by the NVMe working group
• NVMe is the de facto future solution for NAND and post-NAND SSDs from all SSD suppliers
• Full support for NVMe in all major operating systems (Linux, Windows, ESX, etc.)
• Learn more at nvmexpress.org

What is NVMe?

NVMe Operations

What's Special about NVMe?
The slide illustrates the progression from slower to faster with a warehouse analogy:
• HDD + SCSI/SAS: varying-speed conveyor belts carry data blocks (faster belts = lower seek time and latency), served by a tracked, single-arm pick-and-place robot executing one command at a time from one queue.
• Flash + SCSI/SAS: all data blocks are available at the same seek time and latency, but still served by a single-arm pick-and-place robot executing one command at a time from one queue.
• Flash + NVMe/PCIe: all data blocks are available at the same seek time and latency, served by a pick-and-place robot with thousands of arms, all processing and executing commands simultaneously with a high depth of commands.

Technical Basics
• 2 key components: the Host and the NVM Subsystem (a.k.a. the storage target)
• "Host" is best thought of as the CPU and NVMe I/O driver for our purposes
• The "NVM Subsystem" has a component, the NVMe Controller, which does the communication work
• Memory-based deep queues (up to 64K commands per queue, up to 64K queues); see the sketch below
• Streamlined and simple command set (13 commands; only 3 required)
• Command completion interface optimized for success (the common case)
• NVMe Controller: the SSD element that processes NVMe commands
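The 64-byte command format and the deep submission queues described above lend themselves to a short code illustration. The following C sketch is illustrative only: the field layout tracks the 64-byte submission queue entry in the NVMe base specification, but the toy ring, the simulated doorbell variable, and the example Read command are assumptions made for this sketch rather than anything from the presentation.

```c
/* Illustrative sketch of NVMe queueing; not a real driver.
 * The layout follows the 64-byte Submission Queue Entry (SQE) described in
 * the NVMe base specification; names and the simulated doorbell are assumptions. */
#include <stdint.h>
#include <stdio.h>

struct nvme_command {                 /* one 64-byte SQE */
    uint8_t  opcode;                  /* e.g. 0x02 = Read in the NVM command set */
    uint8_t  flags;                   /* fused operation / PRP vs. SGL selection */
    uint16_t command_id;              /* echoed back in the completion entry */
    uint32_t nsid;                    /* namespace identifier */
    uint32_t cdw2, cdw3;              /* reserved for most commands */
    uint64_t mptr;                    /* metadata pointer */
    uint64_t prp1, prp2;              /* data pointers (PRP entries) */
    uint32_t cdw10, cdw11, cdw12;     /* command-specific, e.g. starting LBA, length */
    uint32_t cdw13, cdw14, cdw15;
};
_Static_assert(sizeof(struct nvme_command) == 64, "SQE must be 64 bytes");

struct nvme_queue_pair {              /* one of up to ~64K I/O queue pairs */
    struct nvme_command *sq;          /* submission queue ring in host memory */
    uint16_t depth;                   /* entries per queue (spec allows up to 64K) */
    uint16_t sq_tail;                 /* next free slot the host will fill */
    volatile uint32_t *sq_doorbell;   /* normally a BAR-mapped controller register */
};

/* Host side of a submission: copy the command into the ring, advance the
 * tail, then write the new tail to the doorbell so the controller fetches it. */
static void nvme_submit(struct nvme_queue_pair *qp, const struct nvme_command *cmd)
{
    qp->sq[qp->sq_tail] = *cmd;
    qp->sq_tail = (uint16_t)((qp->sq_tail + 1) % qp->depth);
    *qp->sq_doorbell = qp->sq_tail;
}

int main(void)
{
    static struct nvme_command ring[16];   /* toy 16-entry submission queue */
    static uint32_t fake_doorbell;         /* stand-in for the real register */
    struct nvme_queue_pair qp = { ring, 16, 0, &fake_doorbell };

    struct nvme_command read_cmd = { 0 };
    read_cmd.opcode = 0x02;                /* NVM Read */
    read_cmd.command_id = 1;
    read_cmd.nsid = 1;
    read_cmd.cdw10 = 0;                    /* starting LBA, low 32 bits */
    read_cmd.cdw12 = 7;                    /* number of LBAs, 0-based (8 blocks) */

    nvme_submit(&qp, &read_cmd);
    printf("submitted command %u, SQ tail now %u, doorbell %u\n",
           (unsigned)read_cmd.command_id, (unsigned)qp.sq_tail, (unsigned)fake_doorbell);
    return 0;
}
```

Running the sketch simply prints the new tail value; in a real driver the doorbell write lands on a BAR-mapped controller register, and the controller later posts a 16-byte completion entry (carrying the same command_id) to the paired completion queue, which the host acknowledges through the completion queue's head doorbell.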