E8 – Serving Data Hungry Applications


E8 – Serving Data Hungry Applications
October 2018
Ziv Serlin – VP Architecture & Co-founder

Presenter
Ziv Serlin, Co-Founder & VP Architecture (IBM XIV, Primary Data, Intel)
At E8, Mr. Serlin was involved in architecting the E8 product for its first two years. More recently he has focused on extensive engagement with E8's end customers, exploring, helping, and defining their E8 deployment models with them; in addition, he leads the strategic engagement with E8's hardware partners. An expert in designing complex storage systems, Mr. Serlin has extensive experience as a system architect at Intel and was the hardware R&D manager of the IBM XIV product (a scale-out, high-end enterprise block storage system). He earned a BSc in Computer Engineering at the Technion and holds a storage patent from his work at IBM.

About E8 Storage
• Founded in November 2014 by storage industry veterans from IBM XIV
• Leading NVMe over Fabrics solution in the market
• In production with customers in the U.S. and Europe, now expanding to Asia
• Awarded 10 patents (granted) + 4 pending for the E8 architecture
• World-wide team: R&D in Tel Aviv; sales & marketing in the U.S., Europe, and Asia
• Flash Memory Summit 2016 & 2017 Most Innovative Product Award

Storage Growth is Data Growth
• 5%: business volume grows 5% a year
• 60%: the size of the business's database grows 60% a year
• 100%: the performance load on that database grows 100% a year
• Data hungry workloads: big data analytics, image recognition, real-time security, AI/ML, 4K video post-production
E8 Storage accelerates data hungry applications.

The Problem (Part 1): Why not use local SSDs in servers?
Local SSDs today achieve latency 10x better than all-flash arrays (local SSD: ~0.1ms; AFA: ~1ms), but:
• "The DevOps problem": things that work on laptops become 10x slower on the production infrastructure
• "The islands of storage problem": local SSDs in servers mean inefficient capacity utilization and no sharing of SSD data
• Local SSDs couple storage and compute
• Server purchasing requires an upfront investment in SSDs

The Problem (Part 2): Why not use SSDs in SAN/NAS?
• Not enough performance: traditional all-flash arrays (SAN/NAS) get 10%-20% of the potential performance of NVMe SSDs
• Classic "scale-up" bottleneck: in a dual-controller design, all I/O is gated by the controller CPU
• Switching the SSDs from SAS to NVMe cannot alleviate the controller bottleneck
First-generation architectures cannot unlock the full performance of NVMe.

The Solution: NVMe Over Fabrics
By attaching SSDs over the network, rather than using local SSDs in each server, the customer is able to:
• Reduce the total acquisition cost of SSDs by 60%
• Defer capacity purchases until needed: pay as you grow, at lower future prices
• Improve capacity utilization of SSDs to 90%: shared volumes instead of local replicas
• Easily add capacity only as needed
(Chart: 4-Year Cost of SSDs in a Data-Center; annual purchases of $30, $26, $20, and $12 as buying is deferred to lower future price points.)
Moving to shared NVMe storage delivers strong TCO for large-scale customer deployments.
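The 60% figure combines the two effects above: deferring purchases into years when flash is cheaper, and running a shared pool at higher utilization. Below is a minimal back-of-the-envelope sketch in Python; the price, price-decline, and demand numbers are purely illustrative assumptions, not E8 pricing data (only the ~60%-per-year capacity growth and 90% utilization figures come from the slides):

```python
# A minimal TCO sketch for the "pay as you grow" claim.
# All inputs are illustrative assumptions, not E8 pricing data.

PRICE_PER_TB_YEAR0 = 500.0        # assumed flash $/TB today (hypothetical)
ANNUAL_PRICE_DECLINE = 0.20       # assumed 20% cheaper each year (hypothetical)
DEMAND_TB = [100, 160, 256, 410]  # usable TB needed in years 0-3 (~60%/yr growth)

def upfront_local(utilization=0.50):
    """Local SSDs: buy peak capacity on day one, poorly utilized."""
    raw_tb = DEMAND_TB[-1] / utilization
    return raw_tb * PRICE_PER_TB_YEAR0

def pay_as_you_grow(utilization=0.90):
    """Shared NVMe-oF pool: buy only each year's increment at that year's price."""
    cost, owned = 0.0, 0.0
    for year, usable in enumerate(DEMAND_TB):
        raw_needed = usable / utilization
        increment = max(raw_needed - owned, 0.0)
        cost += increment * PRICE_PER_TB_YEAR0 * (1 - ANNUAL_PRICE_DECLINE) ** year
        owned += increment
    return cost

if __name__ == "__main__":
    a, b = upfront_local(), pay_as_you_grow()
    print(f"upfront local SSDs : ${a:,.0f}")
    print(f"pay as you grow    : ${b:,.0f}")
    print(f"savings            : {1 - b / a:.0%}")
```

With these assumptions the script reports a saving of roughly 60%, in line with the claim; the exact figure is sensitive to the assumed price-decline rate and utilization gap.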
E8 Storage Unlocks the Performance of NVMe
See the Demo!

                         IOPS (@4K read)   Read/Write Bandwidth (GB/s)   Read Latency (us, @4K)
  Single NVMe SSD        750K              2.4                           100
  AFA with 24 SSDs       300K              3.1                           1,000
  E8 with 24 NVMe SSDs   10M               40                            120

E8 Storage Product Overview

What is NVMe™? (Non-Volatile Memory Express)
A communication protocol designed specifically for flash storage:
• High performance, low latency
• Efficient protocol with lower stack overhead
• Exponentially more queues and commands than SAS; parallel processing for SSDs vs. serial for HDDs:

         I/O Queues   Commands per Queue
  SAS             1                  256
  NVMe       65,535               64,000

• Support for fabrics (NVMe-oF™)
• Originally designed for PCIe (internal to servers); NVMe-oF expands support to other transport media
• RDMA based: RoCE, iWARP, InfiniBand; non-RDMA: FC, TCP
• Maintains the NVMe protocol end to end

The E8 Storage Difference
A new architecture built specifically for high performance NVMe:
• Unleashes the parallelism of NVMe SSDs
• Direct drive access for near line-rate performance
• Separation of data and control paths, so there is no controller bottleneck; the E8 Agent offloads up to 90% of data path operations
• Simple, centralized management: intuitive management GUI for host/volume management; E8 Agents auto-discover assigned LUNs
• Scalable in multiple dimensions: up to 126 host agents per E8 controller; up to 8 controllers per host; 2PB under a single point of management

Designed for Availability and Reliability
No single point of failure anywhere in the architecture:
• Host agents on the host servers operate independently: failure of one agent (or more) does not affect other agents, and access to shared storage is not impacted
• RAID-6 / RAID-5 / RAID-10 data protection
• Network multi-pathing
• Enclosure high availability
  • Option 1: HA enclosure + dual-ported SSDs
  • Option 2: Cross-enclosure HA + single-ported SSDs

E8 Storage Customers and Use-Cases

Genomic Acceleration with E8 Storage
Shared NVMe as a fast tier for parallelizing genomic processing:
"We were keen to test E8 by trying to integrate it with our Univa Grid Engine cluster as a consumable resource of ultra-performance scratch space. Following some simple tuning and using a single EDR link we were able to achieve about 5GB/s from one node and 1.5M 4k IOPS from one node. Using the E8 API we were quickly able to write a simple Grid Engine prolog/epilog that allowed for a user-requestable scratch volume to be automatically created and destroyed by a job. The E8 box behaved flawlessly and the integration with InfiniBand was simpler than we could have possibly expected for such a new product."
– Dr. Robert Esnouf, Director of Research Computing, Oxford Big Data Institute + Wellcome Center for Human Genetics
From 10 hours per genome to 1 hour for 10 genomes!
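The prolog/epilog integration the quote describes can be pictured with a short sketch. This is a hedged illustration only: the REST endpoint, payload fields, and the E8_API_TOKEN variable are hypothetical stand-ins, not the documented E8 API; Grid Engine's JOB_ID environment variable and its prolog/epilog hook points are real.

```python
#!/usr/bin/env python3
"""Grid Engine prolog/epilog sketch: per-job scratch volume on shared NVMe.

The REST endpoints and payload below are hypothetical illustrations of a
storage management API; they are NOT the documented E8 interface.
"""
import os
import sys
import requests

API = "https://e8-controller.example.com/api/v1"   # hypothetical endpoint
AUTH = ("svc_gridengine", os.environ.get("E8_API_TOKEN", ""))

def volume_name() -> str:
    # Grid Engine exports JOB_ID to prolog/epilog scripts.
    return f"scratch-job-{os.environ['JOB_ID']}"

def create_volume(size_gb: int) -> None:
    # Create the volume and expose it to this execution host's agent.
    r = requests.post(f"{API}/volumes", auth=AUTH, json={
        "name": volume_name(),
        "size_gb": size_gb,
        "hosts": [os.uname().nodename],
    })
    r.raise_for_status()

def destroy_volume() -> None:
    # Tear the volume down when the job's epilog runs.
    r = requests.delete(f"{API}/volumes/{volume_name()}", auth=AUTH)
    r.raise_for_status()

if __name__ == "__main__":
    # Wired into SGE as "prolog create <GB>" and "epilog destroy".
    if sys.argv[1] == "create":
        create_volume(int(sys.argv[2]))
    else:
        destroy_volume()
```

The design point the quote highlights is that because volumes are created and destroyed through an API rather than by physically provisioning drives, scratch space becomes a schedulable, job-scoped resource.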
E8 for AI/ML with IBM GPFS and Nvidia
Shared NVMe accelerates deep learning:
• GPU farm: Nvidia DGX-1, up to 8 GPUs per node, up to 126 GPU nodes in a cluster, connected over Mellanox 100G IB
• A GPU cluster requires 0.5PB-1PB of shared fast storage, but GPU servers have no real estate for local SSDs
• The GPFS client + E8 Agent run on the x86 processors within the GPU server
• E8 Storage provides concurrent access for 1000 (!) GPUs per cluster
• Shared NVMe storage: E8-D24 2U24-HA, dual-port 2.5" NVMe drives, up to 184TB (raw) per 2U, patented distributed RAID-6
• 10x the performance of Pure Storage FlashBlade; 4x the performance of IBM ESS SSD appliances, for half the cost
(Chart: images per second per GPU node, ResNet-50 image recognition training, at 1, 10, and 100 GPU nodes, comparing Pure Storage FlashBlade, IBM GPFS + ESS, and E8 + GPFS.)

E8 for AI/ML with IBM GPFS and Nvidia
Shared NVMe accelerates AI/ML workloads:
• The GPU environment is perfect for E8
  • The x86 processors on GPU nodes are free: run the GPFS client + E8 agent on them
  • A GPU cluster is typically connected with 100G IB or RoCE
  • GPU nodes have no real estate for SSDs, so an external shared SSD solution is required
• Highly scalable, up to 1000 (!) GPUs in a single cluster
  • 1-8 GPUs per node, up to 126 nodes per cluster
  • Scale the number of NVMe drives and NVMe enclosures easily: 20TB - 2PB
• 10x the performance of Pure Storage FlashBlade
  • AI/ML data-sets are incompatible with de-dup and compression, which also ruin performance
• 4x the performance of IBM ESS SSD appliances, for half the cost
• Open hardware architecture
  • No NSDs: a single hop from GPFS clients to SSDs through E8's patented architecture
  • All network attached: no SAS cabling, no one-to-one mappings
(Charts: Cost ($/GBu) and epoch time (hours), comparing Pure Storage FlashBlade, IBM GPFS + ESS, Weka.IO (NVMe), and E8 + GPFS.)

E8 Storage Accelerates GPU Data Hungry Applications
• Better throughput: E8 Storage processed more images per second than local SSD
• Lower latency: training time was faster with E8 Storage than with local SSD

Using E8 with IBM Spectrum Scale
(Diagram: GPFS clients and NSD servers running the E8 Agent, connected over IB/RoCE to redundant E8 MDS controllers and an E8-D24 enclosure with dual-port SSDs.)
• Scalable to larger configurations
• Can mix connectivity depending on requirements: shared LUNs, or non-shared LUNs (direct-connect clients only)
• Usable as a standalone pool, for LROC, or for HAWC

RAID-6: Fastest Shared Block Storage in the World*
• 0.57ms: record-breaking response time!
  • 45% lower ORT (overall response time) for the same builds; 8x lower latency on average
• The power of Intel Optane: ultra-low latency for data intensive apps
• More performance, less hardware:

  E8 Storage      24 NVMe SSDs    2U
  WekaIO          64 NVMe SSDs    8U
  Huawei 6800 F   60 SAS SSDs     14U

* As of SPEC SFS®2014_swbuild results published August 2018. SPEC SFS 2014 is the industry standard benchmark for file storage performance. See all published results at https://www.spec.org/sfs2014/results/
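Both the availability slide and the benchmark result above rest on double-parity RAID-6, which tolerates two concurrent drive failures. As a general illustration of the underlying math, here is a minimal sketch of the classic Reed-Solomon P+Q scheme over GF(2^8); it shows the generic technique only and says nothing about E8's patented distributed implementation:

```python
"""Minimal RAID-6 P/Q parity sketch (classic Reed-Solomon over GF(2^8)).

A generic illustration of double-parity protection, not E8's
patented distributed RAID-6.
"""

def gf_mul(a: int, b: int) -> int:
    # Multiply in GF(2^8) with the polynomial x^8+x^4+x^3+x^2+1 (0x11D).
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
    return p

def pq_parity(stripe: list[bytes]) -> tuple[bytes, bytes]:
    """P = XOR of data blocks; Q = sum of g^i * D_i with generator g = 2."""
    n = len(stripe[0])
    p, q = bytearray(n), bytearray(n)
    for i, block in enumerate(stripe):
        g_i = 1
        for _ in range(i):           # g_i = 2^i in GF(2^8)
            g_i = gf_mul(g_i, 2)
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g_i, byte)
    return bytes(p), bytes(q)

def recover_one(stripe: list[bytes], missing: int, p: bytes) -> bytes:
    """Rebuild a single lost data block from P (single-failure case)."""
    out = bytearray(p)
    for i, block in enumerate(stripe):
        if i == missing:
            continue
        for j in range(len(out)):
            out[j] ^= block[j]
    return bytes(out)

if __name__ == "__main__":
    data = [bytes([d] * 4) for d in (0x11, 0x22, 0x33)]  # 3 toy data blocks
    p, q = pq_parity(data)
    assert recover_one(data, 1, p) == data[1]            # lose D1, rebuild from P
    print("P:", p.hex(), "Q:", q.hex())
```

Recovering from a double failure means solving the P and Q equations jointly; the sketch shows only the single-failure rebuild path.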