NexentaStor Hardware Reference Architectures


NexentaStor Hardware Reference Architectures
NexentaStor 5.x Hardware Certification List (HCL)
Nexenta Certification Team, Dec 2019

Table of Contents

1 Overview
  1.1 Introduction
  1.2 NexentaStor Solutions
    1.2.1 Reference Architectures (RA)
    1.2.2 Certified Solutions (CS)
  1.3 Common Pre-Requisites
2 Cisco Reference Architectures
  2.1 Cisco All-Flash Configurations
    2.1.1 Cisco C240 and HGST 2U24 All-Flash
  2.2 Cisco Hybrid / All-Disk Configurations
    2.2.1 Cisco C240 Standalone Hybrid
    2.2.2 Cisco C240 and Seagate Hybrid / All-Disk
    2.2.3 Cisco C240 and HGST 4U60G2 Hybrid / All-Disk
3 Dell 14G Reference Architectures
  3.1 Dell 14G All-Flash Configurations
    3.1.1 Dell R740 and R740xd All-Flash
  3.2 Dell 14G Hybrid Configurations
    3.2.1 Dell R740xd Hybrid
    3.2.2 Dell R740 Hybrid – MD14xx
    3.2.3 Dell R740 Hybrid – ME484
  3.3 Dell and HGST Storage Platform Configurations
    3.3.1 Dell R740 and HGST 2U24 All-Flash
    3.3.2 Dell R740 and HGST Ultrastar Data60 Hybrid / All-Disk
4 Dell 13G Reference Architectures
  4.1 Dell 13G All-Flash Configurations
    4.1.1 Dell R730 and R730xd All-Flash
  4.2 Dell 13G Hybrid Configurations
    4.2.1 Dell R730xd Hybrid
    4.2.2 Dell R730 Hybrid – MD14xx
    4.2.3 Dell R730 Hybrid – MD3060e and MD1280
  4.3 Dell 13G All-Disk Configurations
    4.3.1 Dell R730 All-Disk – MD3060e and MD1280
  4.4 Dell and HGST Storage Platform Configurations
    4.4.1 Dell R730 and HGST 2U24 All-Flash
    4.4.2 Dell R730 and HGST 4U60G2 Hybrid / All-Disk
5 HPE Gen10 Reference Architectures
  5.1 HPE Gen10 All-Flash Configurations
    5.1.1 HPE DL380 Gen10 and D3710 All-Flash
  5.2 HPE Gen10 Hybrid Configurations
    5.2.1 HPE DL380 Gen10 Hybrid – D3610
    5.2.2 HPE DL380 Gen10 Hybrid – D3710
    5.2.3 HPE DL380 Gen10 Hybrid – D6020
6 Lenovo Reference Architectures
  6.1 Lenovo ThinkSystem SR650 All-Flash – D1224
  6.2 Lenovo Hybrid Configurations
    6.2.1 Lenovo ThinkSystem SR650 Hybrid – D1224
    6.2.2 Lenovo ThinkSystem SR650 Hybrid – D1212
    6.2.3 Lenovo ThinkSystem SR650 Hybrid – D3284
  6.3 Lenovo ThinkSystem SR650 Single Node Appliances
    6.3.1 Lenovo ThinkSystem SR650 Single Node (No HA) – All-Flash
    6.3.2 Lenovo ThinkSystem SR650 Single Node (No HA) – Hybrid 2.5”
    6.3.3 Lenovo ThinkSystem SR650 Single Node (No HA) – Hybrid 3.5”
  6.4 Lenovo and HGST Storage Platform Configurations
    6.4.1 Lenovo ThinkSystem SR650 and HGST 2U24 All-Flash
    6.4.2 Lenovo ThinkSystem SR650 and HGST Ultrastar Data60 Hybrid / All-Disk
7 StorMax Reference Architectures
  7.1 StorMax All-Flash and Hybrid Configurations
8 Supermicro X11 Reference Architectures
  8.1 Supermicro X11 All-Flash Configurations
Recommended publications
  • Inventors, Technologists and Entrepreneurs
19 November 2013

We, the undersigned, are a group of inventors, technologists and entrepreneurs. Many of us have founded technology businesses; we have invented many of the protocols, systems and devices that make the Internet work, and we are collectively listed as the inventors on over 150 patents. We write to you today about the U.S. patent system. That system is broken.

Based on our experiences building and deploying new digital technologies, we believe that software patents are doing more harm than good. Perhaps it is time to reexamine the idea, dating from the 1980s, that government-issued monopolies on algorithms, protocols and data structures are the best way to promote the advancement of computer science. But that will be a complex task, and one we don't expect to happen quickly. Unfortunately, aspects of the problem have become so acute they must be addressed immediately.

Broad, vague patents covering software-type inventions—some of which we ourselves are listed as inventors on—are a malfunctioning component of America's inventive machinery. This is particularly the case when those patents end up in the hands of non-practicing patent trolls. These non-practicing entities do not make or sell anything. Their exploitation of patents as a tool for extortion is undermining America's technological progress; patent trolls are collecting taxes on innovation by extracting billions of dollars in dubious licensing fees, and wasting the time and management resources of creative businesses. Many of us would have achieved much less in our careers if the trolling problem had been as dire in past decades as it is now.
  • Education and Research Solutions Brief
Education and Research Solutions Brief
Data Integrity, Performance, Flexibility, and Lower Total Cost of Storage

Whether a university department or a large research lab, higher education and research institutions need to provide an IT infrastructure that will allow users to store, retrieve, and analyze data generated by their user groups, whether on campus, around the world, or from outer space. Enter: Nexenta. Nexenta software-only products solve accessibility, storage performance, and data management bottlenecks, and do so cost effectively. Nexenta provides a cost-optimized, hardware-independent storage platform to safely keep data and make it accessible from anywhere. It delivers superior performance running on industry standard x86 reference architectures from leading hardware vendors such as Dell, HP, Supermicro, Cisco, Quanta, and others.

Why Choose Nexenta?
• Total flexibility on hardware choice
• Lowest total cost for storage
• Highest levels of data integrity
• Improved data performance
Revolutionize your data storage. Choose Nexenta Software-Defined Storage. Request a proof-of-concept at

With Nexenta, research institutes can meet higher levels of:

Data Integrity: The large, complex, and sometimes sensitive data sets at higher education and research institutions require the utmost in data reliability. Using the proven ZFS file system, NexentaStor provides unparalleled levels of reliability against data corruption and bit rot, which has become a growing issue as the amount of data has exploded; ZFS copy-on-write delivers 100% data integrity.

Performance: User groups at these institutions put high demands on their data. High levels of memory caching in its ZFS technology allow NexentaStor to deliver superior I/O for the fastest access to content.
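The data-integrity claims above rest on ZFS end-to-end checksumming and copy-on-write. As a minimal illustration (not taken from the brief), the sketch below drives the standard zpool tooling from Python to create a mirrored, checksummed pool and scrub it; the pool name and device paths are hypothetical placeholders.

```python
# Minimal sketch: exercising the ZFS integrity features the brief describes.
# Pool name "edu0" and the device paths are placeholders, not values from the brief.
import subprocess

def run(cmd):
    """Run a zpool command and return its stdout."""
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout

# Create a mirrored pool; ZFS checksums every block it writes.
run(["zpool", "create", "edu0", "mirror", "/dev/sdb", "/dev/sdc"])

# A scrub re-reads every block and verifies it against its checksum,
# repairing from the mirror copy if bit rot is detected.
run(["zpool", "scrub", "edu0"])

# Inspect the result; "errors: No known data errors" indicates a clean pool.
print(run(["zpool", "status", "edu0"]))
```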
  • NexentaStor 5.x Reference Architecture – Supermicro
NexentaStor 5.x Reference Architecture – Supermicro
May 2019
Copyright © 2019 Nexenta Systems, ALL RIGHTS RESERVED (www.nexenta.com)

Table of Contents
Preface
  Intended Audience
  Comments
  Copyright, Trademarks, and Compliance
1 Supermicro X11 Reference Architectures
  1.1 Supermicro X11 All-Flash Configurations
    1.1.1 Supermicro X11 All-Flash – 24 Bay SC216
  1.2 Supermicro X11 Hybrid Configurations
  • NexentaStor Installation Guide 4.0.4 FP1
NexentaStor Installation Guide 4.0.4 FP1
Date: June 2016
Subject: NexentaStor Installation Guide
Software: NexentaStor
Software Version: 4.0.4 FP1
Part Number: 2000-nxs-4.0.4 FP1-0000012-B
Copyright © 2016 Nexenta Systems, ALL RIGHTS RESERVED (www.nexenta.com)

Notice: No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose, without the express written permission of Nexenta Systems (hereinafter referred to as "Nexenta"). Nexenta reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. Nexenta products and services can only be ordered under the terms and conditions of Nexenta Systems' applicable agreements. Not all of the features described in this document may be currently available; refer to the latest product announcement or contact your local Nexenta Systems sales office for information on feature and product availability. This document includes the latest information available at the time of publication. Nexenta, NexentaStor, NexentaEdge, and NexentaConnect are registered trademarks of Nexenta Systems in the United States and other countries. All other trademarks, service marks, and company names in this document are properties of their respective owners.

Product versions applicable to this documentation: NexentaStor 4.0.4 FP1

Contents: Preface · 1 Introduction · About NexentaStor · NexentaStor Advantages
  • Meet Nexenta
Meet Nexenta

Who we are: Nexenta is the Global Leader in Open Source-driven Software-Defined Storage. We lead the market in combining our Open Source-driven software with the most innovative industry standard hardware to provide you the ability to maximize the potential of the storage in your data center. Today, Nexenta Software-Defined Storage manages over 2.0 Exabytes of storage.

What we do: Nexenta provides you with the functionality, flexibility and performance that you want at a significantly lower TCO than hardware appliance vendors, delivering greater value that satisfies both your current and future data center needs. By providing a software-only storage solution, including a rich feature set across your complete protocol needs (block, file, and object), Nexenta enables you to deliver a Software-Defined infrastructure to your legacy and next generation applications, while giving you the freedom to choose the platform you want to run on.

"Our decision to go with Nexenta will save the university around $9 million over a five year period. 'Tier 1' storage vendors would have charged 3 to 3.5 times more than the amount negotiated with Nexenta and yet the functionality was basically the same."
Patrick J.A. Hopewell, Director, Enterprise Infrastructure Solutions, University of Toronto

What makes us different: Our vision. It's focused on promoting and delivering the true benefits of Software-Defined Everything. We use our vision to promote and drive open partnerships with our customers and partners, including server hardware, networking, SSDs, HDDs, cloud platforms, application infrastructure, business analytics, and everything else considered part of an IT infrastructure.
  • Introducing a New Product
illumos SVOSUG Update
Presented by Garrett D'Amore, Nexenta Systems, Inc.
August 26, 2010

What's In A Name? illumos = illum + OS = "Light + OS": light as in coming from the Sun, OS as in Operating System. Note: illumos, not Illumos or IllumOS. The "illumos" trademark application is in review; visual branding is still under consideration.

Not All of OpenSolaris is Open Source
• Critical components are closed source: libc_i18n (needed for a working C library), the NFS lock manager, portions of the crypto framework, and numerous critical drivers (e.g. mpt).
• This presents challenges to downstream dependents: Nexenta, Belenix, SchilliX, etc. See "Darwin" and "MacOS X" for the worst case.

What's Good
• The technology: ZFS, DTrace, Crossbow, Zones, etc.
• The people: world-class engineers, a great community of enthusiasts, a vibrant ecosystem.
• The code is open (well, most of it, at least).

illumos – the Project
• A derivative (child) of OS/Net (aka ON), the Solaris/OpenSolaris kernel and foundation; 100% ABI compatible with Solaris ON. Now a real fork of ON, but will merge when code becomes available from Oracle.
• No closed code: open source libc, kernel, and drivers.
• A repository for other "experimental" innovations; it can accept changes from contributors that might not be acceptable upstream.

illumos – the Ecosystem
• illumos-gate is just ON, focused on "Core Foundation Blocks"; it is the flagship project.
• Expanding to host other affiliated projects as an umbrella organization: X11 components? Desktop components? C++ runtime? Distributions?

illumos – the Community
• Stands independently
  • A Technical Summary
A Technical Summary

Table A.1. Kerberos administrative ACL (MIT); a lowercase letter allows the operation, the corresponding uppercase letter denies it:
• a / A: add principals or policies
• d / D: delete principals or policies
• m / M: modify principals or policies
• c / C: change passwords
• i / I: inquire the database
• l / L: list principals or policies
• s / S: set key for principals
• * or x: allow all privileges

Table A.2. Kerberos kadmin password policies (MIT):
• -history: minimum count of unusable old passwords
• -maxlife: maximum allowed lifetime
• -minclasses: minimum character classes
• -minlength: minimum password length
• -minlife: minimum lifetime

Table A.3. LDAP access control list levels (level / privilege coding / explanation):
• none (=0): no access at all
• auth (=x): permits an authentication attempt
• compare (=cx): permits comparison
• search (=scx): permits search filter application
• read (=rscx): permits search result inspection
• write (=wrscx): permits modification or deletion

Table A.4. OpenAFS access control list attributes:
• l: list contents (lookup)
• i: create new files or directories (insert)
• d: delete files or directories
• a: change ACL attributes of directories (administer)
• r: read contents of files
• w: modify contents of files (write)
• k: lock files for reading
• read: equivalent to rl
• write: equivalent to rlidwk (no administrative rights)
• all: equivalent to rlidwka
• none: remove all ACL permissions

Table A.5. OpenAFS @sys names (excerpt):
• alpha_dux40: Digital UNIX 4 on an Alpha
• alpha_dux50: Digital UNIX 5 on an Alpha
• i386_linux24: Linux kernel 2.4 on Intel and compatible
• i386_linux26: Linux kernel 2.6 on Intel and compatible
• i386_nt40: Microsoft Windows NT and later on Intel and compatible
• ppc_darwin70: Apple MacOS X 10.3 on a PowerPC Macintosh
• ppc_darwin80: Apple MacOS X 10.4 on a PowerPC Macintosh
• ppc_darwin90: Apple MacOS X 10.5 on a PowerPC Macintosh
• rs_aix52: IBM AIX 5.2 on a pSeries
• rs_aix53: IBM AIX 5.3 on a pSeries
• sgi_65: SGI Irix 6.5 on a MIPS
• x86_darwin80: Apple MacOS X 10.4 on an Intel Macintosh
• x86_darwin90: Apple MacOS X 10.5 on an Intel Macintosh
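To make the letter codes in Table A.1 concrete, here is a small illustrative Python helper (not part of the summary, and not MIT's actual parser) that expands a kadm5.acl-style permission string into the operations it grants and denies.

```python
# Illustrative sketch only: decoding an MIT kadm5 permission string using the
# letter codes from Table A.1 (lowercase grants, uppercase denies, "*"/"x" = all).
OPS = {
    "a": "add principals or policies",
    "d": "delete principals or policies",
    "m": "modify principals or policies",
    "c": "change passwords",
    "i": "inquire the database",
    "l": "list principals or policies",
    "s": "set key for principals",
}

def decode(perms: str):
    """Return (allowed, denied) operation sets for a kadm5.acl permission field."""
    allowed, denied = set(), set()
    for ch in perms:
        if ch in ("*", "x"):
            allowed |= set(OPS.values())        # all privileges
        elif ch.lower() in OPS:
            (allowed if ch.islower() else denied).add(OPS[ch.lower()])
    return allowed, denied

# Hypothetical example: a principal granted add/inquire/list but explicitly denied delete.
print(decode("ailD"))
```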
  • NexentaEdge Datasheet
NexentaEdge Datasheet
Multi-Service Scale-Out Storage Software

NexentaEdge provides scale-out Block, File and Object (S3 & Swift) storage services, delivering cluster-wide inline deduplication and compression at the chunk level for enterprise-grade data integrity. Quick Erasure Coding and a shared namespace between NFS and object data provide best-in-class performance with a low-touch operating model.

Next Generation Scale-Out Architecture: NexentaEdge is a truly distributed, scale-out architecture consisting of four or more physical servers interconnected using a dedicated 10 Gigabit Ethernet (10 GbE) network for cluster communication. The connected servers form a cluster that maintains redundancy and resilience of data throughout the system, using strong cryptographic checksums for data integrity and replication technology to ensure hardware-level redundancy. NexentaEdge runs on shared-nothing clusters of industry standard Linux servers; see the NexentaEdge Configuration Guidelines for more information. [Figure: standard servers running NexentaEdge as an all-flash, hybrid or all-disk cluster, delivering multi-service scale-out Block, File & Object storage.]

Supporting Platforms for Converged Infrastructures: NexentaEdge is ideally suited to support use cases from Enterprise and Next Generation Clouds, Object Applications, and large Object-Based Archives. NexentaEdge clusters can be deployed as all-flash, hybrid, or all-disk configurations, serving iSCSI, native Block, NFS, S3 and Swift, with high performance, scalability, inline deduplication and compression, Quick EC, ease of operation, and data integrity.

NexentaEdge System Requirements & Feature Highlights: NexentaEdge software provides flexibility in hardware configuration and brings unparalleled simplicity to your daily user experience regardless of your deployment size.
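As a rough illustration of what chunk-level inline deduplication with cryptographic checksums buys, the sketch below stores each unique chunk once under its SHA-256 digest and verifies chunks on read. The fixed chunk size and hash choice are assumptions made for the example; this is not a description of NexentaEdge's actual on-disk format or placement logic.

```python
# Conceptual sketch of chunk-level inline deduplication with cryptographic checksums.
import hashlib

CHUNK_SIZE = 128 * 1024  # assumed fixed chunk size for the example

class ChunkStore:
    def __init__(self):
        self.chunks = {}  # checksum -> chunk payload, stored once

    def put(self, data: bytes):
        """Split data into chunks, store each unique chunk once, return the object's recipe."""
        recipe = []
        for off in range(0, len(data), CHUNK_SIZE):
            chunk = data[off:off + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # duplicate chunks are not stored again
            recipe.append(digest)
        return recipe

    def get(self, recipe):
        """Reassemble an object, verifying each chunk against its checksum."""
        out = bytearray()
        for digest in recipe:
            chunk = self.chunks[digest]
            assert hashlib.sha256(chunk).hexdigest() == digest, "corrupt chunk detected"
            out += chunk
        return bytes(out)

store = ChunkStore()
vm_image = b"\x00" * (4 * CHUNK_SIZE)          # highly redundant data dedupes well
recipe = store.put(vm_image)
print(len(recipe), "chunk references,", len(store.chunks), "unique chunk stored")
assert store.get(recipe) == vm_image
```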
  • Nexenta, OpenStorage and Commercial Open Source
Nexenta, OpenStorage and Commercial Open Source
Anil Gulecha, Developer / Community Lead, Nexenta Core Platform

What we'll cover: Nexenta Core Platform; OpenSolaris; Ubuntu; Nexenta's difference; distribution creation; derivatives; Nexenta.org; commercial open source; the OpenStorage industry; NexentaStor (www.nexentastor.org).

Nexenta Core Platform: Nexenta is a combination of two great technologies: OpenSolaris, the enterprise-class FOSS kernel, and the Ubuntu userland, the most popular *nix userland and packaging framework. Started in 2005 as a desktop distribution based on Ubuntu, the project proved hard to support with a small dev team. Starting from early 2008, the project concentrated on a command-line-only core platform, which became known as the Nexenta Core Platform (NCP). The focus was to create one of the best server platforms out there.

NCP, the OpenSolaris side: Nexenta is based on the OpenSolaris kernel. It thus has goodies like the ZFS filesystem, which has revolutionized the open source file systems arena; DTrace, which provides administrators and developers with the ability to analyze the system in a fine-grained manner; and Zones, the lightweight virtualization which allows for OpenSolaris (and Linux) virtualization on Nexenta. Nexenta maintains patches of its own improvements to the OpenSolaris kernel, including patches providing iSCSI on ZFS volumes and CIFS performance improvements. To fit into the Ubuntu/Debian model, the OpenSolaris kernel is packaged as .deb packages, for example sunwckr (the core kernel runtime package) and sunwzfs (the ZFS package); a query sketch follows below.

NCP, the Ubuntu side: All userland packages, which include shell commands, server packages, tools and utilities, are ports of upstream Ubuntu packages. Ubuntu has a lot of packages.
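A small sketch, assuming a Debian-style package database as the talk describes: it queries the installed versions of the kernel packages named in the presentation (sunwckr, sunwzfs) with the standard dpkg-query tool. The script itself is illustrative, not part of NCP.

```python
# Hedged sketch: check whether the NCP kernel .deb packages named in the talk
# are installed, using dpkg-query from the Debian packaging tools.
import subprocess

def package_version(name: str):
    """Return the installed version of a package, or None if it is not installed."""
    result = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Version}", name],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or None

for pkg in ("sunwckr", "sunwzfs"):
    print(pkg, "->", package_version(pkg) or "not installed")
```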
  • NexentaStor 4.0.4 User Guide
NexentaStor User Guide 4.0.4
Date: June 2015
Subject: NexentaStor User Guide
Software: NexentaStor
Software Version: 4.0.4
Part Number: 3000-nxs-4.0.4 000057-B
Copyright © 2015 Nexenta Systems, ALL RIGHTS RESERVED (www.nexenta.com)

Notice: No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose, without the express written permission of Nexenta Systems (hereinafter referred to as "Nexenta"). Nexenta reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. Nexenta products and services can only be ordered under the terms and conditions of Nexenta Systems' applicable agreements. Not all of the features described in this document may be currently available; refer to the latest product announcement or contact your local Nexenta Systems sales office for information on feature and product availability. This document includes the latest information available at the time of publication. Nexenta, NexentaStor, NexentaEdge, and NexentaConnect are registered trademarks of Nexenta Systems in the United States and other countries. All other trademarks, service marks, and company names in this document are properties of their respective owners.

Product versions applicable to this documentation: NexentaStor 4.0.4

Contents: Preface · 1 Introduction · About NexentaStor · About NexentaStor Components · Using Plugins
  • NexentaEdge Datasheet
NexentaEdge Datasheet
Next Generation scale-out storage for your Next Generation Cloud Applications

NexentaEdge is Software-Defined Storage for multi-petabyte scale repositories, delivering high performance Block and Object services and truly disruptive storage economics thanks to cluster-wide inline deduplication. The relentless growth of unstructured data creates a need for solutions specifically designed to scale and run on low-cost shared-nothing clusters of industry standard servers, delivering highly available, highly durable and extremely cost effective storage. NexentaEdge is ideally suited to meet these requirements and support the following use cases:

• A single NexentaEdge cluster delivers the Cinder Block, Swift and S3 Object storage services required by OpenStack clouds; a Horizon management plug-in simplifies storage management and capacity planning.
• NexentaEdge delivers the high performance iSCSI Block services required by CloudStack and Citrix CloudPlatform infrastructures; cluster-wide inline deduplication minimizes the amount of capacity used to store virtual machine images and application data.
• Chunk-level cryptographic checksums, the S3 object API and the unmatched storage efficiencies provided by cluster-wide inline deduplication make NexentaEdge an ideal platform for petabyte-scale backup and archive repositories.

Key attributes: Software Defined · Petabyte Scale · Inline Deduplication · Auto Optimized · Built for Cloud.

NexentaEdge Features Overview, Next Generation Design: scale-out, shared-nothing architecture
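For a sense of how an application might consume the S3 object API mentioned above, here is a hedged boto3 sketch pointed at an S3-compatible endpoint, the way a backup tool might push archives. The endpoint URL, bucket name, object key and credentials are placeholders, not values taken from the datasheet.

```python
# Hedged sketch: writing and reading a backup object through an S3-compatible API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://edge-cluster.example.com:9982",  # placeholder S3 gateway address
    aws_access_key_id="ACCESS_KEY",                       # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="archive")

# Upload one backup object and read it back to confirm the round trip.
s3.put_object(Bucket="archive", Key="backups/db-2019-12-01.tar.gz", Body=b"...payload...")
obj = s3.get_object(Bucket="archive", Key="backups/db-2019-12-01.tar.gz")
print(len(obj["Body"].read()), "bytes retrieved")
```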
  • NexentaStor Release Notes 4.0.4
NexentaStor Release Notes 4.0.4
Date: July 2015
Subject: NexentaStor Release Notes
Software: NexentaStor
Software Version: 4.0.4
Part Number: 7000-nxs-4.0.4-000017-A
Copyright © 2015 Nexenta Systems, ALL RIGHTS RESERVED (www.nexenta.com)

Contents
What is New in this Release?
  VAAI Block Changes
    Warning about upgrading without preparation
    Planning for upgrade
    Identifying VMFS datastore extents and checking ATS status
    Disabling ATS on Existing Datastores
    Enabling ATS on Existing Datastores
    Enabling ATS on NexentaStor
    Reverting to pre-FP3 checkpoints
  IPMI Support