Breakthroughs in NVM Storage

Andrey Kudryavtsev, SSD Solution Architect, NVM Solutions Group (NSG), Intel Corporation

Notices and Disclaimers

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration.

No computer system can be absolutely secure.

Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. For more complete information about performance and benchmark results, visit http://www.intel.com/benchmarks.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/benchmarks.

Intel® Advanced Vector Extensions (Intel® AVX)* provides higher throughput to certain processor operations. Due to varying processor power characteristics, utilizing AVX instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with Intel® Turbo Boost Technology 2.0 to not achieve any or maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration and you can learn more at http://www.intel.com/go/turbo.

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.

Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

© 2018 Intel Corporation.

Changing Data Needs Have Exposed Storage & Memory Gaps

Intel® Optane™ Technology + Intel® QLC Technology Fill the Gaps

[Figure: memory and storage hierarchy, top to bottom – products not shown to actual scale.]
• Memory: DRAM, extended by Intel® Optane™ DC persistent memory – enable new insights with bigger, more affordable memory
• Working storage: Intel® Optane™ SSD DC P4800X – break through bottlenecks to increase the value of storage data
• Capacity storage: Intel® QLC 3D NAND SSD – cost-optimized SSDs enable storage consolidation and acceleration
• HDD

HPC. Recognize the BEST Fit for SSDs.

[Figure: HPC cluster – compute nodes, IO nodes, memory nodes, and parallel storage with metadata and tiered-storage layers above an HDD tier.]

1 Local compute storage – use Optane SSDs for certain workloads requiring large scratch/temp storage.
2 IO nodes – deploy Optane SSDs to accelerate data transfer to/from compute nodes, and/or as a burst buffer for usages such as memory snapshot across multiple compute nodes (alongside P4510/P4610 SSDs).
3 Metadata – accelerate metadata and access to frequently accessed small files with Optane SSDs.
4 Storage – reduce storage TCO with the space- and power-efficient, highly manageable "Ruler" form factor (P4510 16TB in U.2 or "Ruler" EDSFF). An opportunity for NAND-based NVMe SSDs.
5 Memory nodes – use Optane DC Persistent Memory, or an Optane SSD with Intel Memory Drive Technology, to deploy fat memory nodes.

Intel® Optane™ SSD DC P4800X. The Ideal Caching Solution.

Lower & more consistent latency + higher endurance = more efficient.

• Average read latency under random write workload¹ – Intel® Optane™ SSD DC P4800X vs. Intel® SSD DC P4600 (3D NAND).
• Drive Writes Per Day (DWPD)² – up to 60.0 DWPD for the Intel® Optane™ SSD DC P4800X vs. 3.0 DWPD for the Intel® SSD DC P4600 (3D NAND).
• Cache as a % of storage capacity³ – illustrative comparison of the P4800X as cache vs. the P4600 as cache in front of storage.

Low latency + high endurance = greater SDS system efficiency.

1. Source – Intel-tested: Average read latency measured at queue depth 1 during a 4K random write workload, using fio-2.15. Common configuration – Intel 2U server system, CentOS 7.5, kernel 4.17.6-1.el7.x86_64, 2x Intel® Xeon® Gold 6154 @ 3.0GHz (18 cores), 256GB DDR4 @ 2666MHz. Drives – Intel® Optane™ SSD DC P4800X 375GB and Intel® SSD DC P4600 1.6TB. System BIOS: 00.01.0013; ME firmware: 04.00.04.294; BMC firmware: 1.43.91f76955; FRUSDR: 1.43. The benchmark results may need to be revised as additional testing is conducted. Performance results are based on testing as of July 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
2. Source – Intel: Endurance ratings available at https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/optane-dc-p4800x-series/p4800x-750gb-2-5-inch.html
3. Source – Intel: General proportions shown for illustrative purposes.
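Footnote 1 describes the methodology precisely: one outstanding read (queue depth 1) is timed while a 4K random-write load runs in the background. For readers who want to reproduce the shape of that test without fio, here is a minimal, hedged Java sketch. The deck itself used fio-2.15; the file path, the 1 GiB test region, and the sample count below are assumptions, and `ExtendedOpenOption.DIRECT` requires JDK 10+:

```java
import com.sun.nio.file.ExtendedOpenOption;

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ThreadLocalRandom;

public class Qd1ReadLatencyUnderWrites {
    static final int BLK = 4096;        // assumption: 4 KiB logical blocks
    static final long SPAN = 1L << 30;  // assumption: exercise a 1 GiB region
    static final int SAMPLES = 100_000; // assumption: sample count

    public static void main(String[] args) throws Exception {
        // WARNING: the target is overwritten; use a scratch file
        // preallocated to at least SPAN bytes on the SSD under test.
        FileChannel ch = FileChannel.open(Paths.get(args[0]),
                StandardOpenOption.READ, StandardOpenOption.WRITE,
                ExtendedOpenOption.DIRECT);   // bypass the page cache

        // Background load: one thread issuing 4 KiB random writes.
        Thread writer = new Thread(() -> {
            ByteBuffer wb = ByteBuffer.allocateDirect(BLK * 2).alignedSlice(BLK);
            wb.limit(BLK);
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    wb.rewind();
                    ch.write(wb, randomOffset());
                }
            } catch (Exception ignored) { /* shutting down */ }
        });
        writer.setDaemon(true);
        writer.start();

        // Foreground: QD1 reads – one outstanding I/O, timed per request.
        ByteBuffer rb = ByteBuffer.allocateDirect(BLK * 2).alignedSlice(BLK);
        rb.limit(BLK);
        long totalNanos = 0;
        for (int i = 0; i < SAMPLES; i++) {
            rb.rewind();
            long t0 = System.nanoTime();
            ch.read(rb, randomOffset());
            totalNanos += System.nanoTime() - t0;
        }
        writer.interrupt();
        System.out.printf("average QD1 read latency: %.1f us%n",
                totalNanos / (double) SAMPLES / 1_000.0);
    }

    static long randomOffset() {
        return ThreadLocalRandom.current().nextLong(SPAN / BLK) * BLK;
    }
}
```

fio remains the right tool for publishable numbers; this sketch only illustrates why QD1 read latency under a write load is the metric that stresses a cache device.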

Intel® Storage Performance Snapshot Tool

Overview
• A lightweight tool for collecting and analyzing system-level performance information, with a special focus on storage.

Data Collector
• Very easy to use: zero configuration, non-intrusive, low overhead.
• Based on the Linux* dstat utility; requires dstat to operate.
• Outputs a standard CSV file (a post-processing sketch follows below).

User Interface (UI)
• HTML-based UI for viewing and analyzing the collected data.
• Runs in any modern browser (Chrome, Firefox, IE, Safari).
• Does not require a network connection; data is never uploaded and always stays on the local computer.

*Other names and brands may be claimed as the property of others.
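Because the collector writes a standard CSV file, post-processing outside the bundled HTML UI is straightforward. The sketch below averages one column of such a file; the column name `disk_read_bytes` is hypothetical, for illustration only, since the tool's exact schema is not shown here:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class SnapshotCsvSummary {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        // "disk_read_bytes" is a hypothetical column name; substitute a
        // column that actually appears in your collector's CSV output.
        int col = header.indexOf("disk_read_bytes");
        double sum = 0;
        int n = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] fields = line.split(",");
            if (col >= 0 && col < fields.length && !fields[col].isEmpty()) {
                sum += Double.parseDouble(fields[col]);
                n++;
            }
        }
        System.out.printf("samples: %d, average %s: %.1f%n",
                n, col >= 0 ? header.get(col) : "?", n > 0 ? sum / n : 0.0);
    }
}
```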

AI/Analytics: Java 10 Direct I/O* Optimizations vs. Buffered I/O

Single-threaded random read with one outstanding I/O on Intel® Optane™ SSD DC P4800X (EXT4; higher is better). Percentage performance improvement with Direct I/O* vs. buffered I/O (baseline), by block size:

Block size:    4K     8K      16K     32K     64K     128K    256K    512K    1024K
Improvement:   4.45%  13.52%  21.05%  25.67%  25.93%  28.65%  31.50%  39.84%  48.51%

Direct I/O* optimizations provide up to 48% greater efficiency¹.

1 Source – Intel: System configuration: S2600WFT Intel white box, 2x Intel® Xeon® Gold 6154 CPU @ 3.00GHz with 36 vcores, 64GB DDR4 2666 MHz (4x 16GB), 1x NVMe* PCIe* Intel® Optane™ SSD DC P4800X 750GB (firmware E2010324), 1x NVMe* PCIe* Intel® SSD DC P4500 4TB (firmware QDV10150), BIOS SE5C620.86B.00.01.0013.030920180427, CentOS* 7.4 distribution with 4.15.7 kernel. See OpenJDK* info at http://openjdk.java.net/projects/jdk/10/. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Performance results are based on testing as of July 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. *Other names and brands may be claimed as the property of others.
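The Java 10 feature behind these numbers is `com.sun.nio.file.ExtendedOpenOption.DIRECT`, which opens a file with O_DIRECT semantics so reads bypass the page cache. A minimal sketch of a single-threaded read with one outstanding I/O, matching the workload described above (the block size and file path are placeholders; the file should be at least one block long):

```java
import com.sun.nio.file.ExtendedOpenOption;

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DirectReadDemo {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get(args[0]);   // a file on the SSD under test
        int blockSize = 4096;             // assumption: 4 KiB logical blocks

        // DIRECT (added in JDK 10) maps to O_DIRECT and bypasses the page cache.
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.READ, ExtendedOpenOption.DIRECT)) {
            // O_DIRECT requires the buffer address, file offset and transfer
            // size to be block-aligned; alignedSlice() aligns the address.
            ByteBuffer buf = ByteBuffer.allocateDirect(blockSize * 2)
                                       .alignedSlice(blockSize);
            buf.limit(blockSize);         // one block per read (QD1)
            int n = ch.read(buf, 0);
            System.out.println("read " + n + " bytes via Direct I/O");
        }
    }
}
```

The alignment handling is why Direct I/O helps most at larger block sizes: with the page cache out of the way, the copy overhead the kernel would otherwise spend per buffered request disappears, which is consistent with the improvement curve above.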

Introducing Intel® Memory Drive Technology

• Use the Intel® Optane™ SSD DC P4800X transparently as memory
• Grow beyond system DRAM capacity, or replace high-capacity DIMMs with a lower-cost alternative at similar performance
• Leverage storage-class memory today!
• No change to the software stack: unmodified Linux* OS, applications, and programming
• No change to hardware: runs bare-metal, loaded before the OS from BIOS or UEFI
• Aggregated single volatile memory pool

*Other names and brands may be claimed as the property of others

Intel® Memory Drive Technology Delivers Big, Affordable Memory

Use case 1: EXPAND beyond limited DRAM capacity with Intel® Memory Drive Technology – expand insights with massive data pools.

Use case 2: DISPLACE DRAM with affordable SSDs via Intel® Memory Drive Technology – reduce high-capacity DRAM capital expenditures.

Note: Intel® Memory Drive Technology supports Linux* x86_64 (64-bit), kernels 2.6.32 or newer. *Other names and brands may be claimed as the property of others

New Data Solutions. Supporting Data Center Design Flexibility.

[Figure: comparison of DRAM, Intel® Optane™ DC persistent memory module, and Intel® Optane™ SSD with software across capacity, latency/bandwidth, and power – relative values shown graphically.]

Persistency: DRAM – N; Intel® Optane™ DC persistent memory – Y; Intel® Optane™ SSD with software – N.

Graphical representation of product comparison is based on internal Intel analysis, and is provided here for informational purposes only. Any differences in system hardware, software or configuration may affect actual performance.

Intel® Optane™ SSD + IMDT for Massive Memory Expansion

Modern usages benefit from massive memory – image search, speech recognition, natural language processing, and more – and can efficiently scale to larger working sets at a lower $/GB.

SGEMM compute efficiency, as % of theoretical peak GFlops (higher is better): All-DRAM (768GB DRAM) – 83.1%; IMDT (768GB DRAM, 2.7TB total) – 79.3%, i.e. 95% of the all-DRAM result. Use it as a "setup validator". For complete source code and instructions: https://github.com/ScaleMP/SEG_SGEMM

Expand to a 4x larger deep-learning problem size (matrix multiplication) with near-DRAM performance and much lower cost.

Configuration: 2x Intel® Xeon® Gold 6140 @ 2.3 GHz – 72 CPUs; theoretical peak of 5.3 TFlop/s single-precision for the dual-socket setup.

1 Source – ScaleMP* tested: Segmented GEMM* workload, details at https://github.com/ScaleMP/SEG_SGEMM. Estimated results were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown". Implementation of these updates may make these results inapplicable to your device or system. *Other names and brands may be claimed as the property of others.
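As a sanity check on the quoted peak, and assuming the Gold 6140 sustains two AVX-512 FMA operations per core per cycle at its 2.3 GHz base clock (an assumption on my part; AVX-512 clocks are typically lower in practice):

$$ 36~\text{cores} \times 2.3~\text{GHz} \times 2~\tfrac{\text{FMA}}{\text{cycle}} \times 16~\text{SP lanes} \times 2~\tfrac{\text{flops}}{\text{FMA}} \approx 5.3~\text{TFlop/s} $$

The 95% figure likewise follows from the measured efficiencies: $79.3 / 83.1 \approx 0.95$.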

Apache Spark* 2.2.0 + Intel® Memory Drive Technology

Spark* K-Means* Benchmark

Workload: execute advanced analytics on very large data sets using Apache Spark* 2.2.0 and Intel® Memory Drive Technology.

Additional info:
• This workload benchmarks the K-Means* clustering algorithm implemented in Spark-MLLib*. The data source is generated by GenKMeansDataset, based on uniform and Gaussian distributions.
• K-Means is one of the oldest and most commonly used clustering algorithms; it partitions data points into clusters such that points in the same cluster are more similar to each other than to points in other clusters. A minimal Spark sketch follows below.
• A larger Intel® Memory Drive Technology + Intel® Optane™ SSD memory pool enables running larger datasets and/or a smaller data center footprint by enabling data-node reduction.

Cluster (workers) configuration and cost comparison (server cost, Optane/Intel® Memory Drive Technology cost, and runtime in minutes):
• 3x Intel® Xeon® server – baseline.
• 2x Intel® Xeon® server + Intel® Memory Drive Technology – reduces cost by 20% and runtime by 1.4x¹.
• 3x Intel® Xeon® server + Intel® Memory Drive Technology – raises cost by 20% and reduces runtime by 3.5x².

Intel® Memory Drive Technology accelerates Spark* K-Means* performance.

1 Source – Intel. System configuration for management node: S2600WFT Intel white box, 2 sockets, Intel® Xeon® Gold 6140 CPU @ 2.30GHz, 18 cores per socket / 2 threads per core (72 vcores total), 192GB DDR4, CentOS 7.4* distribution with 4.14.16 kernel, HortonWorks* Data Platform 2.6.4, Spark 2.2.0*. Estimated results were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown". Implementation of these updates may make these results inapplicable to your device or system.
2 Source – Intel. System configuration for data node(s): same as above, plus 2x NVMe* PCIe* Intel® Optane™ DC SSD P4800X 375GB and 2x NVMe* PCIe* Intel® SSD DC P4500 3.7TB.
*Other names and brands may be claimed as the property of others.
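For readers who want to see the benchmarked workload in code, here is a minimal Spark 2.2 K-Means sketch in Java. The input path is a placeholder, and the cited benchmark generated its input with GenKMeansDataset and ran inside a fuller harness that this sketch does not reproduce:

```java
import org.apache.spark.ml.clustering.KMeans;
import org.apache.spark.ml.clustering.KMeansModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class KMeansBench {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("KMeansBench")
                .getOrCreate();

        // Placeholder input: pre-generated feature vectors in libsvm format.
        Dataset<Row> points = spark.read().format("libsvm").load(args[0]);

        // k, iteration count and seed are illustrative choices.
        KMeans kmeans = new KMeans().setK(10).setMaxIter(20).setSeed(1L);
        long t0 = System.nanoTime();
        KMeansModel model = kmeans.fit(points);
        System.out.printf("k-means fit: %.1f s, cost: %.3e%n",
                (System.nanoTime() - t0) / 1e9, model.computeCost(points));

        spark.stop();
    }
}
```

Submitted via spark-submit to the cluster, the fit time is the quantity the runtime bars above compare; IMDT enlarges the memory pool available to Spark executors, which is what lets the same dataset fit on fewer data nodes.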

Universal Memory and Storage Solution for HPC

Re-architecting a cluster from separate "fat memory nodes" and "I/O nodes" to a single set of "IMDT and I/O nodes", configured in real time for memory or storage.

Yesterday: fat memory and I/O nodes. Today: Intel® Memory Drive Technology.
• A composable architecture allows converting systems from Optane as storage to Optane as memory.
• IMDT shows sustained performance beyond DRAM size on typical HPC codes (Graph500, GEMM).
• Overheads to Optane are well hidden in compute-intensive applications.
• Affordable performance – more hybrid systems can be populated in a cluster.

Improving Performance of HPC Storage

Purpose

Accelerating traditional HPC parallel storage by introducing new features to improve small I/O.

Architecture ingredients:
• Bringing PCIe*/NVMe* into the typical HPC storage ecosystem to improve small I/O
• Intel® PCIe SSDs for Lustre*, BeeGFS*, Ceph*
• Intel® Optane™ SSDs as journal or MDT drives

Use cases (Lustre*):
• Metadata server (MDS)
• HSM storage tier
• DSS with Intel CAS for Lustre*
• All-flash scratch file systems

*Other names and brands may be claimed as the property of others.

New: RSC Tornado Hyper-converged

RSC Tornado node:
• 2x Intel® Xeon® Scalable (Skylake-SP) processors, up to 205W with 28 cores each
• Intel® Server Board S2600BP with two 10GigE ports on-board and (optional) Intel QuickAssist support
• RSC Management Module with dedicated Ethernet fabric
• Up to 12 hot-swap NVMe SSDs; for example, each can be an Intel® SSD DC P4511 (NVMe, M.2) 1-2TB, configured as disk or as memory via IMDT
• Memory per node – up to 768GiB DDR4 Reg ECC, up to 2666 MHz
• 2x Intel® Omni-Path 100 Gb/s adapters (or EDR InfiniBand or Ethernet), providing up to 200Gbps of external fabric bandwidth

NVMe-attached SSDs can provide:
• Large and fast storage: up to 24TB+ per node today
• A large-memory-capacity node via Intel Memory Drive Technology (IMDT), with up to 4.2TB of RAM today
• Many combinations of the previous two options, e.g. 3TB RAM and 8TB disk

100% ‘hot water’ liquid cooled solution for stable operation and high safety of components

RSC Proprietary. Copyright © 2016-18 RSC Group and its companies. All rights reserved. Patent pending. *Other names and brands may be claimed as the property of others. Copyright © 2009, Intel Corporation.

RSC Lustre Storage-on-Demand: IO500 Benchmark Results

56 GB/s easy_read ior test

36 GB/s easy_write ior test

Configuration:
• Tornado hyper-converged nodes
• 1x MDS
• 12x OSS (6x 1TB NVMe SSD each)
• 24 clients to load Lustre
• 100 Gb/s Intel® Omni-Path interconnect

JINR System at IO500 Rating – #9

DAOS: Distributed Asynchronous Object Storage

Scale-out object store built from the ground up for massively distributed NVM storage.

DAOS benefits:
• Built over a new userspace PMEM/NVMe software stack (SPDK, PMDK)
• High throughput/IOPS at arbitrary alignment/size
• Ultra-fine-grained I/O
• Scalable communications and I/O over homogeneous, shared-nothing servers
• Software-managed redundancy – declustered replication and erasure code with self-healing

[Figure: 3rd-party applications and HPC workflows consume rich data models through a data-model library on top of the DAOS storage platform, which runs over SCM and NVMe media rather than NVRAM/NVMe/HDD block storage.]

Open source, Apache 2.0 license: https://github.com/daos-stack/daos

EDSFF "Ruler". Intel is Leading a Revolutionary Form Factor.

1 A group of 15 companies working together¹
2 Goal to maximize storage efficiency by defining revolutionary industry-standard form factors
3 Broad, dynamic range of solutions that scales with new interface speeds

A healthy ecosystem: ODM/OEM solutions and SSD suppliers.

1 List of EDSFF members provided at https://edsffspec.org/
*Other names and brands may be claimed as the property of others.

EDSFF: Intel is Building the Most Robust "Ruler" Portfolio

[Figure: DC PCIe* SSD GB SAM by form factor, % of GB shipped, 2017–2022¹ – EDSFF (E1.L 9.5mm, E1.L 18mm, E1.S) grows to 45% of the data center serviceable available market (SAM) by 2022, displacing U.2, M.2, and AIC.]

Capacity scaling.
• Up to 32 E1.L 9.5mm drives per 1U²
• Up to 48 E1.S drives per 1U²

Thermal efficiency.
• Up to 2x less airflow needed per E1.L 9.5mm SSD vs. U.2 15mm³
• Up to 3x less airflow needed per E1.S SSD vs. U.2 7mm³

Enhanced serviceability.
• Fully front-serviceable with integrated pull latch
• Integrated, programmable LEDs
• Remote, drive-specific power cycling

Future ready.
• x4, x8, x16 support; ready for PCIe* 4.0 and 5.0⁴

1 Source: Intel NSG Market Forecast, Q2'18
2 Source – EDSFF form factor specifications shown at edsffspec.org
3 Source – Intel. Results have been estimated or simulated using internal analysis or architecture simulation or modeling, and provided for informational purposes. Comparing airflow required to maintain equivalent temperature of a 4TB U.2 15mm Intel® SSD DC P4500 to a 4TB "Ruler" form factor Intel® SSD DC P4500. Simulation involves three drives of each form factor in a sheet-metal representation of a server, 12.5mm pitch for the "Ruler" form factor, 1000m elevation, limiting SSD case temperature of 70C or thermal-throttling performance, whichever comes first; 5C guard band. Results used as a proxy for airflow anticipated on the EDSFF-spec-compliant "Ruler" form factor Intel® SSD P4510.
4 EDSFF Future Ready – https://edsffspec.org/edsff-resources/
*Other names and brands may be claimed as the property of others.

Reduce Cost by Converting HDDs to Intel® QLC Technology

HDD: 256 drives @ 4TB each (1 petabyte, 2U enclosures) → QLC SSD: 32 drives @ 32TB each (~1 petabyte in 1U; Intel® SSD D5-P4326 30.72TB E1.L).

Reducing cost of operations:
• Up to 64% lower power¹
• Up to 2.8x reduced cooling cost¹
• Up to 20x greater rack consolidation¹
• 36x fewer drive replacements¹,²

1. Power, cooling, consolidation. Based on HDD: 7.2K RPM 4TB HDD, AFR of 2.00% and 7.7W active power, 24 drives in 2U (1,971W total power for 256 drives), https://www.seagate.com/files/www-content/datasheets/pdfs/exos-7-e8-data-sheet-DS1957-1-1709US-en_US.pdf; SSD: 22W active power, 0.44% AFR, 32 drives in 1U (704W total power). Cooling cost based on a deployment term of 5 years, a kWh cost of $0.158, and 1.20 watts of cooling per watt of power. Based on 3.5" HDD 2U 24-drive and EDSFF E1.L 1U 32-drive enclosures. Hybrid storage based on using an Intel TLC SSD for cache.
2. Drive replacement. Calculation: HDD: 2% AFR x 256 drives x 5 years = 25.6 replacements in 5 years; SSD: 0.44% AFR x 32 drives x 5 years = 0.7 replacements in 5 years.
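The headline ratios follow directly from the figures in footnotes 1 and 2:

$$ P_{\mathrm{HDD}} = 256 \times 7.7~\mathrm{W} \approx 1971~\mathrm{W}, \qquad P_{\mathrm{SSD}} = 32 \times 22~\mathrm{W} = 704~\mathrm{W} $$

$$ 1 - \tfrac{704}{1971} \approx 64\%~\text{lower power}, \qquad \tfrac{1971}{704} \approx 2.8\times~\text{less cooling}, \qquad \tfrac{25.6}{0.7} \approx 36\times~\text{fewer replacements} $$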

Intel® QLC Technology. Accelerating HDD Displacement in Warm Storage.

• Warm data (optimize HDD out for TCO): today on TLC NAND; with QLC, on QLC NAND.
• Cold data (optimize for $/GB): HDD in both cases.

Broad QLC portfolio:
• Intel® SSD D5-P4320 – QLC NAND, U.2, 15mm, 8TB
• Intel® SSD D5-P4326 – QLC NAND, U.2, 15mm, 16/32TB and EDSFF, 9.5mm, 16/32TB; 16TB coming Q4'18¹, 32TB coming 2019¹

1 All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

I/O Node, Burst Buffer, NVMe over Fabrics (NVMe-oF)

Purpose

Accelerating data transfer in and out of compute by scaling I/O nodes relative to compute nodes, with close-to-local I/O latencies.

Architecture ingredients:
• PCIe I/O-rich nodes with a balanced input-output configuration, so the internal I/O processing capability can scale externally
• Time to market with Omni-Path and Ethernet products
• Typical attach rate is 30 compute nodes to 1 burst buffer node
• Optimal NVMe SSD count depends on the fabric solution used and the available bandwidth
• NVMe-oF supports the OPA fabric with the 10.4 driver release and kernel 4.5+

Use cases:
• Burst buffer implementations, e.g. Cray DataWarp nodes
• Data Transfer Node designs with Aspera* and Zettar* solutions

[Figure: NVMe-oF block diagram – NVMe host software and a host-side transport abstraction on dual Skylake-EP nodes, connected over fabrics (RDMA: RoCE, iWARP; InfiniBand; next-gen fabrics) to a controller-side transport abstraction and NVMe SSDs.]

Optimization Resources

Intel® Storage Performance Development Kit (SPDK): spdk.io

Intel® Smart Storage Manager: github.com/Intel-bigdata/SSM

Persistent Memory Development Kit: pmem.io/pmdk/

Access bare metal Intel® Optane™ SSD servers: acceleratewithoptane.com
