TECHNOLOGY BRIEF Aerospike on OpenFlex™ F3200

Reference Architecture

WESTERN DIGITAL CORPORATION 2579-810391-A01

Revision History

Revision  Date           Description
A00       January 2021   Initial release
A01       February 2021  Public release

Typographical Conventions

This document uses the typographical conventions listed in the table below.

Table 0-1. Typographical Conventions

Convention | Usage
Bold | Command and option names in definitions and examples; directories, files, partitions, and volumes; interface controls (check boxes, radio buttons, fields, folders, icons, list boxes, items inside list boxes, multicolumn lists, menu choices, menu names, and tabs); keywords and parameters in text.
Italics | Variable information, including user-supplied information on command lines; citations (titles of books, diskettes, and CDs); emphasis of words; words defined in text.
Monospace | Screen output and code samples; examples and code examples, for example, this is a line of code; file names, programming keywords, and other elements that are difficult to distinguish from surrounding text; message text and prompts addressed to the user; text that the user must enter; values for arguments or command options.

WESTERN DIGITAL CORPORATION 2579-810391-A01

Western Digital Corporation, Inc. or its affiliates' (collectively “Western Digital”) general policy does not recommend the use of its products in life support applications wherein a failure or malfunction of the product may directly threaten life or injury. Per Western Digital Terms and Conditions of Sale, the user of Western Digital products in life support applications assumes all risk of such use and indemnifies Western Digital against all damages. This document is for information use only and is subject to change without prior notice. Western Digital assumes no responsibility for any errors that may appear in this document, nor for incidental or consequential damages resulting from the furnishing, performance or use of this material. Absent a written agreement signed by Western Digital or its authorized representative to the contrary, Western Digital explicitly disclaims any express and implied warranties and indemnities of any kind that may, or could, be associated with this document and related material, and any user of this document or related material agrees to such disclaimer as a precondition to receipt and usage hereof. Each user of this document or any product referred to herein expressly waives all guaranties and warranties of any kind associated with this document, any related materials or such product, whether expressed or implied, including without limitation, any implied warranty of merchantability or fitness for a particular purpose or non-infringement.
Each user of this document or any product referred to herein also expressly agrees Western Digital shall not be liable for any incidental, punitive, indirect, special, or consequential damages, including without limitation physical injury or death, property damage, lost data, loss of profits or costs of procurement of substitute goods, technology, or services, arising out of or related to this document, any related materials or any product referred to herein, regardless of whether such damages are based on tort, warranty, contract, or any other legal theory, even if advised of the possibility of such damages. This document and its contents, including diagrams, schematics, methodology, work product, and intellectual property rights described in, associated with, or implied by this document, are the sole and exclusive property of Western Digital. No intellectual property license, express or implied, is granted by Western Digital associated with the document recipient's receipt, access and/or use of this document or the products referred to herein; Western Digital retains all rights hereto. Western Digital, the Western Digital logo, and OpenFlex are registered trademarks or trademarks of Western Digital Corporation or its affiliates in the U.S. and/or other countries. Intel and Optane are trademarks of Intel Corporation or its subsidiaries in the US and/or other countries. The NVMe and NVMe-oF word marks are trademarks of NVM Express, Inc. All other marks are the property of their respective owners. Product specifications subject to change without notice. Pictures shown may vary from actual products. Not all products are available in all regions of the world. © 2021 Western Digital Corporation or its affiliates. All rights reserved.

TABLE OF CONTENTS

1. EXECUTIVE SUMMARY
2. SOLUTION HIGHLIGHTS
3. TECHNOLOGY OVERVIEW
   3.1 OpenFlex F3200 and E3000 Overview
      3.1.1 Composable Infrastructure
      3.1.2 OpenFlex
      3.1.3 Open Composable API
      3.1.4 Benefits of OpenFlex
      3.1.5 System Data Ingest Architecture
   3.2 Aerospike Overview
   3.3 Aerospike Cluster
      3.3.1 The Client Layer
      3.3.2 Distribution Layer
      3.3.3 Data Storage Layer
   3.4 Aerospike Data Distribution
4. AEROSPIKE CLUSTER TEST CONFIGURATION DETAILS
   4.1 Logical Cluster Topology
      4.1.1 Setting Up Aerospike Cluster
   4.2 Aerospike User Interface View
      4.2.1 Performance Tests
      4.2.2 Test One
      4.2.3 Test Two
5. USE CASES AND APPLICATIONS / WORKLOADS
6. SUMMARY
7. RESOURCES AND ADDITIONAL LINKS
8. CONTACT INFORMATION

LIST OF FIGURES

Figure 3-1 OpenFlex F3200 and E3000 Layouts
Figure 3-2 OpenFlex F3200 Specifications
Figure 3-3 External Line Interface
Figure 3-4 Aerospike Overview
Figure 3-5 Aerospike Architecture Layers
Figure 3-6 Paxos-Based Gossip Algorithm
Figure 3-7 Aerospike Data Partitioning Scheme
Figure 4-1 Sample Architecture
Figure 4-2 OpenFlex F3200 Aerospike Cluster Performance Details
Figure 4-3 ASBenchmark Bandwidth Details for 1 Million Records
Figure 4-4 ASBenchmark Latency Results for 1 Million Records
Figure 4-5 ASBenchmark Latency Results for 1 Million Records
Figure 4-6 Read and Write TPS
Figure 4-7 ASBenchmark Bandwidth Details for 1 Million Records
Figure 4-8 ASBenchmark Latency Results for 1 Million Records

LIST OF TABLES

Table 0-1 Typographical Conventions
Table 3-1 Key Pillars for Open Composable Infrastructure
Table 3-2 OpenFlex F3200 Specifications
Table 4-1 Aerospike Cluster Configuration Details
Table 4-2 Maximum TPS Observed

1.0 EXECUTIVE SUMMARY

It’s something that’s eluded the Information Technology (IT) industry for years – a scalable, reliable operational platform that combines extremely fast read/write speeds with strong data consistency. Faster data access and data consistency for transactional workloads have been areas of innovation for decades, fueled in part by the power and popularity of distributed computing environments. Node and network failures are unavoidable, and both increase as the size of the data cluster grows. Consequently, it’s important for IT organizations to understand how different database offerings cope with the inevitable challenges of faster data access, data consistency and availability during normal operations and failure scenarios. That is not an easy task.

For firms struggling to manage high and rapidly growing volumes of operational data in real- time, it is worth exploring how recent advances in database management technology provided by combining Aerospike’s distributed NoSQL system with Western Digital’s OpenFlex Composable Infrastructure can help achieve these goals.

The OpenFlex architecture allows storage to be disaggregated from compute, enabling applications to share a common pool of storage capacity. Data can easily be shared between applications, or needed capacity can be allocated to an application regardless of location. By decoupling software from the underlying platform, enterprises can build solutions with greater flexibility spanning the portfolio of OpenFlex infrastructure offerings. This provides a decisive step forward in reducing the cost of ownership for data center deployments to come, and it helps address the problems caused by the volume, variety, velocity, access speed and consistency of data.

Aerospike provides a database system that processes large volumes of operational data in real time while delivering exceptional runtime performance, high availability and cost efficiency while still keeping your data safe.

The combination of Aerospike’s distributed NoSQL system with Western Digital’s OpenFlex Composable Infrastructure provides capabilities to IT operators that enable them to connect disaggregated resources intelligently and manage, modify and scale these components over time. With this solution, IT can achieve greater productivity, agility, performance and faster time-to-market.

2.0 SOLUTION HIGHLIGHTS

This paper introduces key capabilities that can be achieved by using Aerospike with the OpenFlex F3200 to modernize data management infrastructures, such as:

 Database with billions of objects

 Rapid read/write speeds without extensive tuning or a separate data cache

 Extremely high throughput

 24x7 availability, including cross-data center replication

 Operational ease during scale-out and maintenance

 Inter-operation with popular software offerings

 Fault-tolerant service for mission critical data with 100% uptime

Aerospike provides a database system that processes large volumes of operational data in real time while delivering exceptional runtime performance, high availability and cost efficiency while helping keep your data safe. The OpenFlex Composable Infrastructure delivers a simpler deployment model with a fluid pool of resources that are used for scale-out workloads such as NoSQL or web-scale applications with significantly improved agility.

This brief provides a high-level design reference architecture for implementing the Aerospike database with an OpenFlex F3200 device. OpenFlex F3200 is a fabric device that leverages the Open Composable Infrastructure (OCI) approach in the form of disaggregated data storage using NVMe™-over-Fabrics (NVMe-oF™).

The Aerospike database platform is modeled on a classic shared-nothing database architecture. The database cluster consists of server nodes, each of which has CPUs, DRAM, rotational disks (HDDs) and optional flash storage units (SSDs).

3.0 TECHNOLOGY OVERVIEW

3.1 OpenFlex F3200 and E3000 Overview

The OpenFlex F3200 is a flash storage device that can be housed in the E3000 in quantities of 1 to 10 per enclosure. The OpenFlex E3000 is a 3U rack-mounted data storage enclosure built on the OpenFlex platform. OpenFlex is Western Digital’s architecture that supports OCI. The OpenFlex F3200 and E3000 are fabric devices that leverage this OCI approach in the form of disaggregated data storage using NVMe-oF. NVMe-oF is a networked storage protocol that allows storage to be disaggregated from compute, making that storage widely available to multiple applications and servers. Western Digital’s vision for Open Composable Infrastructure is based on four key pillars:

Table 3-1. Key Pillars for Open Composable Infrastructure

Open | Open in both API and form factor; designed for robust interoperability of multi-vendor solutions.

Scalable | Ability to compose solutions at the width of the network; enables self-organizing systems of composable elements that communicate horizontally.

Disaggregated | Pools of resources available for any use case, defined at run time; independent scaling of compute and storage elements to maximize efficiency and agility; inclusive of both disk and flash.

Extensible | Entire ecosystem of composable elements managed and orchestrated using a common API framework; prepared for yet-to-come composable elements, e.g., memory and accelerators.

By enabling applications to share a common pool of storage capacity, data can be easily shared between applications, or needed capacity can be allocated to an application regardless of location. Exploiting NVMe device-level performance, NVMe-oF promises to deliver the lowest end-to-end latency from application to shared storage. NVMe-oF enables composable infrastructures to deliver the data locality benefits of NVMe DAS (low latency, high performance) while providing the agility and flexibility of sharing storage and compute. The maximum data storage capacity is 614TB1 when leveraging a full set of 10 F3200 fabric devices. A single F3200 can scale up to 2 million IOPS, so each fully populated E3000 can scale cumulatively to 20 million IOPS in a 3U solution.

1 One terabyte (TB) is equal to one trillion bytes. Actual user capacity may be less due to operating environment.
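The enclosure-level figures above follow directly from the per-device specifications; a small sketch of the arithmetic:

```python
# Per-device figures quoted in this brief for the OpenFlex F3200.
F3200_CAPACITY_TB = 61.4       # max raw data storage capacity per device
F3200_MAX_IOPS = 2_000_000     # per-device IOPS ceiling
DEVICES_PER_E3000 = 10         # an E3000 enclosure houses up to 10 devices

def enclosure_capacity_tb(devices=DEVICES_PER_E3000):
    """Raw capacity of an E3000 populated with `devices` F3200s."""
    return round(F3200_CAPACITY_TB * devices, 1)

def enclosure_iops(devices=DEVICES_PER_E3000):
    """Cumulative IOPS across `devices` F3200s in one enclosure."""
    return F3200_MAX_IOPS * devices

print(enclosure_capacity_tb())  # 614.0 (TB, full set of 10 devices)
print(enclosure_iops())         # 20000000 (20 million IOPS per 3U)
```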

3.1.1 Composable Infrastructure

Composable Infrastructure is an emerging category of data center infrastructure that seeks to disaggregate compute, storage and networking fabric resources into shared resource pools that can be available for on-demand allocation (i.e., composable). Composability occurs at the software level; disaggregation occurs at the hardware level using NVMe-over-Fabrics (NVMe-oF). Together, these vastly improve compute and storage utilization, performance and agility in the data center.

3.1.2 OpenFlex

OpenFlex is Western Digital’s architecture that supports Open Composable Infrastructure through storage disaggregation – both disk and flash natively attached to a scalable fabric. OpenFlex does not rule out multiple fabrics, but whenever possible, Ethernet is used as a unifying interconnect for both flash and disk because of its broad applicability and availability.

3.1.3 Open Composable API

Western Digital's new Open Composable API is designed for data center composability. It builds upon existing industry standards, utilizing the best features of those standards as well as practices from proprietary management protocols.

Note: All testing is done using (1) OpenFlex E3000 populated with (10) F3200 devices.

Figure 3-1. OpenFlex F3200 and E3000 Layouts

3.1.4 Benefits of OpenFlex

 2 x 50G per fabric device, scaling up to 20 x 50G per OpenFlex E3000

 Reduced CPU cycles on the host server, enabling better bandwidth usage and higher scalability

 Support for multiple Ethernet switching and NIC vendors

 Ecosystem and partner enablement

Figure 3-2. OpenFlex F3200 Specifications

Table 3-2. OpenFlex F3200 Specifications

Specification Value

Max Raw Data Storage Capacity per Device 61.4TB

Data Ingest Capability 2x 50G Ethernet

Data Transfer Rates 12GBps*

Number per Enclosure Up to 10

Hot Swappable Yes

Service Window 5 minutes

Dynamic Provisioning Support Yes

3.1.5 System Data Ingest Architecture

The system's main data ingest architecture uses two separate 50G Ethernet connections through each dual QSFP28 connector on the rear I/O of the chassis.

Figure 3-3. External Line Interface

This completes the connection from the device that is inserted into a chassis slot, through the backplane, into the QSFP connectors. The architecture supports the hot-swap nature of the devices and does not require any sort of shutdown or disconnection before servicing. Each 100G Ethernet connection is split in half at the QSFP28 connectors, resulting in 50G per connector and allowing for dual-port functionality with the device. The F3200 uses NVMe-over-Fabrics technology with the NVMe RoCEv2 protocol for all network transmission.

3.2 Aerospike Overview

Aerospike is a distributed NoSQL system that provides extremely fast – and predictable – read/write access to operational data sets that span billions of records in databases of tens to hundreds of terabytes. Its patented Hybrid Memory Architecture™ (HMA) delivers exceptional performance using a much smaller server footprint than competing solutions. Aerospike uses dynamic random-access memory (DRAM) for index and user data. Optionally, applications can commit each write directly to fast, non-volatile storage (solid state disks, or SSDs), which Aerospike treats as raw devices for speed and efficiency. Sophisticated (and automatic) data distribution techniques, a smart client layer and other features further drive Aerospike’s speed and cost efficiency.

Aerospike is a distributed, scalable database. The architecture has three key objectives:

 Create a flexible, scalable platform for web-scale applications.

 Provide the robustness and reliability (as in ACID) expected from traditional databases.

 Provide operational efficiency with minimal manual involvement.

Most real-time decision systems that use Aerospike require very high scale and need to make decisions within a strict SLA by reading from and writing to a database containing billions of data items at a rate of millions of operations per second with sub-millisecond latency. For years, Aerospike has been continuously used in over a hundred successful production deployments, as many enterprises have discovered that it can substantially enhance their user experience.

Figure 3-4. Aerospike Overview

3.3 Aerospike Cluster

Aerospike architecture includes the following layers:

 Client Layer - This cluster-aware layer includes open source client libraries, which implement Aerospike APIs, track nodes and know where data resides in the cluster.

 Clustering and Data Distribution Layer - This layer manages cluster communications and automates fail-over, replication, Cross-data center Replication (XDR) and intelligent re-balancing and data migration.

 Data Storage Layer - This layer reliably stores data in DRAM and Flash for fast retrieval.

 Client Layer - The Aerospike Smart Client™ is designed for speed. It is implemented as an open source linkable library available in C, C#, Java, Node.js and others.

Figure 3-5. Aerospike Architecture Layers

3.3.1 The Client Layer

 Implements the Aerospike API, the client-server protocol and talks directly to the cluster.

 Tracks nodes and knows where data is stored, instantly learning of changes to cluster configuration or when nodes go up or down.

 Implements its own TCP/IP connection pool for efficiency. Also detects transaction failures that have not risen to the level of node failures in the cluster and re-routes those transactions to nodes with copies of the data.

 Transparently sends requests directly to the node with the data and re-tries or re-routes requests as needed (for example, during cluster re-configurations).

 This architecture reduces transaction latency, offloads work from the cluster and eliminates work for the developer. It ensures that applications do not have to restart when nodes are brought up or down. And you don't have to waste time with cluster setup or add cluster management servers or proxies.
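The re-try and re-route behavior described above can be sketched generically as follows. This is illustrative logic only, not the actual Aerospike client implementation; names such as `TransientError` and `execute_with_reroute` are invented for the sketch.

```python
class TransientError(Exception):
    """A per-transaction failure that does not imply a node failure."""

def execute_with_reroute(request, replicas, max_attempts=3):
    """Send `request` to the partition's master first; on a transient
    failure, re-route to the next node holding a copy of the data.

    `replicas` is the partition's replication list (master first) and
    `request` is a callable taking the target node. Illustrative only.
    """
    last_error = None
    for attempt in range(max_attempts):
        node = replicas[attempt % len(replicas)]
        try:
            return request(node)
        except TransientError as err:
            last_error = err       # re-try on the next replica
    raise last_error

# Usage: a flaky master ("node-a") forces a transparent re-route
# to the first replica ("node-b") without involving the caller.
def flaky_read(node):
    if node == "node-a":
        raise TransientError("timeout")
    return f"value-from-{node}"

print(execute_with_reroute(flaky_read, ["node-a", "node-b"]))  # value-from-node-b
```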

3.3.2 Distribution Layer

The Aerospike shared-nothing architecture is designed to reliably store terabytes of data with automatic fail-over, replication and Cross-Datacenter Replication (XDR). This layer scales linearly. The Distribution layer is designed to eliminate manual operations through systematic automation of all cluster management functions. It includes three modules:

 Cluster Management Module

 Data Migration Module

 Transaction Processing Module

3.3.2.1 Cluster Management Module Tracks nodes in the cluster. The key algorithm is a Paxos-based gossip-voting process that determines which nodes are considered part of the cluster. Aerospike implements a special heartbeat (both active and passive) to monitor inter-node connectivity.

3.3.2.2 Data Migration Module When you add or remove nodes, Aerospike Database cluster membership is ascertained. Each node uses a distributed hash algorithm to divide the primary index space into data slices and assign owners. The Aerospike Data Migration module intelligently balances data distribution across all nodes in the cluster, ensuring that each bit of data replicates across all cluster nodes and data centers. This operation is specified in the system replication factor configuration.

Figure 3-6. Paxos-Based Gossip Algorithm

3.3.2.3 Transaction Processing Module

Reads and writes data on request and provides the consistency and isolation guarantees. This module is responsible for:

 Sync/Async Replication: For writes with immediate consistency, it propagates changes to all replicas before committing the data and returning the result to the client.

 Proxy: In rare cases during cluster re-configurations when the Client Layer may be briefly out of date, the Transaction Processing module transparently proxies the request to another node.

 Resolution of Duplicate Data: For clusters recovering from being partitioned (including when restarting nodes), this module resolves any conflicts between different copies of data. The resolution can be based on either the generation count (version) or the last update time.
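The resolution rule in the last bullet can be sketched as follows. This is a simplification for illustration; the field names are assumptions, not Aerospike's internal representation.

```python
def resolve_duplicates(copies, policy="generation"):
    """Pick the winning copy among conflicting versions of a record.

    Each copy is a dict with 'generation' (version count),
    'last_update_time' and 'bins'. Illustrative only.
    """
    if policy == "generation":
        return max(copies, key=lambda c: c["generation"])
    if policy == "last_update_time":
        return max(copies, key=lambda c: c["last_update_time"])
    raise ValueError(f"unknown policy: {policy}")

# Two conflicting copies of one record after a partition heals:
copies = [
    {"generation": 3, "last_update_time": 1700000200, "bins": {"n": 1}},
    {"generation": 5, "last_update_time": 1700000100, "bins": {"n": 2}},
]
print(resolve_duplicates(copies)["bins"])                      # {'n': 2}
print(resolve_duplicates(copies, "last_update_time")["bins"])  # {'n': 1}
```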

3.3.3 Data Storage Layer

Aerospike is a key-value store with a schemaless data model. Data flows into policy containers called namespaces, which are semantically similar to databases in an RDBMS. Within a namespace, data is subdivided into sets and records. Each record has an indexed key that is unique in the set, and one or more named bins that hold values associated with the record. You do not need to define sets and bins; for maximum flexibility, they can be added at run time. Values in bins are strongly typed and can include any supported data type, but bins themselves are not typed, so different records can have the same bin holding values of different types. Indexes, including the primary index and optional secondary indexes, are stored by default in DRAM for ultra-fast access. The primary index can also be configured to be stored in persistent memory or on an NVMe flash device. Values can be stored either in DRAM or, more cost-effectively, on SSDs. You can configure each namespace separately, so small namespaces can take advantage of DRAM and larger ones gain the cost benefits of SSDs.
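The per-namespace choice between DRAM and SSD is made in aerospike.conf. Below is a hedged sketch of two such namespace definitions; the namespace names, device path and sizes are placeholders, and directives should be verified against the configuration reference for your Aerospike version:

```
namespace cache {                  # small, hot namespace held in DRAM
    replication-factor 2
    memory-size 4G
    storage-engine memory
}

namespace ledger {                 # larger namespace stored on NVMe SSD
    replication-factor 2
    memory-size 8G                 # primary index remains in DRAM
    storage-engine device {
        device /dev/nvme0n1        # placeholder raw device path
        write-block-size 128K
        data-in-memory false
    }
}
```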

3.4 Aerospike Data Distribution

A record’s primary key is hashed into a 160-bit digest using the RIPEMD-160 algorithm, which is extremely robust against collisions. The digest space is partitioned into 4096 non-overlapping partitions. A partition is the smallest unit of data ownership in Aerospike. Records are assigned to partitions based on the primary key digest. Even if the distribution of keys in the key space is skewed, the distribution of keys in the digest space—and therefore in the partition space—is uniform. This data partitioning scheme is unique to Aerospike.

Figure 3-7. Aerospike Data Partitioning Scheme

It significantly contributes to avoiding the creation of hotspots during data access, which helps achieve high levels of scale and fault tolerance. Aerospike indexes and data are co-located to avoid any cross-node traffic when running read operations or queries. Writes may require communication between multiple nodes based on the replication factor. Co-location of index and data, when combined with a robust data distribution hash function, results in uniformity of data distribution across nodes. This, in turn, ensures that:

 Application workload is uniformly distributed across the cluster,

 Performance of database operations is predictable,

 Scaling the cluster up and down is easy, and

 Live cluster reconfiguration and subsequent data re-balancing is simple, non-disruptive and efficient.
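The hashing step described above — primary key, to 160-bit digest, to one of 4096 partitions — can be sketched as follows. Since RIPEMD-160 availability depends on the local OpenSSL build, SHA-1 (also 160-bit) is used here as a stand-in, and the simple modulo bit-selection is an assumption; Aerospike's exact digest handling may differ.

```python
import hashlib
from collections import Counter

N_PARTITIONS = 4096  # fixed partition count described above

def partition_id(primary_key: str) -> int:
    """Map a record's primary key to one of 4096 partitions.

    Aerospike hashes the key with RIPEMD-160; SHA-1 stands in here.
    The modulo reduction is a sketch, not Aerospike's exact scheme.
    """
    digest = hashlib.sha1(primary_key.encode()).digest()  # 160-bit digest
    return int.from_bytes(digest, "big") % N_PARTITIONS

# Even a skewed key space ("user0", "user1", ...) spreads uniformly
# across the digest space, and therefore across the partition space.
counts = Counter(partition_id(f"user{i}") for i in range(100_000))
print(min(counts.values()), max(counts.values()))  # roughly 100000/4096 ≈ 24 each
```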

The Aerospike partition assignment algorithm generates a replication list for every partition. The replication list is a permutation of the cluster succession list. The first node in the partition’s replication list is the master for that partition, the second node is the first replica, the third node is the second replica and so on. The partition assignment algorithm has the following objectives:

 To be deterministic so that each node in the distributed system can independently compute the same partition map,

 To achieve uniform distribution of master partitions and replica partitions across all nodes in the cluster, and

 To minimize movement of partitions during cluster view changes.
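An assignment with these three properties can be sketched with rendezvous (highest-random-weight) hashing. This is not Aerospike's actual algorithm — it is an assumption chosen for illustration — but it shows how a deterministic per-partition permutation gives every node the same map, spreads masters evenly and moves few partitions on a view change.

```python
import hashlib
from collections import Counter

def node_weight(node: str, pid: int) -> int:
    """Pseudo-random but stable weight of `node` for partition `pid`."""
    h = hashlib.sha256(f"{node}:{pid}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def replication_list(nodes, pid):
    """Permutation of the node list for partition `pid`: master first,
    then first replica, and so on. Rendezvous hashing stands in for
    Aerospike's actual assignment algorithm (an assumption)."""
    return sorted(nodes, key=lambda n: node_weight(n, pid), reverse=True)

nodes = ["node-a", "node-b", "node-c"]

# Deterministic: every node computes the same partition map independently.
pmap = {pid: replication_list(nodes, pid) for pid in range(4096)}

# Masters are spread roughly evenly across the three nodes.
print(Counter(pmap[pid][0] for pid in range(4096)))

# Minimal movement: removing a node leaves the relative order of the
# surviving nodes unchanged, so only that node's partitions move.
survivors = [n for n in nodes if n != "node-c"]
assert all(
    replication_list(survivors, pid) == [n for n in pmap[pid] if n != "node-c"]
    for pid in range(4096)
)
```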

4.0 AEROSPIKE CLUSTER TEST CONFIGURATION DETAILS

OpenFlex and Aerospike have created a unique reference architecture with industry-standard servers and Ethernet network switches. It leverages the OCI approach in the form of disaggregated data storage using NVMe-over-Fabrics (NVMe-oF). The F3200 uses NVMe-over-Fabrics technology with the NVMe RoCEv2 protocol for all network transmission. This end-to-end solution can be specifically designed to accelerate large-scale production while delivering the compute and storage performance needed. For the Aerospike cluster performance test on the OpenFlex F3200 device, we configured a three-node cluster using one OpenFlex fabric device.

4.1 Logical Cluster Topology

The minimum requirements to build out the cluster are:

 (1) AMC (Aerospike Management Console)

 (3) Nodes

 (1) OpenFlex Fabric Device

 (1) 100G Switch

Figure 4-1. Sample Architecture

Table 4-1. Aerospike Cluster Configuration Details

Product OpenFlex F3200 Fabric Device

Interface DualQSFP28 (2x50Gb)

Host Server Dual-socket server with a 20-core CPU per socket; 80 logical cores in total with HT enabled

Host OS Ubuntu 18.04.3 LTS (Bionic Beaver)

Kernel 4.15.0.55-generic x86_64

Physical Cores 80

Host NIC CX5 - MCX516A-CCAT

NIC Package Version 4.6-1.0.1.1

Topology Switched

Aerospike Version 4.9.0.5

Number of Volumes 12

Volumes connected per host 4 volumes per host → 12 volumes total across 3 hosts

Memory Configuration 32GB per node → 96GB total for 3 nodes

4.1.1 Setting Up Aerospike Cluster

For Aerospike cluster configuration, the following must be configured:

 Configure network service and heartbeat sub-contexts

 Configure namespaces

 Configure logging and log rotation

 Configure security (optional)

 Configure rack-aware (optional)

 Configure Cross-Datacenter Replication (optional)
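These steps map onto sub-contexts of aerospike.conf. Below is a minimal hedged skeleton for the non-optional items; addresses, ports, paths and sizes are placeholders, and directive names should be verified against the configuration reference for your Aerospike version:

```
network {
    service {
        address any
        port 3000                       # client traffic
    }
    heartbeat {
        mode mesh                       # mesh (unicast) clustering
        port 3002
        mesh-seed-address-port 10.0.0.1 3002   # placeholder seed node
        interval 150
        timeout 10
    }
    fabric { port 3001 }
}

logging {
    file /var/log/aerospike/aerospike.log {
        context any info                # rotate the file externally
    }
}

namespace test {
    replication-factor 2
    memory-size 32G                     # matches the 32GB-per-node setup above
    storage-engine device {
        device /dev/nvme0n1             # NVMe-oF volume from the F3200 (placeholder)
        write-block-size 128K
    }
}
```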

4.2 Aerospike User Interface View

Figure 4-2. OpenFlex F3200 Aerospike Cluster Performance Details

4.2.1 Performance Tests

This experiment was intended to evaluate Aerospike's scale-out capability. Typical deployments start at a small initial size and grow as needs grow. Keep in mind that this set of experiments was performed using a single OpenFlex F3200 fabric device hosting a three-node cluster. We used the Aerospike Benchmark tool to run all performance tests. As part of this testing we wrote two data sets into the Aerospike database: 1 million records with integer keys (starting at 1) and a 50-character string value, and 1 million records with integer keys and a 1,000-byte record size containing 10 fields/bins of 100 bytes each.

Table 4-2. Maximum TPS Observed

Test Details | 1 million integer keys/records (starting at "1") with 50-character string value | 1 million integer keys/records with 1,000-byte record size

Read 100% | 835K TPS | 815K TPS

Below are recommendations for achieving better performance:

 Based on the available budget, customers can opt for a higher CPU/memory configuration to achieve maximum results.

 Depending on data requirements, customers can choose a lower-capacity or higher-capacity fabric device.

 Depending on data requirements, customers can expand the cluster configuration to gain more storage space as needed.

4.2.2 Test One

As part of this test, we wrote 1 million records with integer keys (starting at 1) and a 50-character string value into the Aerospike database.
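A hedged sketch of the corresponding benchmark invocation follows; the flag names reflect common usage of the Aerospike benchmark tool but are assumptions here and should be checked against your installed version's --help:

```
# 1 million records, integer keys starting at 1, one 50-character
# string bin, 200 concurrent client threads, insert (write) workload.
asbenchmark -h 127.0.0.1 -p 3000 -n test -s testset \
            -k 1000000 -S 1 -o S:50 -w I -z 200
```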

Figure 4-3. ASBenchmark Bandwidth Details for 1 Million Records

Figure 4-4. ASBenchmark Latency Results for 1 Million Records

Figure 4-5. ASBenchmark Latency Results for 1 Million Records

4.2.3 Test Two

As part of this test, we wrote 1 million records with integer keys into the Aerospike database, each with a 1,000-byte record size containing 10 fields/bins of 100 bytes each.

Test parameters:

 Connect to localhost:3000 using the test namespace.

 Use different combinations of read and write ratios with 200 concurrent threads.

 Use 1 million keys/records with a 1,000-byte record size, containing 10 fields/bins of 100 bytes each.

 Benchmark synchronous methods.

Figure 4-6. Read and Write TPS

Figure 4-7. ASBenchmark Bandwidth Details for 1 Million Records

Figure 4-8. ASBenchmark Latency Results for 1 Million Records

5.0 USE CASES AND APPLICATIONS / WORKLOADS

The combined solution of Western Digital’s OpenFlex Composable Infrastructure with Aerospike’s distributed NoSQL system supports a wide range of use cases and environments, including:

 Financial services and technology

 Payment gateway

 Fraud detection technology

 AI & ML

 Retail and Entertainment

 Mass Transit

6.0 SUMMARY

The OpenFlex and Aerospike solution introduces new techniques for managing the tough challenges of performance, data consistency, data availability, data access latency and TCO. It enables greater economy, agility, efficiency and simplicity at scale and can provide 40% lower TCO than a traditional HCI architecture. It provides extremely low data access latencies (often sub-millisecond) for read/write workloads over large volumes of operational data at a fraction of the TCO of other solutions.

Furthermore, it eliminates the need for firms that require strong, immediate consistency to compromise on performance. With Aerospike on OpenFlex, firms can use the same database platform to act as a system of engagement as well as a system of record. This solution reference architecture is designed to provide an overview of the combined solution and the components employed in it. The results demonstrate the characteristics of Aerospike on OpenFlex F3200 environments with data in memory. They highlight the high performance, throughput, low latency and linear scalability achieved with Aerospike on the OpenFlex F3200. By combining OpenFlex platforms with Aerospike software, organizations can create a powerful, flexible solution for handling the most demanding modern workloads.

7.0 RESOURCES AND ADDITIONAL LINKS

Aerospike:
https://www.aerospike.com/
https://www.aerospike.com/docs/

OpenFlex: https://westerndigital.com/platforms

8.0 CONTACT INFORMATION

Western Digital 5601 Great Oaks Parkway, San Jose, CA 95119 Phone: +1-408-801-1000 Fax: +1-408-801-8657 Email: [email protected]

For service and literature:
support.wdc.com
www.westerndigital.com
800.ASK.4WDC (North America)
+31.88.0062100 (EMEA & EU)

2579-810391-A01 Western Digital February 2021 5601 Great Oaks Parkway San Jose, CA 95119 U.S.A.
