Elevating Commodity Storage with the SALSA Host Translation Layer


Nikolas Ioannou, Kornilios Kourtis, and Ioannis Koltsidas
IBM Research, Zurich

arXiv:1801.05637v2 [cs.OS] 10 Jan 2019. Published in the 2018 IEEE 26th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS).

Abstract

To satisfy increasing storage demands in both capacity and performance, industry has turned to multiple storage technologies, including Flash SSDs and SMR disks. These devices employ a translation layer that conceals the idiosyncrasies of their media and enables random access. Device translation layers are, however, inherently constrained: resources on the drive are scarce, they cannot be adapted to application requirements, and they lack visibility across multiple devices. As a result, the performance and durability of many storage devices are severely degraded. In this paper, we present SALSA: a translation layer that executes on the host and allows unmodified applications to better utilize commodity storage. SALSA supports a wide range of single- and multi-device optimizations and, because it is implemented in software, can adapt to specific workloads. We describe SALSA's design, and demonstrate its significant benefits using microbenchmarks and case studies based on three applications: MySQL, the Swift object store, and a video server.

1 Introduction

The storage landscape is increasingly diverse. The market is dominated by spinning magnetic disks (HDDs) and Solid State Drives (SSDs) based on NAND Flash. Broadly speaking, HDDs offer lower cost per GB, while SSDs offer better performance, especially for read-dominated workloads. Furthermore, emerging technologies provide new tradeoffs: Shingled Magnetic Recording (SMR) disks [69] offer increased capacity compared to HDDs, while Non-Volatile Memories (NVM) [39] offer persistence with performance characteristics close to that of DRAM. At the same time, applications have different requirements and access patterns, and no one-size-fits-all storage solution exists. Choosing between SSDs, HDDs, and SMRs, for example, depends on the capacity, performance, and cost requirements, as well as on the workload. To complicate things further, many applications (e.g., media services [6, 62]) require multiple storage media to meet their requirements.

Storage devices are also frequently idiosyncratic. NAND Flash, for example, has different access and erase granularities, while SMR disks preclude in-place updates, allowing only appends. Because upper layers (e.g., databases and filesystems) are often not equipped to deal with these idiosyncrasies, translation layers [11] are introduced to enable applications to access idiosyncratic storage transparently. Translation layers (TLs) implement an indirection between the logical space, as seen by the application, and the physical storage, as exposed by the device. A TL can either be implemented on the host (host TL) or on the device controller (drive TL).

It is well established that, for many workloads, drive TLs lead to sub-optimal use of the storage medium. Many works identify these performance problems and try to address them by improving the controller translation layer [19, 32], or by adapting various layers of the I/O software stack: filesystems [23, 29, 38], caches [24, 45], paging [49], and key-value stores [13, 14, 34, 46, 67]. In agreement with a number of recent works [5, 30, 43], we argue that these shortcomings are inherent to drive TLs, and advocate placing the TL on the host. While a host TL is not a new idea [17, 31], our approach is different from previous works in a number of ways. First, we focus on commodity drives, without dependencies on specific vendors. Our goal is to enable datacenter applications to use cost-effective storage, while maintaining acceptable performance. Second, we propose a unified TL framework that supports different storage technologies (e.g., SSDs, SMRs). Third, we argue for a host TL that can be adapted to different application requirements and realize different tradeoffs. Finally, we propose a TL that can virtualize multiple devices, potentially of different types. The latter allows optimizing TL functions such as load balancing and wear leveling across devices, while also addressing storage diversity by enabling hybrid systems that utilize different media.

SALSA (SoftwAre Log Structured Array) implements the above ideas, following a log-structured architecture [36, 47]. We envision SALSA as the backend of a software-defined storage system, where it manages a shared storage pool and can be configured to use workload-specific policies for each application using the storage. In this paper, we focus on the case where SALSA runs unmodified applications by exposing a Linux block device that can either be used directly or be mounted by a traditional Linux filesystem. The contribution of our work is a novel host TL architecture and implementation that supports different media and allows optimizing for different objectives. Specifically:

• SALSA achieves substantial performance and durability benefits by implementing the TL on the host for single- and multi-device setups. When deploying MySQL database containers on commodity SSDs, SALSA outperforms the raw device by 1.7× on one SSD and by 35.4× on a software RAID-5 array.

• SALSA makes efficient use of storage by allowing application-specific policies. We present a SALSA policy tailored to the Swift object store [59] on SMRs that outperforms the raw device by up to 6.3×.

• SALSA decouples space management from storage policy. This enables SALSA to accommodate different applications, each with its own policy, using the same storage pool. This allows running MySQL and Swift on the same storage with high performance and low overhead.

• SALSA embraces storage diversity by supporting multiple types of devices. We show how we combine SMRs and SSDs to speed up file retrieval for a video server workload (where adding an SSD improves read performance by 19.1×), without modifying the application.
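To make the indirection that all of these optimizations build on concrete, the sketch below shows the core move of any log-structured TL: every write is an out-of-place append, and a logical-to-physical (L2P) table redirects reads. This is an illustrative toy under simplified assumptions (hypothetical tl_write/tl_read names, an in-memory "medium", toy capacities), not SALSA's actual code:

```c
/* log_tl.c -- toy log-structured translation layer (illustration only).
 * Every write is an out-of-place append; a logical-to-physical (L2P)
 * table provides the indirection. Names and sizes are hypothetical. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define LOGICAL_BLOCKS  8
#define PHYSICAL_BLOCKS 16        /* extra blocks = overprovisioning */
#define UNMAPPED        UINT32_MAX

static uint32_t l2p[LOGICAL_BLOCKS];     /* logical -> physical map */
static char media[PHYSICAL_BLOCKS][64];  /* stand-in for the medium */
static uint32_t write_head;              /* next append position */

static void tl_write(uint32_t lba, const char *data) {
    /* A real TL would trigger garbage collection here instead. */
    assert(write_head < PHYSICAL_BLOCKS);
    snprintf(media[write_head], sizeof media[0], "%s", data);
    l2p[lba] = write_head++;   /* remap; the old block becomes invalid */
}

static const char *tl_read(uint32_t lba) {
    return l2p[lba] == UNMAPPED ? NULL : media[l2p[lba]];
}

int main(void) {
    for (uint32_t i = 0; i < LOGICAL_BLOCKS; i++)
        l2p[i] = UNMAPPED;
    tl_write(3, "v1");
    tl_write(3, "v2");         /* rewrite: appended, never in-place */
    printf("lba 3 -> phys %u: %s\n", l2p[3], tl_read(3));
    return 0;
}
```

Everything else a TL does (garbage collection, wear leveling, multi-device placement) exists to manage the consequences of this append-only discipline.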
The remainder of the paper is organized as follows. We start with a brief overview of idiosyncratic storage and our motivation behind SALSA (§2). We continue with a description of the design of SALSA (§3), discuss how we satisfy specific application workload requirements (§4), and evaluate our approach (§5). Finally, we discuss related work (§6) and conclude (§7).

2 Background and Motivation

In this section, we provide a brief background on Flash-based SSDs (§2.1) and Shingled Magnetic Recording (SMR) disks (§2.2), analyze the limitations of commodity drive TLs (§2.3), and argue for a unified host TL architecture (§2.4).

2.1 Flash-based SSDs

Flash memory fits well in the gap between DRAM and spinning disks: it offers low-latency random accesses compared to disks, at a significantly lower cost than DRAM. As a result, its adoption is constantly increasing in the data center [25, 51, 65], where it is primarily deployed in the form of SSDs. Nevertheless, Flash has unique characteristics that complicate its use [3]. First, writes are significantly more involved than reads. NAND Flash memory is organized in pages, and a page needs to be erased before it can be programmed (i.e., set to a new value). Not only is programming a page much slower than reading it, but the erase operation also needs to be performed in blocks of (hundreds or even thousands of) pages. Therefore, writes cannot be done in-place, and they involve a high cost, as block erasures are two orders of magnitude slower than reading or programming a page. Second, each Flash cell can only sustain a finite number of erase cycles before it wears out and becomes unusable.

Flash translation layers (FTLs) [32] are introduced to address the above issues. In general, an FTL performs writes out-of-place and maintains a mapping between logical and physical addresses. When space runs out, invalid data are garbage collected, and valid data are relocated to free blocks. To aid the garbage collection (GC) process, controllers keep a part of the drive's capacity hidden from the user (overprovisioning). The more space that is overprovisioned, the better the GC performs.
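Why overprovisioning helps can be seen with a standard back-of-the-envelope model; the following is our illustration, not an analysis from the paper. Assume uniform random user writes in steady state, and assume a GC victim block holds the average fraction of valid pages (pessimistic for greedy victim selection). Let α be the fraction of the physical capacity exported to the user. Reclaiming a block of B pages then rewrites αB still-valid pages to free only (1 − α)B pages for new user data, so the write amplification is

    WA = (pages written to Flash) / (pages written by the user) ≈ 1 / (1 − α),  where α = exported capacity / physical capacity.

For example, 20% overprovisioning gives α ≈ 0.83 and WA ≈ 6, while 7% gives α ≈ 0.93 and WA ≈ 15: modest extra spare capacity reduces GC traffic and Flash wear superlinearly.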
2.2 SMR disks

Magnetic disks remain the medium of choice for many applications [6], mainly due to low cost. However, magnetic recording is reaching its density scaling limits. To increase disk capacity, a number of techniques have been proposed, one of which, Shingled Magnetic Recording (SMR) [16, 69], has recently become widely available [20, 52]. SMRs gain density by precluding random updates. The density improvement is achieved by reducing the track width, thereby fitting more tracks on the surface of a platter, without reducing the write head size. As a result, while a track can be read as in conventional disks, it cannot be re-written without damaging adjacent tracks.

SMR disks are typically organized into zones. Zones are isolated from each other by guards, so that writes to a zone do not interfere with tracks in other zones. All writes within a zone must be done in strict sequential order; however, multiple zones can be written to independently. These zones are called sequential zones. Drives also typically include a small number of conventional zones, where random updates are allowed. Three categories of SMR drives exist based on where the TL is placed: drive-managed (DM), host-managed (HM), and host-aware (HA).

[Figure: bar chart with a "Wear-out" axis (recoverable values: 439, 10.6, 4.8); the caption and the beginning of §2.3 were lost in extraction.]

[...] patterns, only sustaining about 16MiB/s for 4KiB blocks, and 33MiB/s for 1MiB blocks. Interestingly, a block size of 1MiB is not large enough to bring the write performance of the drive to its ideal level; block sizes closer to the GiB level are required instead, which better reflects the native block size of modern dense Flash SSDs [8].
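Returning to the zone interface of §2.2, the sketch below models the sequential-zone rule: each zone keeps a write pointer, writes are accepted only exactly at that pointer, and space is reclaimed by resetting (and thus rewriting) a whole zone. This is our toy model with hypothetical names; real host-managed drives expose zones through interfaces such as ZBC/ZAC rather than this API.

```c
/* smr_zone.c -- toy model of an SMR sequential zone (illustration only).
 * Writes must land exactly on the zone's write pointer; anything else
 * would damage adjacent shingled tracks and is rejected. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ZONE_BLOCKS 256

struct zone {
    uint32_t write_ptr;    /* next block that may be written */
};

static bool zone_write(struct zone *z, uint32_t block) {
    if (block >= ZONE_BLOCKS || block != z->write_ptr)
        return false;      /* non-sequential write: rejected */
    z->write_ptr++;        /* the zone only ever grows forward */
    return true;
}

static void zone_reset(struct zone *z) {
    z->write_ptr = 0;      /* space is reclaimed a whole zone at a time */
}

int main(void) {
    struct zone z = { .write_ptr = 0 };
    printf("write block 0: %s\n", zone_write(&z, 0) ? "ok" : "rejected");
    printf("write block 5: %s\n", zone_write(&z, 5) ? "ok" : "rejected");
    zone_reset(&z);        /* after reset, writes restart from block 0 */
    return 0;
}
```

A host TL over such zones therefore has the same job as an FTL: absorb random logical writes into strictly sequential appends, and garbage-collect zones instead of Flash blocks.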