Contributed Articles

DOI:10.1145/3286588

Programmable Solid-State Storage in Future Cloud Datacenters

Programmable software-defined solid-state drives can move computing functions closer to storage.

BY JAEYOUNG DO, SUDIPTA SENGUPTA, AND STEVEN SWANSON

There is a major disconnect today in cloud datacenters concerning the speed of innovation between application/operating system (OS) and storage infrastructures. Application/OS software is patched with new/improved functionality every few weeks at "cloud speed," while storage devices are off-limits for such sustained innovation during their hardware life cycle of three to five years in datacenters. Since the software inside the storage device is written by storage vendors as proprietary firmware that is not open for general application developers to modify, developers are stuck with a device whose functionality and capabilities are frozen in time, even though many of them are modifiable in software. A period of five years is almost eternal in the cloud computing industry, where new features, platforms, and application program interfaces (APIs) evolve every couple of months and application-demanded requirements on the storage system grow quickly over time. This notable lag in the adaptability and velocity of the storage infrastructure may ultimately affect the ability to innovate throughout the cloud world.

In this article, we advocate creating a software-defined storage substrate of solid-state drives (SSDs) that are as programmable, agile, and flexible as the applications/OS accessing them from servers in cloud datacenters. A fully programmable storage substrate promises opportunities to better bridge the gap between application/OS needs and storage capabilities/limitations, while allowing application developers to innovate in-house at cloud speed.

The move toward software-defined control for IO devices and co-processors has played out before in the datacenter. Both GPUs and network interface cards (NICs) started as black-box devices that provide acceleration for CPU-intensive operations (such as graphics and packet processing). Internally, they implemented acceleration features with a combination of specialized hardware and proprietary firmware. As customers demanded greater flexibility, vendors slowly exposed programmability to the rest of the system, unleashing the vast processing power available from GPUs and a new level of agility in how systems can manage networks for enhanced functionality like more granular traffic management, security, and deep-network telemetry.

Storage is at the cusp of a similar transformation. Modern SSDs rely on sophisticated processing engines running complex firmware, and vendors already provide customized firmware builds for cloud operators. Exposing this programmability through easily accessible interfaces will let storage systems in cloud datacenters adapt to rapidly changing requirements on the fly.

Key Insights
˽ A fully programmable storage substrate in cloud datacenters opens up new opportunities to innovate the storage infrastructure at cloud speed.
˽ In-storage programming is becoming increasingly easier with powerful processing capabilities and highly flexible development environments.
˽ New value propositions with the programmable storage substrate can be realized, such as customizing the storage interface, moving compute close to data, and performing secure computations.

Storage Trends
The amount of data being generated daily is growing exponentially, placing more and more processing demand on datacenters. According to a 2017 marketing-trend report from IBM,[a] 90% of the data in the world in 2016 had been created in the preceding 12 months, that is, during 2015. Such large-scale datasets, which generally range from tens of terabytes to multiple petabytes, present challenges of extreme scale while demanding very fast and efficient data processing: a storage infrastructure with high throughput and low latency is necessary. This trend has resulted in growing interest in the aggressive use of SSDs, which, compared with traditional spinning hard disk drives (HDDs), provide orders-of-magnitude lower latency and higher throughput. In addition to these performance benefits, the advent of new technologies (such as 3D NAND enabling much denser chips, and quad-level cell, or QLC, for bulk storage) allows SSDs to continue to scale significantly in capacity and to yield a huge reduction in price.

[a] https://ibm.co/2XNvHPk

There are two key components in SSDs,[4] as shown in Figure 1: an SSD controller and flash storage media. The controller, most commonly implemented as a system-on-a-chip (SoC), is designed to manage the underlying storage media. For example, SSDs built using NAND flash memory have unique characteristics in that data can be written only to an empty memory location (no in-place updates are allowed), and memory can endure only a limited number of writes before it can no longer be read. Therefore, the controller must be able to perform background management tasks such as garbage collection, which reclaims flash blocks containing invalid data to create available space, and wear leveling, which distributes writes evenly across all flash blocks to extend the SSD's life. These tasks are, in general, implemented by proprietary firmware running on one or more embedded processor cores in the controller. In enterprise SSDs, SRAM is often used for executing the SSD firmware, and both user data and internal SSD metadata are cached in external DRAM.

[Figure 1. Internal architecture of a modern flash SSD: a host interface and an SSD controller containing an embedded processor, SRAM, a DRAM controller, and flash controllers, attached to external DRAM and to flash storage media organized into flash channels.]
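To make these background tasks concrete, the following is a minimal C sketch of the kind of bookkeeping involved: a garbage collector that picks the block holding the least live data for reclamation, and a wear leveler that steers new writes to the least-erased free block. The toy geometry, policies, and function names are illustrative assumptions and do not correspond to any vendor's firmware.

    /* Minimal sketch of the background tasks an SSD controller performs.
     * Illustrative only: block counts, policies, and names are assumptions,
     * not any vendor's firmware. Build with: cc -std=c99 gc_sketch.c */
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_BLOCKS      8     /* toy geometry */
    #define PAGES_PER_BLOCK 64

    struct flash_block {
        uint32_t valid_pages;   /* pages still holding live data        */
        uint32_t erase_count;   /* wear indicator for this block        */
        int      free;          /* 1 if erased and ready for new writes */
    };

    static struct flash_block blocks[NUM_BLOCKS];

    /* Garbage collection: pick the block with the fewest valid pages so the
     * least live data has to be copied out before the block is erased. */
    static int pick_gc_victim(void) {
        int victim = -1;
        uint32_t best = PAGES_PER_BLOCK + 1;
        for (int i = 0; i < NUM_BLOCKS; i++) {
            if (!blocks[i].free && blocks[i].valid_pages < best) {
                best = blocks[i].valid_pages;
                victim = i;
            }
        }
        return victim;
    }

    /* Wear leveling: steer the next write to the erased block with the lowest
     * erase count so writes spread evenly across the flash. */
    static int pick_write_target(void) {
        int target = -1;
        uint32_t least_worn = UINT32_MAX;
        for (int i = 0; i < NUM_BLOCKS; i++) {
            if (blocks[i].free && blocks[i].erase_count < least_worn) {
                least_worn = blocks[i].erase_count;
                target = i;
            }
        }
        return target;
    }

    int main(void) {
        /* Fake a mostly full device: block 3 holds little live data. */
        for (int i = 0; i < NUM_BLOCKS; i++) {
            blocks[i].valid_pages = (i == 3) ? 5 : 60;
            blocks[i].erase_count = 100 + (uint32_t)i;
            blocks[i].free = (i == 7);          /* one spare, erased block */
        }
        int victim = pick_gc_victim();
        int target = pick_write_target();
        printf("GC victim: block %d (%u valid pages to relocate)\n",
               victim, (unsigned)blocks[victim].valid_pages);
        printf("Next writes go to block %d (erase count %u)\n",
               target, (unsigned)blocks[target].erase_count);
        /* Real firmware would now copy the valid pages to the target block,
         * erase the victim, and update the logical-to-physical mapping. */
        return 0;
    }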
Interestingly, SSDs generally have a far larger aggregate internal bandwidth than the bandwidth supported by host I/O interfaces (such as SAS and PCIe). Figure 2 outlines an example of a conventional storage system that leverages a plurality of NVM Express (NVMe)[b] SSDs; 64 of them are connected to 16 PCIe switches that are mounted to a host machine via 16 lanes of PCIe Gen3. While this storage architecture provides a commodity solution for a large, high-capacity storage server at low cost (compared to building a specialized server that directly attaches all SSDs to the motherboard of the host), the maximum throughput is limited to the 16-lane PCIe interface speed (see Figure 2a), which is approximately 16GB/sec, regardless of the number of SSDs accessed in parallel. There is thus an 8x throughput gap between the host interface and the total aggregated SSD bandwidth, which could be up to roughly ~2GB/sec per SSD[c] X 64 SSDs = ~128GB/sec (see Figure 2b).

More interestingly, this gap grows further if the internal SSD performance is considered. A modern enterprise-level SSD usually consists of 16 or 32 flash channels, as outlined in Figure 2. Since each flash channel can keep up with ~500MB/sec, internally each SSD can deliver up to ~500MB/sec per channel X 32 channels = ~16GB/sec (see Figure 2d), and the total aggregated in-SSD performance would be ~16GB/sec per SSD X 64 SSDs = ~1TB/sec (see Figure 2c), a 66x gap. Making SSDs programmable would thus allow systems to fully leverage this abundant bandwidth.

[Figure 2. Example conventional storage server architecture with multiple NVMe SSDs: 64 flash SSDs attached through 16 PCIe switches to the host CPU and RAM. Annotations: (a) 16 lanes of PCIe = ~16 GB/s; (b) 64 SSDs X ~2 GB/s = ~128 GB/s, a throughput gap of 8x; (c) 64 SSDs X ~16 GB/s = ~1 TB/s, a throughput gap of 66x; (d) 32 channels X ~500 MB/s = ~16 GB/s per SSD.]

[b] A device interface for accessing non-volatile memory attached via a PCI Express (PCIe) bus.
[c] Practical sequential-read bandwidth of a commodity SSD.

In-Storage Programming
Modern SSDs combine processing (an embedded processor) and storage components (SRAM, DRAM, and flash memory) to carry out the routine functions required for managing the SSD. These computing resources present interesting opportunities to run general user-defined programs. In 2013, Do et al.[6,17] explored such opportunities for the first time in the context of running selected database operations inside a Samsung SAS flash SSD. They wrote simple selection and aggregation operators that were compiled into the SSD firmware and extended the execution framework of Microsoft SQL Server 2012 to develop a working prototype in which simple selection and aggregation queries could be run end-to-end.
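To give a feel for what such an offloaded operator might look like, here is a hedged C sketch that scans flash pages inside the drive, applies a selection predicate, and returns only a small aggregate to the host. The record layout, the read_flash_page() hook, and the predicate are hypothetical stand-ins for firmware-specific interfaces; this is not the code of Do et al. or of SQL Server.

    /* Sketch of an in-SSD selection + aggregation operator. Hypothetical:
     * the record layout and read_flash_page() are stand-ins for whatever a
     * programmable SSD's firmware would expose. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE     8192
    #define ROWS_PER_PAGE (PAGE_SIZE / sizeof(struct record))

    struct record {            /* fixed-size row as laid out in a flash page */
        uint32_t key;
        uint32_t amount;
    };

    struct agg_result {        /* only this tiny result crosses the host bus */
        uint64_t matching_rows;
        uint64_t sum_amount;
    };

    /* Stand-in for the firmware routine that reads one flash page into SRAM.
     * It fabricates deterministic rows so the sketch runs on a normal host. */
    static void read_flash_page(uint32_t page_no, struct record *rows) {
        for (uint32_t i = 0; i < ROWS_PER_PAGE; i++) {
            rows[i].key = page_no * (uint32_t)ROWS_PER_PAGE + i;
            rows[i].amount = (rows[i].key * 2654435761u) % 1000;
        }
    }

    /* Scan a page range inside the SSD, keep rows with amount > threshold
     * (selection), and fold them into running totals (aggregation). */
    static struct agg_result scan_and_aggregate(uint32_t first_page,
                                                uint32_t page_count,
                                                uint32_t threshold) {
        static struct record page_buf[ROWS_PER_PAGE];   /* reused page buffer */
        struct agg_result r = {0, 0};

        for (uint32_t p = first_page; p < first_page + page_count; p++) {
            read_flash_page(p, page_buf);
            for (uint32_t i = 0; i < ROWS_PER_PAGE; i++) {
                if (page_buf[i].amount > threshold) {
                    r.matching_rows++;
                    r.sum_amount += page_buf[i].amount;
                }
            }
        }
        return r;
    }

    int main(void) {
        struct agg_result r = scan_and_aggregate(0, 16, 900);
        printf("matching rows: %llu, sum(amount): %llu\n",
               (unsigned long long)r.matching_rows,
               (unsigned long long)r.sum_amount);
        return 0;
    }

The point of the exercise is that only the few bytes of the aggregate result need to cross the host interface, while the page scans stay behind the flash channels.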
That work demonstrated severalfold improvements in performance and energy efficiency by offloading database operations onto the SSD, and it highlighted a number of challenges that would need to be overcome to broadly adopt programmable SSDs. First, the computing capabilities available inside the SSD are limited by design. The low-performance embedded processor inside the SSD, which lacks L1/L2 caches, and the high latency to the in-SSD DRAM require extra-careful programming to run user code in the SSD without producing a performance bottleneck.

Moreover, the embedded software-development process is complex and makes programming and debugging very challenging. To maximize performance, Do et al. had to carefully plan the layout of the data structures used by the code running inside the SSD to avoid spilling out of the SRAM. Likewise, Do et al. used a hardware-debugging tool to debug programs running inside the SSD.
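One defensive pattern against the SRAM-spill problem just described is to make the operator's working set a compile-time constraint. The short C sketch below assumes a hypothetical 256KB SRAM scratch budget and an invented scan_state structure; a real controller would size and place such buffers according to the vendor's memory map and linker script.

    /* Sketch of budgeting an in-SSD operator's working set against SRAM.
     * The 256KB budget and the scan_state layout are illustrative assumptions,
     * not a real controller's memory map. Build with: cc -std=c11 sram_budget.c */
    #include <stdint.h>

    #define SRAM_SCRATCH_BYTES (256 * 1024)   /* assumed scratch space in SRAM */

    /* Everything the scan loop touches per iteration lives in one structure so
     * its total footprint can be checked once, at compile time. */
    struct scan_state {
        uint8_t  page_buf[8192];      /* current flash page (8KB)        */
        uint32_t group_table[16384];  /* small aggregation table (64KB)  */
        uint64_t partial_sum;
        uint64_t row_count;
    };

    /* If a later change grows scan_state past the budget, the build fails here
     * instead of the code silently spilling into slow in-SSD DRAM at run time. */
    _Static_assert(sizeof(struct scan_state) <= SRAM_SCRATCH_BYTES,
                   "scan_state no longer fits in the assumed SRAM scratch budget");

    /* Firmware would pin this in SRAM via the linker script; on a host build it
     * is ordinary static storage, which is enough to enforce the size check. */
    static struct scan_state state;

    int main(void) {
        state.partial_sum = 0;
        state.row_count   = 0;
        return (int)state.row_count;
    }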
