Solid-State Drive Caching with Differentiated Storage Services
IT@Intel White Paper | Intel IT | IT Best Practices | Storage and Solid-State Drives | July 2012

EXECUTIVE OVERVIEW

Intel IT, in collaboration with Intel Labs, is evaluating a new storage technology called Differentiated Storage Services (DSS).¹ When combined with Intel® Solid-State Drives used as disk caches, we found that DSS can improve the performance and cost effectiveness of disk caching.

DSS uses intelligent I/O classification to enable storage systems to enforce per-class quality-of-service (QoS) policies on data, doing so at the granularity of blocks—the smallest unit of storage in a disk drive. By classifying I/O, DSS supports the intelligent prioritization of I/O requests, without sacrificing storage system interoperability. And when applied to caching, DSS enables new selective cache allocation and eviction algorithms.

In our tests, we compared DSS-based selective caching to equivalently configured systems that used the least recently used (LRU) caching algorithm and to systems that used no SSD cache at all. Our test results indicate that DSS can significantly improve application throughput—resulting in a positive financial impact on storage costs, in terms of cost per megabyte (MB) of each I/O operation per second (IOPS) delivered by the storage system; that is, U.S. dollars per MB per IOPS.

• DSS-based caching was capable of processing 1.5x to 6.8x more IOPS, depending on the workload and on whether the HDD was 7200 revolutions per minute (RPM) or 15K RPM.
• 7200 RPM HDDs with DSS caching outperformed 15K RPM HDDs without disk cache and 15K RPM HDDs with LRU.
• DSS caching reduced the cost per MB per IOPS by 66 percent compared to LRU-based caching, depending on the workload.

In our tests, DSS was tuned for general-purpose file system workloads, where throughput is primarily limited by access to small files and their associated metadata. However, other types of workloads, such as databases, hypervisors, and big data, can also benefit from DSS by specifying application-specific DSS caching policies. For example, a database can specify caching policies for specific database queries. Intel Labs is exploring these and other potential applications of DSS technology.

¹ Mesnier, Michael, Jason Akers, Feng Chen, and Tian Luo. "Differentiated Storage Services." 23rd ACM Symposium on Operating Systems Principles (SOSP), October 2011.

Christian Black, Enterprise Architect, Intel IT
Michael Mesnier, Research Scientist, Intel Labs
Terry Yoshii, Enterprise Architect, Intel IT
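The financial metric used throughout this paper, U.S. dollars per MB per IOPS, is a simple ratio. A sketch of the calculation in Python; the prices and throughput figures below are hypothetical placeholders, not the values measured in our tests:

```python
def cost_per_mb_per_iops(total_cost_usd, capacity_mb, iops):
    """Storage cost normalized by capacity and throughput.

    Lower is better: a configuration is more cost effective if it
    delivers more IOPS per MB for each dollar spent.
    """
    return total_cost_usd / capacity_mb / iops

# Hypothetical example: the same array with and without an SSD cache.
hdd_only = cost_per_mb_per_iops(total_cost_usd=10_000,
                                capacity_mb=2_000_000, iops=400)
with_ssd_cache = cost_per_mb_per_iops(total_cost_usd=12_000,
                                      capacity_mb=2_000_000, iops=1_600)

# The cached system costs 20 percent more but delivers 4x the IOPS,
# so its cost per MB per IOPS is lower.
assert with_ssd_cache < hdd_only
```

In this made-up example the cached configuration wins because the IOPS gain outpaces the added SSD cost, which is the same trade-off the financial analysis in this paper quantifies.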
CONTENTS

Executive Overview
Background
Prioritizing I/O with Differentiated Storage Services
Evaluating Caching with Differentiated Storage Services
Test Methodology and System Configuration
Performance Analysis Results
Financial Analysis Results
Conclusion
Related Reading
Acronyms

IT@INTEL

The IT@Intel program connects IT professionals around the world with their peers inside our organization – sharing lessons learned, methods, and strategies. Our goal is simple: share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/IT or contact your local Intel representative if you'd like to learn more.

BACKGROUND

As demand for data storage capacity at Intel continues to increase by about 35 percent annually, Intel IT—like the rest of the IT industry—is facing challenges related to data storage and caching.

• Balancing throughput and cost. Higher capacity utilization of a storage disk typically leads to decreased performance because of increased disk head movement. To avoid these performance issues, it is often tempting to purchase new disks before the old ones are completely filled. However, this solution, referred to as short-stroking, is not cost effective because it reduces effective utilization and increases storage costs.
• Boosting storage performance. Although processing capability—in particular, the number of cores per processor—has increased dramatically over the past several years, hard disk drive (HDD) storage performance has remained relatively flat. HDDs take milliseconds to access data, whereas processor performance can be measured in nanoseconds—a million times faster. Although a DRAM cache adequately supports large storage arrays, it cannot persistently store data without an expensive battery backup system. Additionally, the high cost of DRAM is a limiting factor in large-scale deployment.

High-performance solid-state drives (SSDs) that use flash memory technology support persistent storage and faster data access at a lower cost than DRAM, thereby addressing the storage performance problem. But because the high cost of SSDs has prevented the mass replacement of HDDs, some form of caching is typically used, where a limited number of SSDs are used as a cache in front of conventional HDDs.

Caching is a common method of improving I/O performance in a storage system. The effectiveness of caching is largely based on the type of caching algorithm implemented and the performance of the physical caching device. There are many conventional caching algorithms, a few of which are described in Table 1.

Table 1. Popular Caching Algorithms

Least Recently Used (LRU): Discards the least recently used items first, based on age-bits that track what data was used and when it was used.

Segmented LRU (SLRU): Divides the cache into a probationary segment and a protected segment. Within each segment, cache lines are ordered from the most to the least recently accessed. Data from misses are added to the probationary segment, while hits are moved to the most recently used end of the protected segment.

Most Recently Used (MRU): Discards the most recently used items first. MRU algorithms are most useful in situations where older items are more likely to be accessed, such as when a file—particularly a large file—is being repeatedly scanned, and for random access patterns.
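The LRU policy from Table 1 can be sketched in a few lines of Python; this is an illustrative toy cache, not the implementation used in the storage systems we tested:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: evicts the least recently accessed item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # oldest entry first, newest last

    def get(self, key):
        if key not in self.items:
            return None                   # cache miss
        self.items.move_to_end(key)       # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # discard least recently used

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")      # touching "a" makes "b" the least recently used
cache.put("c", 3)   # cache is full, so "b" is evicted
assert cache.get("b") is None and cache.get("a") == 1
```

SLRU and MRU differ only in how the victim is chosen: SLRU maintains two such ordered segments, and MRU evicts from the newest end instead of the oldest.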
Caching involves augmenting slower storage access with a small amount of high-speed storage that stores latency-sensitive data, effectively increasing the number of I/O operations per second (IOPS) produced by a given storage system. However, when the cache is under pressure, the caching algorithm may be unable to keep the correct data cached and instead may thrash—causing it to continuously exchange old data with new data, with the new data providing little or no value to the system.

Much of this thrash occurs because conventional storage (block-level) caches do not adequately prioritize I/O in the storage system—and this is because today's block-based storage protocols consist of two basic commands: READ and WRITE. This API abstracts away useful information about what is being read or written, such as filenames, file sizes, metadata structures, application context, and user and process information. While the block protocol enables interoperability across storage systems, the intelligent prioritization of I/O requests is extremely difficult.
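The thrashing behavior described above is easy to reproduce in a toy simulation: when a workload repeatedly scans slightly more blocks than an LRU cache can hold, each block is evicted just before it is re-read, and the hit rate collapses to zero. The cache and working-set sizes below are arbitrary:

```python
from collections import OrderedDict

def lru_hit_rate(accesses, capacity):
    """Simulate an LRU block cache over an access trace; return hit rate."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)      # refresh recency on a hit
        else:
            cache[block] = True           # fill on a miss
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

# Repeatedly scanning 100 blocks through a 90-block cache: LRU evicts
# every block shortly before it is needed again, so it never hits.
scan = list(range(100)) * 5
assert lru_hit_rate(scan, capacity=90) == 0.0

# A cache that holds the whole working set hits on every re-read.
assert lru_hit_rate(scan, capacity=100) == 0.8
```

Growing the cache by just ten blocks moves the hit rate from 0 to 80 percent, which is why cache sizing and, as the next section argues, cache prioritization matter so much.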
PRIORITIZING I/O WITH DIFFERENTIATED STORAGE SERVICES

Intel IT, working with Intel Labs, is now evaluating Differentiated Storage Services (DSS)—a new I/O classification technology that addresses many of the limitations of conventional caching algorithms. DSS enables policy-based caching, whereby different caching policies are assigned to different classes of data. DSS is a storage (block-level) caching technology that is currently being applied in storage systems.

DSS uses I/O classification to improve the performance—and therefore the cost effectiveness—of data caching, going far beyond the capabilities of the Segmented Least Recently Used caching algorithm described in Table 1. By using I/O classification, DSS can enable storage systems to enforce per-class quality-of-service (QoS) policies, such as I/O priorities.

The specific I/O classification scheme depends on the origin of the I/O request. For example, the request could come from the file system, OS, database, hypervisor, or cloud storage server. The storage system can determine the class of each block-level I/O request, and different QoS policies can be associated with each class of data. Policies may vary across storage system vendors; in our tests we evaluated storage system policies based on cache priority levels.

• Selective eviction. Evicts in priority order (lowest priority first). For example, temporary database tables would be evicted before system tables.

Before the storage system is brought online, the computer system must inform it of the various classes of data and their associated policies. For example, any I/O classified as metadata might be assigned a high-priority caching policy. The table on the left in Figure 1 lists sample I/O classes and their assigned policies. By separating policy from the classification mechanism in this manner, DSS enables interoperability across various storage systems, while still supporting the intelligent
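As an illustration, selective eviction can be sketched as a cache that chooses its victim from the lowest-priority class first, falling back to LRU order within a class. The class names and priority values here are hypothetical examples, not DSS's actual classification scheme:

```python
from collections import OrderedDict

class PriorityCache:
    """Toy block cache with per-class priorities: evict the lowest-priority
    class first, LRU within a class (a sketch of selective eviction, not
    the DSS design itself)."""

    def __init__(self, capacity, class_priority):
        self.capacity = capacity
        self.class_priority = class_priority  # e.g. {"metadata": 2, ...}
        self.items = OrderedDict()            # key -> (io_class, value), LRU order

    def put(self, key, value, io_class):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = (io_class, value)
        if len(self.items) > self.capacity:
            # Victim: the oldest entry of the lowest-priority class present.
            # min() scans in LRU order, so ties resolve to the oldest entry.
            victim = min(self.items,
                         key=lambda k: self.class_priority[self.items[k][0]])
            del self.items[victim]

# Hypothetical policy: system tables outrank temporary tables.
cache = PriorityCache(capacity=2,
                      class_priority={"system_table": 2, "temp_table": 0})
cache.put("sys1", "...", "system_table")
cache.put("tmp1", "...", "temp_table")
cache.put("sys2", "...", "system_table")  # evicts tmp1, not the older sys1
assert "tmp1" not in cache.items and "sys1" in cache.items
```

Under plain LRU, the third insertion would have evicted the system-table block "sys1" simply because it was oldest; class priorities let the cache sacrifice low-value temporary data instead.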