SlimCache: Exploiting Data Compression Opportunities in Flash-Based Key-Value Caching

Total Pages: 16

File Type: PDF, Size: 1020 KB

Yichen Jia (Computer Science and Engineering, Louisiana State University), Zili Shao (Computer Science and Engineering, The Chinese University of Hong Kong), Feng Chen (Computer Science and Engineering, Louisiana State University)
[email protected], [email protected], [email protected]

Abstract—Flash-based key-value caching is becoming popular in data centers for providing high-speed key-value services. These systems adopt slab-based space management on flash and provide a low-cost solution for key-value caching. However, optimizing cache efficiency for flash-based key-value cache systems is highly challenging, due to the huge number of key-value items and the unique technical constraints of flash devices. In this paper, we present a dynamic on-line compression scheme, called SlimCache, to improve the cache hit ratio by virtually expanding the usable cache space through data compression. We have investigated the effect of compression granularity to achieve a balance between compression ratio and speed, and leveraged the unique workload characteristics of key-value systems to efficiently identify and separate hot and cold data. In order to dynamically adapt to workload changes during runtime, we have designed an adaptive hot/cold area partitioning method based on a cost model. In order to avoid unnecessary compression, SlimCache also estimates data compressibility to determine whether the data are suitable for compression or not. We have implemented a prototype based on Twitter's Fatcache. Our experimental results show that SlimCache can accommodate up to 125.9% more key-value items in flash, increasing throughput by up to 255.6% and reducing average latency by up to 78.9%.

I. INTRODUCTION

Today's data centers still heavily rely on hard disk drives (HDDs) as their main storage devices. To address the performance problems of disk drives, especially for handling random accesses, in-memory key-value cache systems, such as Memcached [37], have become popular in data centers for serving various applications [20], [48]. Although memory-based key-value caches can eliminate a large amount of key-value data retrievals (e.g., "User ID" and "User Name") from the back-end data stores, they also raise concerns about high cost and power consumption in a large-scale deployment. As an alternative solution, flash-based key-value cache systems have recently attracted increasing interest in industry. For example, Facebook has deployed a flash-based key-value cache system, called McDipper [20], as a replacement for its expensive Memcached servers. Twitter has a similar key-value cache solution, called Fatcache [48].

A. Motivations

The traditional approach to improving caching efficiency is to develop sophisticated cache replacement algorithms [36], [26]. Unfortunately, this is highly challenging in the scenario of flash-based key-value caching, for two reasons.

First, compared to memory-based key-value caches, such as Memcached, flash-based key-value caches are usually 10-100 times larger. As key-value items are typically small (e.g., tens to hundreds of bytes), a flash-based key-value cache often needs to maintain billions of key-value items, or even more. Tracking such a huge number of small items in cache management would result in an unaffordable overhead. Also, many advanced cache replacement algorithms, such as ARC [36] and CLOCK-Pro [26], need to maintain a complex data structure and a deep access history (e.g., information about evicted data), making the overhead even more pronounced. Therefore, a complex caching scheme is practically infeasible for flash-based key-value caches.

Second, unlike DRAM, flash memories have several unique technical constraints, such as the well-known "no in-place overwrite" and "sequential-only writes" requirements [7], [15]. As such, flash devices generally favor large, sequential, log-like writes rather than small, random writes. Consequently, flash-based key-value caches do not directly "replace" small key-value items in place as Memcached does. Instead, key-value data are organized and replaced in large, coarse-grained chunks, relying on Garbage Collection (GC) to recycle the space occupied by obsolete or deleted data. This unfortunately further reduces the usable cache space and hurts caching efficiency.

For these two reasons, it is difficult to improve the cache hit ratio of key-value caching in flash solely by developing a complicated, fine-grained cache replacement algorithm. In fact, real-world flash-based key-value cache systems often adopt a simple, coarse-grained caching scheme. For example, Twitter's Fatcache uses a First-In-First-Out (FIFO) policy to manage its cache at the large granularity of slabs (a group of key-value items) [48]. Such a design, we should note, is an unwillingly-made but necessary compromise to fit the needs of caching many small key-value items in flash.

This paper seeks an alternative solution to improve the cache hit ratio, one that is often ignored in practice: increasing the effective cache size. The key idea is that for a given cache capacity, the data can be compressed to save space, which "virtually" enlarges the usable cache space, allows us to accommodate more data in the flash cache, and in turn increases the hit ratio.

In fact, on-line compression fits flash devices very well. Figure 1 shows the percentage of I/O and computation time for compressing and decompressing random data at different request sizes. The figure illustrates that for read requests, the decompression overhead contributes only a relatively small portion of the total time, less than 2% for requests smaller than 64KB. For write requests, the compression operations are more computationally expensive, contributing about 10%-30% of the overall time, but this is still on the same order of magnitude as an I/O access to flash.

[Fig. 1: I/O time vs. computation time. Fig. 2: Compression ratio vs. granularity. Fig. 3: Distribution of item sizes.]

Compared to schemes that compress data in memory, such as zExpander [54], the relative computing overhead accounts for an even smaller percentage, indicating that it is feasible to apply on-line compression in flash-based caches.
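To make the comparison above concrete, the following sketch times a lightweight codec on a single buffer. It is a minimal illustration, not the paper's benchmark: it assumes liblz4 is available (build with -llz4) and uses a 64KB buffer of synthetic, mildly compressible data.

/* Microbenchmark sketch: time lz4 compression/decompression of one buffer.
   Illustrative only; not the setup behind Figure 1. */
#include <lz4.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    const int size = 64 * 1024;                /* one 64KB request */
    char *src  = malloc(size);
    char *dst  = malloc(LZ4_compressBound(size));
    char *back = malloc(size);
    for (int i = 0; i < size; i++)             /* mildly compressible input */
        src[i] = (char)(rand() % 16);

    double t0 = now_sec();
    int clen = LZ4_compress_default(src, dst, size, LZ4_compressBound(size));
    double t1 = now_sec();
    LZ4_decompress_safe(dst, back, clen, size);
    double t2 = now_sec();

    printf("ratio %.2f, compress %.1f us, decompress %.1f us\n",
           (double)size / clen, (t1 - t0) * 1e6, (t2 - t1) * 1e6);
    free(src); free(dst); free(back);
    return 0;
}

Comparing these times against a typical flash read or write latency (tens to hundreds of microseconds) reproduces the qualitative point of Figure 1.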
B. Challenges and Critical Issues

Though promising, efficiently incorporating on-line compression in flash-based key-value cache systems is non-trivial. Several critical issues must be addressed.

First, various compression algorithms have significantly different compression efficiency and computational overhead [3], [5], [32]. Lightweight algorithms, such as lz4 [32] and snappy [3], are fast, but provide only a moderate compression ratio (i.e., uncompressed size / compressed size); heavyweight schemes, such as the deflate algorithm used in gzip [2] and zlib [5], can provide better compression efficacy, but are relatively slow and would incur higher overhead. We need to select a proper algorithm.

Second, compression efficiency is highly dependent on the compression unit size. A small unit size suffers from a low compression ratio, while an aggressively oversized compression unit incurs a severe read amplification problem (i.e., reading more than needed). Figure 2 shows the average compression ratio of three datasets (Weibo, Tweet, Reddit) with different container sizes. These three datasets are all compressible, as expected, and a larger compression granularity generally results in a higher compression ratio. In contrast, compressing each key-value item individually, or using a small compression granularity (e.g., smaller than 4 KB), cannot reduce the data size effectively.

Third, not all data are equally compressible. We need to estimate data compressibility and conditionally apply on-line compression to minimize the overhead.

Last but not least, we also need to be fully aware of the unique properties of flash devices. For example, flash devices generally favor large and sequential writes. The traditional log-based solution, though able to avoid generating small and random writes, relies on an asynchronous Garbage Collection (GC) process, which would leave a large amount of obsolete data occupying the precious cache space and negatively affect the cache hit ratio.

All these issues must be well considered for an effective adoption of compression in flash-based key-value caching.
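As a sketch of the compressibility check raised in the third issue, one simple policy is to compress a small sample of the data and skip compression when the sample does not shrink enough. The 512-byte sample size and 10% threshold below are illustrative assumptions, not SlimCache's actual parameters.

/* Conditional-compression sketch: probe a sample before compressing.
   Sample size and threshold are illustrative, not SlimCache's heuristic. */
#include <lz4.h>
#include <string.h>

int worth_compressing(const char *data, int len) {
    char sample[512], out[LZ4_COMPRESSBOUND(512)];
    int n = len < (int)sizeof(sample) ? len : (int)sizeof(sample);
    memcpy(sample, data, n);
    int clen = LZ4_compress_default(sample, out, n, (int)sizeof(out));
    /* compress only if the sample shrinks by at least ~10% */
    return clen > 0 && clen < n * 9 / 10;
}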
C. Our Solution: SlimCache

In this paper, we present an adaptive on-line compression scheme for key-value caching in flash, called SlimCache. SlimCache identifies the key-value items that are suitable for compression, applies compression and decompression at a proper granularity, and expands the effectively usable flash space for caching more data.

In SlimCache, the flash cache space is dynamically divided into two separate regions, a hot area and a cold area, to store frequently and infrequently accessed key-value items, respectively. Based on the highly skewed access patterns of key-value systems [8], the majority of key-value items, which are infrequently accessed, are cached in flash in a compressed format to save space. A small set of frequently accessed key-value items is cached in its original, uncompressed format to avoid read amplification and the decompression penalty. The partitioning is automatically adjusted based on the runtime workload. In order to create the desired large sequential write pattern on flash, the cache eviction process and the hot/cold data separation mechanism are integrated to minimize the cache space waste caused by data movement between the two areas.

To the best of our knowledge, SlimCache is the first work introducing compression into flash-based key-value caches.
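The read path of such a two-area design can be pictured as follows. This is a conceptual sketch only: the index layout, the helper functions, and the promotion rule are hypothetical illustrations, not SlimCache's or Fatcache's actual code.

/* Conceptual GET path for a hot/cold split cache. All names are
   hypothetical; the helpers are left as stubs. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    unsigned slab_id;   /* flash slab holding the item or its container */
    unsigned offset;    /* offset within that slab */
    bool     in_cold;   /* true: item lives inside a compressed container */
} index_entry;

/* Assumed helpers: look up the in-memory index, read from flash,
   decompress a cold container, and promote frequently read items. */
extern bool   index_lookup(const char *key, index_entry *e);
extern size_t read_uncompressed(const index_entry *e, char *buf, size_t cap);
extern size_t read_and_decompress(const index_entry *e, const char *key,
                                  char *buf, size_t cap);
extern void   maybe_promote(const char *key);

size_t cache_get(const char *key, char *buf, size_t cap) {
    index_entry e;
    if (!index_lookup(key, &e))
        return 0;                                  /* miss: go to back-end store */
    if (!e.in_cold)
        return read_uncompressed(&e, buf, cap);    /* hot area: no decompression */
    size_t n = read_and_decompress(&e, key, buf, cap);
    maybe_promote(key);     /* repeated accesses earn an uncompressed slot */
    return n;
}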
Recommended publications
  • Download Media Player Codec Pack Version 4.1
    Description: In Microsoft Windows 10 it is not possible to set all file associations using an installer. Microsoft chose to block changes of file associations with the introduction of their Zune players. Third-party codecs are also blocked in some instances, preventing some files from playing in the Zune players. A simple workaround for this problem is to switch playback of video and music files to Windows Media Player manually: 1. In the Start menu, click on "Settings". 2. In the "Windows Settings" window, click on "System". 3. On the "System" pane, click on "Default apps". 4. On the "Choose default applications" pane, click on "Films & TV" under "Video Player". 5. On the "Choose an application" pop-up menu, click on "Windows Media Player" to set Windows Media Player as the default player for video files. Footnote: The same method can be used to apply file associations for music, by clicking on "Groove Music" under "Media Player" instead of changing the Video Player in step 4. Media Player Codec Pack Plus. Codecs Explained: A codec is a piece of software on either a device or computer capable of encoding and/or decoding video and/or audio data from files, streams and broadcasts. The word "codec" is a portmanteau of "compressor-decompressor". Compression types that you will be able to play include: x264 | x265 | h.265 | HEVC | 10bit x265 | 10bit x264 | AVCHD | AVC | DivX | XviD | MP4 | MPEG4 | MPEG2 and many more. File types you will be able to play include: .bdmv | .evo | .hevc | .mkv | .avi | .flv | .webm | .mp4 | .m4v | .m4a | .ts | .ogm | .ac3 | .dts | .alac | .flac | .ape | .aac | .ogg | .ofr | .mpc | .3gp and many more.
  • Divx Codec Package
    Videos: How To Use DivX Mux GUI · How to Stream DivX Plus HD (MKV) files to your Xbox · more. Guides: There are no guides available. Search FAQs. Free download. Includes DivX Codec and everything you need to play DivX, AVI or MKV files in any media player. H.264 codecs compress digital video files so that they use only half the space of MPEG-2 to deliver the same quality video. An H.264 encoder delivers... Free video software downloads to play & stream DivX (AVI) & DivX Plus HD (MKV) video. Find devices to play DivX video and Hollywood movies in DivX format. You can do it all in one go and be ready for any video format that comes your way. Codec Pack All-in-1 includes: DivX; XviD Codec. Media Player Codec Pack for Microsoft Windows 10, 8.1, 8, 7, Vista, XP: x264 | h.264 | HEVC | 10bit x265 | x265 | h.265 | AVCHD | AVC | DivX | XviD. They feature improved HEVC and AVC decoders for better stability, and the DivX codec pack has been removed for consistency around the... Codec Pack All in 1, free and safe download. Codec Pack All in 1 latest version: A free Video program for Windows. Codec Pack All in 1 is a good, free Windows...
  • Installation Manual
    CX-20 Installation manual. ENABLING BRIGHT OUTCOMES. Barco NV, Beneluxpark 21, 8500 Kortrijk, Belgium. www.barco.com/en/support, www.barco.com. Registered office: Barco NV, President Kennedypark 35, 8500 Kortrijk, Belgium. Copyright © All rights reserved. No part of this document may be copied, reproduced or translated. It shall not otherwise be recorded, transmitted or stored in a retrieval system without the prior written consent of Barco. Trademarks: Brand and product names mentioned in this manual may be trademarks, registered trademarks or copyrights of their respective holders. All brand and product names mentioned in this manual serve as comments or examples and are not to be understood as advertising for the products or their manufacturers. USB Type-C™ and USB-C™ are trademarks of USB Implementers Forum. HDMI Trademark Notice: The terms HDMI, HDMI High-Definition Multimedia Interface, and the HDMI Logo are trademarks or registered trademarks of HDMI Licensing Administrator, Inc. Product Security Incident Response: As a global technology leader, Barco is committed to delivering secure solutions and services to our customers, while protecting Barco's intellectual property. When product security concerns are received, the product security incident response process is triggered immediately. To address specific security concerns or to report security issues with Barco products, please inform us via the contact details mentioned on https://www.barco.com/psirt. To protect our customers, Barco does not publicly disclose or confirm security vulnerabilities until Barco has conducted an analysis of the product and issued fixes and/or mitigations. Patent protection: Please refer to www.barco.com/about-barco/legal/patents. Guarantee and Compensation: Barco provides a guarantee relating to perfect manufacturing as part of the legally stipulated terms of guarantee.
  • Implementing Compression on Distributed Time Series Database
    Implementing compression on distributed time series database. Michael Burman, School of Science. Thesis submitted for examination for the degree of Master of Science in Technology. Espoo, 05.11.2017. Supervisor: Prof. Kari Smolander. Advisor: Mgr. Jiri Kremser. Aalto University, P.O. Box 11000, 00076 AALTO, www.aalto.fi. Abstract of the master's thesis. Author: Michael Burman. Title: Implementing compression on distributed time series database. Degree programme: Major: Computer Science. Code of major: SCI3042. Supervisor: Prof. Kari Smolander. Advisor: Mgr. Jiri Kremser. Date: 05.11.2017. Number of pages: 70+4. Language: English. Abstract: The rise of microservices and distributed applications in containerized deployments is putting an increasing burden on monitoring systems, pushing up the storage requirements needed to provide suitable performance for large queries. In this paper we present the changes we made to our distributed time series database, Hawkular-Metrics, and how it stores data more effectively in Cassandra. We show that our methods provide significant space savings, ranging from a 50% to 95% reduction in storage usage, while reducing query times by over 90% compared to the nominal approach when using Cassandra. We also provide our own algorithm, modified from the Gorilla compression algorithm, which provides almost three times the compression throughput at an equal compression ratio. Keywords: time series, compression, performance, storage. [Finnish abstract page, translated:] Aalto University, P.O. Box 11000, 00076 AALTO, www.aalto.fi. Abstract of the master's thesis (in Finnish). Author: Michael Burman. Title: Pakkausmenetelmät hajautetussa aikasarjatietokannassa (Compression methods in a distributed time series database). Degree programme: Major: Computer Science. Code of major: SCI3042. Supervisor and advisor: Prof. Kari Smolander. Date: 05.11.2017. Pages: 70+4. Language: English. Abstract: The proliferation of distributed systems has increased the amount of data in monitoring systems, as the number of time series has grown and data is stored to them more frequently.
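    The thesis builds on Facebook's Gorilla compression. As background, the sketch below shows the textbook XOR step at the heart of Gorilla-style float compression; it is not the thesis's modified algorithm. Adjacent samples in a slowly-varying metric usually differ in only a few significant bits, so XORing them yields words that are mostly zero and can be bit-packed using their leading/trailing-zero counts. The example assumes a GCC/Clang compiler for the count-zero builtins.

    /* XOR step of Gorilla-style float compression (background sketch only). */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t bits_of(double d) {
        uint64_t u;
        memcpy(&u, &d, sizeof u);   /* type-pun the double safely */
        return u;
    }

    int main(void) {
        double series[] = {23.0, 23.1, 23.1, 23.2, 23.1};  /* slowly-varying metric */
        uint64_t prev = bits_of(series[0]);
        for (size_t i = 1; i < sizeof series / sizeof *series; i++) {
            uint64_t cur = bits_of(series[i]);
            uint64_t x   = prev ^ cur;
            if (x == 0)
                puts("identical value: a real encoder stores a single '0' bit");
            else
                /* a real encoder stores only the window of meaningful bits,
                   described by the leading/trailing zero counts */
                printf("xor=%016llx leading=%d trailing=%d\n",
                       (unsigned long long)x,
                       __builtin_clzll(x), __builtin_ctzll(x));
            prev = cur;
        }
        return 0;
    }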
  • Pack, Encrypt, Authenticate Document Revision: 2021 05 02
    PEA: Pack, Encrypt, Authenticate. Document revision: 2021 05 02. Author: Giorgio Tani. Translation: Giorgio Tani. This document refers to: PEA file format specification version 1 revision 3 (1.3); PEA file format specification version 2.0; PEA 1.01 executable implementation. The present documentation is released under the GNU GFDL License. The PEA executable implementation is released under the GNU LGPL License; please note that all units provided by the Author are released under LGPL, while Wolfgang Ehrhardt's crypto library units used in PEA are released under the zlib/libpng License. The PEA file format and PCOMPRESS specifications are hereby released under PUBLIC DOMAIN: the Author neither has, nor is aware of, any patents or pending patents relevant to this technology and does not intend to apply for any patents covering it. As far as the Author knows, the PEA file format in all of its parts is free and unencumbered for all uses. Pea is on the PeaZip project official site: https://peazip.github.io , https://peazip.org , and https://peazip.sourceforge.io For more information about the licenses: GNU GFDL License, see http://www.gnu.org/licenses/fdl.txt ; GNU LGPL License, see http://www.gnu.org/licenses/lgpl.txt Contents, Section 1: PEA file format: Description; PEA 1.3 file format details; Differences between 1.3 and older revisions; PEA 2.0 file format details; PEA file format's and implementation's limitations; PCOMPRESS compression scheme; Algorithms used in PEA format; PEA security model; Cryptanalysis of PEA format; Data recovery from
  • User Commands GZIP(1): gzip, gunzip, gzcat – compress or expand files
    GZIP(1), User Commands. NAME: gzip, gunzip, gzcat – compress or expand files. SYNOPSIS: gzip [ -acdfhlLnNrtvV19 ] [ -S suffix ] [ name ... ]; gunzip [ -acfhlLnNrtvV ] [ -S suffix ] [ name ... ]; gzcat [ -fhLV ] [ name ... ]. DESCRIPTION: Gzip reduces the size of the named files using Lempel-Ziv coding (LZ77). Whenever possible, each file is replaced by one with the extension .gz, while keeping the same ownership modes, access and modification times. (The default extension is -gz for VMS, z for MSDOS, OS/2 FAT, Windows NT FAT and Atari.) If no files are specified, or if a file name is "-", the standard input is compressed to the standard output. Gzip will only attempt to compress regular files. In particular, it will ignore symbolic links. If the compressed file name is too long for its file system, gzip truncates it. Gzip attempts to truncate only the parts of the file name longer than 3 characters. (A part is delimited by dots.) If the name consists of small parts only, the longest parts are truncated. For example, if file names are limited to 14 characters, gzip.msdos.exe is compressed to gzi.msd.exe.gz. Names are not truncated on systems which do not have a limit on file name length. By default, gzip keeps the original file name and timestamp in the compressed file. These are used when decompressing the file with the -N option. This is useful when the compressed file name was truncated or when the time stamp was not preserved after a file transfer. Compressed files can be restored to their original form using gzip -d or gunzip or gzcat.
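    For programmatic use, the same gzip file format is also exposed through zlib's gzFile API. The short sketch below writes and re-reads a .gz file; it is a minimal illustration assuming zlib is installed (link with -lz), and the file name and content are arbitrary.

    /* Write and re-read a gzip file with zlib's gzFile API. */
    #include <zlib.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *text = "hello, gzip\n";

        gzFile out = gzopen("demo.txt.gz", "wb9");   /* "9" = best compression */
        if (!out) return 1;
        gzwrite(out, text, (unsigned)strlen(text));
        gzclose(out);

        char buf[64] = {0};
        gzFile in = gzopen("demo.txt.gz", "rb");     /* transparently decompresses */
        if (!in) return 1;
        int n = gzread(in, buf, sizeof buf - 1);
        gzclose(in);

        printf("restored %d bytes: %s", n, buf);
        return 0;
    }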
  • «Supercomputing Frontiers and Innovations»
    DOI: 10.14529/jsfi170402. Towards Decoupling the Selection of Compression Algorithms from Quality Constraints – An Investigation of Lossy Compression Efficiency. Julian M. Kunkel¹, Anastasiia Novikova², Eugen Betke¹. © The Authors 2017. This paper is published with open access at SuperFri.org. Data-intensive scientific domains use data compression to reduce the storage space needed. Lossless data compression preserves information accurately, but lossy data compression can achieve much higher compression rates depending on the tolerable error margins. There are many ways of defining precision and exploiting this knowledge; therefore, the field of lossy compression is subject to active research. From the perspective of a scientist, only the qualitative definition of the implied loss of data precision should matter. With the Scientific Compression Library (SCIL), we are developing a meta-compressor that allows users to define various quantities for acceptable error and expected performance behavior. The library then picks a suitable chain of algorithms satisfying the user's requirements; this ongoing work is a preliminary stage for the design of an adaptive selector. This approach is a crucial step towards a scientifically safe use of much-needed lossy data compression, because it disentangles the task of determining the scientific characteristics of tolerable noise from the task of determining an optimal compression strategy. Future algorithms can be used without changing application code. In this paper, we evaluate various lossy compression algorithms for compressing different scientific datasets (Isabel, ECHAM6), and focus on the analysis of synthetically created data that serve as blueprints for many observed datasets. We also briefly describe the available quantities of SCIL for defining data precision and introduce two efficient compression algorithms for individual data points.
  • Using Compression for Energy-Optimized Memory Hierarchies
    Using Compression for Energy-Optimized Memory Hierarchies. By Somayeh Sardashti. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN - MADISON, 2015. Date of final oral examination: 2/25/2015. The dissertation is approved by the following members of the Final Oral Committee: Prof. David A. Wood, Professor, Computer Sciences; Prof. Mark D. Hill, Professor, Computer Sciences; Prof. Nam Sung Kim, Associate Professor, Electrical and Computer Engineering; Prof. André Seznec, Professor, Computer Architecture; Prof. Gurindar S. Sohi, Professor, Computer Sciences; Prof. Michael M. Swift, Associate Professor, Computer Sciences. © Copyright by Somayeh Sardashti 2015. All Rights Reserved. Abstract: In multicore processor systems, last-level caches (LLCs) play a crucial role in reducing system energy by (i) filtering out expensive accesses to main memory and (ii) reducing the time spent executing in high-power states. Increasing the LLC size can improve system performance and energy by reducing memory accesses, but at the cost of high area and power overheads. In this dissertation, I explored using compression to effectively improve the LLC capacity and ultimately system performance and energy consumption. Cache compression is a promising technique for expanding effective cache capacity with little area overhead. Compressed caches can achieve the benefits of larger caches using the area and power of smaller caches by fitting more cache blocks in the same cache space. However, previous compressed cache designs have demonstrated only limited benefits due to internal fragmentation, limited tags, and metadata overhead. In addition, most previous proposals targeted improving system performance even at high power and energy overheads.
  • Toward Decoupling the Selection of Compression Algorithms from Quality Constraints
    Toward Decoupling the Selection of Compression Algorithms from Quality Constraints. Julian Kunkel¹, Anastasiia Novikova², Eugen Betke¹, and Armin Schaare². ¹ Deutsches Klimarechenzentrum, [email protected]. ² Universität Hamburg. Abstract. Data-intensive scientific domains use data compression to reduce the storage space needed. Lossless data compression preserves the original information accurately but, on the domain of climate data, usually yields a compression factor of only 2:1. Lossy data compression can achieve much higher compression rates depending on the tolerable error/precision needed. Therefore, the field of lossy compression is still subject to active research. From the perspective of a scientist, the compression algorithm does not matter, but the qualitative information about the implied loss of precision of the data is a concern. With the Scientific Compression Library (SCIL), we are developing a meta-compressor that allows users to set various quantities that define the acceptable error and the expected performance behavior. The ongoing work is a preliminary stage for the design of an automatic compression algorithm selector. The task of this missing key component is the construction of appropriate chains of algorithms to meet the user's requirements. This approach is a crucial step towards a scientifically safe use of much-needed lossy data compression, because it disentangles the task of determining the scientific ground characteristics of tolerable noise from the task of determining an optimal compression strategy given target noise levels and constraints. Future algorithms can be used without change in the application code, once they are integrated into SCIL. In this paper, we describe the user interfaces and quantities, two compression algorithms, and evaluate SCIL's ability to compress climate data.
  • Gzip, Bzip2 and Tar EXPERT PACKING
    LINUXUSER, Command Line: gzip, bzip2, tar. gzip, bzip2 and tar: EXPERT PACKING. A short command is all it takes to pack your data or extract it from an archive. BY HEIKE JURZIK. Archiving provides many benefits: packed and compressed files occupy less space on your disk and require less bandwidth on the Internet. Linux has both GUI-based programs, such as File Roller or Ark, and command-line tools for creating and unpacking various archive types. This article examines some shell tools for archiving files and demonstrates the kind of expert packing that clever combinations of Linux commands offer the command line user. Nicely Packed with "gzip": The gzip (GNU Zip) program is the default packer on Linux. Gzip compresses simple files, but it does not create complete directory archives. In its simplest form, the gzip command looks like this: [...] The original file attributes are retained by the packing process. If you prefer to use a different extension, you can set the -S (suffix) flag to specify your own instead. For example, the command "gzip -S .z image.bmp" creates a compressed file titled image.bmp.z. A gzip file can be unpacked using either gunzip or gzip -d. If the tool discovers a file of the same name in the working directory, it prompts you to make sure that you know you are overwriting this file: "$ gunzip screenie.jpg.gz" / "gunzip: screenie.jpg [...]". The size of the compressed file depends on the distribution of identical strings in the original file. [Listing 1: Compression Compared]
  • Benefits of Hardware Data Compression in Storage Networks Gerry Simmons, Comtech AHA Tony Summers, Comtech AHA SNIA Legal Notice
    Benefits of Hardware Data Compression in Storage Networks. Gerry Simmons, Comtech AHA; Tony Summers, Comtech AHA. SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced without modification; the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Abstract: This tutorial explains the benefits and algorithmic details of lossless data compression in storage networks, and focuses especially on data de-duplication. The material presents a brief history and background of data compression - a primer on the different data compression algorithms in use today. This primer includes performance data on the specific compression algorithms, as well as performance on different data types. Participants will come away with a good understanding of where to place compression in the data path, and the benefits to be gained by such placement. The tutorial will discuss technological advances in compression and how they affect system-level solutions. Agenda: Introduction and Background; Lossless Compression Algorithms; System Implementations; Technology Advances and Compression Hardware; Power Conservation and Efficiency; Conclusion. (Oct 17, 2007. © 2007 Storage Networking Industry Association. All Rights Reserved.)
  • On2 Flix Engine for Linux Installation
    On2 Flix Engine for Linux Installation. Topics: Preparing the System (Java, Perl, PHP, Python); Installation Instructions (1. Accept the EULA; 2. Verify Prerequisites; 3. Uninstall the Previous Version; 4. Verify Language Support; 5. Register On2 Flix Engine; 6. Configure the Installation; 7. Install Files; 8. Configure the System; 9. Build Optional Modules; 10. Installation Complete); Additional Resources. Preparing the System: Perform the following to prepare the system for installation: 1. Read the System Overview document provided with this release. 2. Verify Linux prerequisites. a. Operating System: x86 GNU/Linux with GNU C Library (glibc) version 2.3.2 or higher; x86_64 is supported with 32-bit support libraries installed. To determine the version of glibc installed on your system, execute the following: "$ /lib/ld-linux.so.2 /lib/libc.so.6". This command should produce output similar to the following (note, your output may differ; however, the version number should be equal or greater): GNU C Library stable release version 2.3.2, by Roland McGrath et al. Copyright (C) 2003 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Compiled by GNU CC version 3.3.3. Compiled on a Linux 2.4.26 system on 2004-05-24. Available extensions: GNU libio by Per Bothner; crypt add-on version 2.1 by Michael Glad and others; linuxthreads-0.10 by Xavier Leroy; BIND-8.2.3-T5B; libthread_db work sponsored by Alpha Processor Inc; NIS(YP)/NIS+ NSS modules 0.19 by Thorsten Kukuk. Report bugs using the `glibcbug' script to <[email protected]>.