Practical Data Compression for Modern Memory Hierarchies


Gennady G. Pekhimenko
CMU-CS-16-116
July 2016

Computer Science Department
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Thesis Committee:
Todd C. Mowry, Co-Chair
Onur Mutlu, Co-Chair
Kayvon Fatahalian
David A. Wood, University of Wisconsin-Madison
Douglas C. Burger, Microsoft
Michael A. Kozuch, Intel

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Copyright © 2016 Gennady G. Pekhimenko

This research was sponsored by the National Science Foundation under grant numbers CNS-0720790, CCF-0953246, CCF-1116898, CNS-1423172, CCF-1212962, CNS-1320531, CCF-1147397, and CNS-1409723, the Defense Advanced Research Projects Agency, the Semiconductor Research Corporation, the Gigascale Systems Research Center, the Intel URO Memory Hierarchy Program, Intel ISTC-CC, and gifts from AMD, Google, IBM, Intel, Nvidia, Oracle, Qualcomm, Samsung, and VMware. We also acknowledge the support through PhD fellowships from Nvidia, Microsoft, Qualcomm, and NSERC. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of any sponsoring institution, the U.S. government, or any other entity.

Keywords: Data Compression, Memory Hierarchy, Cache Compression, Memory Compression, Bandwidth Compression, DRAM, Main Memory, Memory Subsystem

Abstract

Although compression has been widely used for decades to reduce file sizes (thereby conserving storage capacity and network bandwidth when transferring files), there has been limited use of hardware-based compression within modern memory hierarchies of commodity systems. Why not? Especially as programs become increasingly data-intensive, the capacity and bandwidth within the memory hierarchy (including caches, main memory, and their associated interconnects) have already become increasingly important bottlenecks. If hardware-based data compression could be applied successfully to the memory hierarchy, it could potentially relieve pressure on these bottlenecks by increasing effective capacity, increasing effective bandwidth, and even reducing energy consumption.

In this thesis, we describe a new, practical approach to integrating hardware-based data compression within the memory hierarchy, including on-chip caches, main memory, and both on-chip and off-chip interconnects. This new approach is fast, simple, and effective in saving storage space. A key insight in our approach is that access time (including decompression latency) is critical in modern memory hierarchies. By combining inexpensive hardware support with modest OS support, our holistic approach to compression achieves substantial improvements in performance and energy efficiency across the memory hierarchy.

Using this new approach, we make several major contributions in this thesis. First, we propose a new compression algorithm, Base-Delta-Immediate Compression (B∆I), that achieves a high compression ratio with very low compression/decompression latency. B∆I exploits the existing low dynamic range of values present in many cache lines to compress them to smaller sizes using Base+Delta encoding.
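To make the Base+Delta idea concrete, the following is a minimal software sketch of how a single cache line could be encoded, assuming a 64-byte line of sixteen 4-byte words, one 4-byte base, and fixed 1-byte deltas. The function names and geometry are illustrative assumptions, not the exact hardware design evaluated later in the thesis (which supports multiple base and delta widths plus an implicit zero base for immediate values).

```c
#include <stdbool.h>
#include <stdint.h>

#define WORDS_PER_LINE 16   /* 64-byte line viewed as 16 x 4-byte words (illustrative) */

/* Compressed form: one base plus a narrow delta per word (4 + 16 = 20 bytes). */
typedef struct {
    uint32_t base;                   /* first word of the line, used as the base */
    int8_t   delta[WORDS_PER_LINE];  /* per-word difference from the base */
} bd_line_t;

/* Try to compress one line; fails if any delta does not fit in 8 bits,
 * i.e., if the line lacks the low dynamic range that B∆I relies on. */
static bool bd_compress(const uint32_t *line, bd_line_t *out) {
    out->base = line[0];
    for (int i = 0; i < WORDS_PER_LINE; i++) {
        int64_t d = (int64_t)line[i] - (int64_t)out->base;
        if (d < INT8_MIN || d > INT8_MAX)
            return false;            /* store the line uncompressed instead */
        out->delta[i] = (int8_t)d;
    }
    return true;
}

/* Decompression is a set of independent narrow additions: base + delta[i]. */
static void bd_decompress(const bd_line_t *in, uint32_t *line) {
    for (int i = 0; i < WORDS_PER_LINE; i++)
        line[i] = in->base + (uint32_t)(int32_t)in->delta[i];
}
```

Because decompression amounts to adding small deltas to a single base in parallel, it can be done in very few cycles, which is why the scheme adds so little latency on the critical access path.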
Second, we observe that the compressed size of a cache block can be indicative of its reuse. We use this observation to develop a new cache insertion policy for compressed caches, the Size-based Insertion Policy (SIP), which uses the size of a compressed block as one of the metrics to predict its potential future reuse.

Third, we propose a new main memory compression framework, Linearly Compressed Pages (LCP), that significantly reduces the complexity and power cost of supporting main memory compression. We demonstrate that any compression algorithm can be adapted to fit the requirements of LCP, and that LCP can be efficiently integrated with the existing cache compression designs, avoiding extra compression/decompression.
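The key simplification behind LCP can be illustrated with a short address-calculation sketch: if every cache line within a page is compressed to one common target size, the main-memory location of any compressed line follows from a multiply and an add, with no per-line offset table. The structure and field names below are illustrative assumptions rather than the exact metadata layout proposed in this thesis, and the handling of lines that do not fit in the target size is omitted.

```c
#include <stdint.h>

/* Per-page information a memory controller would keep in this sketch:
 * where the compressed page starts and the single compressed line size
 * chosen for that page (assuming 4 KB pages and 64-byte lines). */
typedef struct {
    uint64_t comp_page_base;   /* physical address of the compressed page */
    uint32_t comp_line_size;   /* common compressed size, e.g., 16 or 32 bytes */
} lcp_page_meta_t;

/* Because all lines in the page share one compressed size, locating line
 * 'index' requires only linear arithmetic rather than a per-line lookup. */
static inline uint64_t lcp_line_address(const lcp_page_meta_t *m, uint32_t index) {
    return m->comp_page_base + (uint64_t)index * m->comp_line_size;
}
```

This linear layout is what allows compressed data to be located quickly; lines that do not compress to the chosen size would need to be stored separately (for example, in an exception area of the page), a detail this sketch leaves out.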
Finally, in addition to exploring compression-related issues and enabling practical solutions in modern CPU systems, we discover new problems in realizing hardware-based compression for GPU-based systems and develop new solutions to solve these problems.

Acknowledgments

First of all, I would like to thank my advisers, Todd Mowry and Onur Mutlu, for always trusting me in my research experiments, giving me enough resources and opportunities to improve my work, as well as my presentation and writing skills. I am grateful to Michael Kozuch and Phillip Gibbons for being both my mentors and collaborators.

I am grateful to the members of my PhD committee, Kayvon Fatahalian, David Wood, and Doug Burger, for their valuable feedback and for making the final steps towards my PhD very smooth. I am grateful to Deb Cavlovich, who allowed me to focus on my research by magically solving all other problems.

I am grateful to the SAFARI group members, who were more than just lab mates. Vivek Seshadri was always supportive of my crazy ideas and was willing to dedicate his time and energy to help me in my work. Chris Fallin was a rare example of pure smartness mixed with a great work ethic, but still always had time for an interesting discussion. From Yoongu Kim I learned a lot about the importance of details, and hopefully I learned something from his aesthetic sense as well. Lavanya Subramanian was my fellow cubicle mate, who showed me how to successfully mix work with personal life and how to be supportive of others. Justin Meza helped me improve my presentation and writing skills in a very friendly manner (as he does everything else). Donghyuk Lee taught me everything I know about DRAM and was always an example of dedication to work. Nandita Vijaykumar was my mentee, collaborator, and mentor all at the same time, but, most importantly, a friend who was always willing to help. Rachata Ausavarungnirun was our food guru and one of the most reliable and friendly people in the group. Hongyi Xin reminded me of everything I had almost forgotten from biology and history classes, and also taught me everything I know now in the amazing field of bioinformatics. Kevin Chang and Kevin Hsieh were always helpful and supportive when it mattered most. Samira Khan was always available for a friendly chat when I really needed it. Saugata Ghose was my rescue guy during our amazing trip to Prague. I also thank other members of the SAFARI group for their assistance and support: HanBin Yoon, Jamie Liu, Ben Jaiyen, Yixin Luo, Yang Li, and Amirali Boroumand. Michelle Goodstein, Olatunji Ruwase, and Evangelos Vlachos, senior PhD students, shared their experience and provided a lot of feedback early in my career.

I am grateful to Tyler Huberty and Rui Cai for contributing a lot to my research and for being excellent undergraduate/masters researchers who selected me as a mentor from all the other options they had.

During my time at Carnegie Mellon, I met a lot of wonderful people: Michael Papamichael, Gabe Weisz, Alexey Tumanov, Danai Koutra, and many others who helped and supported me in many different ways. I am also grateful to the people in the PDL and CALCM groups for accepting me into their communities.

I am grateful to my internship mentors for making my work at their companies successful for both sides. At Microsoft Research, I had the privilege to work closely with Karin Strauss, Dimitrios Lymberopoulos, Oriana Riva, Ella Bounimova, Patrice Godefroid, and David Molnar. At NVIDIA Research, I had the privilege to work closely with Evgeny Bolotin, Steve Keckler, and Mike O’Connor. I am also grateful to my amazing collaborators from Georgia Tech: Hadi Esmaeilzadeh, Amir Yazdanbaksh, and Bradley Thwaites.

And last, but not least, I would like to acknowledge the enormous love and support that I received from my family: my wife Daria and our daughter Alyssa, my parents, Gennady and Larissa, and my brother Evgeny.

Contents

Abstract
Acknowledgments
1 Introduction
  1.1 Focus of This Dissertation: Efficiency of the Memory Hierarchy
    1.1.1 A Compelling Possibility: Compressing Data throughout the Full Memory Hierarchy
    1.1.2 Why Traditional Data Compression Is Ineffective for Modern Memory Systems
  1.2 Related Work
    1.2.1 3D-Stacked DRAM Architectures
    1.2.2 In-Memory Computing
    1.2.3 Improving DRAM Performance
    1.2.4 Fine-grain Memory Organization and Deduplication
    1.2.5 Data Compression for Graphics
    1.2.6 Software-based Data Compression
    1.2.7 Code Compression
    1.2.8 Hardware-based Data Compression
  1.3 Thesis Statement: Fast and Simple Compression throughout the Memory Hierarchy
  1.4 Contributions
2 Key Challenges for Hardware-Based Memory Compression
  2.1 Compression and Decompression Latency
    2.1.1 Cache Compression
    2.1.2 Main Memory
    2.1.3 On-Chip/Off-Chip Buses
  2.2 Quickly Locating Compressed Data
  2.3 Fragmentation
  2.4 Supporting Variable Size after Compression
  2.5 Data Changes after Compression
  2.6 Summary of Our Proposal
3 Base-Delta-Immediate Compression
  3.1 Introduction