Implementing Compression on Distributed Time Series Database

Michael Burman
School of Science

Thesis submitted for examination for the degree of Master of Science in Technology.
Espoo 05.11.2017

Supervisor: Prof. Kari Smolander
Advisor: Mgr. Jiri Kremser

Aalto University, P.O. BOX 11000, 00076 AALTO, www.aalto.fi
Abstract of the master's thesis

Author: Michael Burman
Title: Implementing compression on distributed time series database
Major: Computer Science    Code of major: SCI3042
Supervisor: Prof. Kari Smolander    Advisor: Mgr. Jiri Kremser
Date: 05.11.2017    Number of pages: 70+4    Language: English

Abstract

The rise of microservices and distributed applications in containerized deployments is placing an increasing burden on monitoring systems, pushing up the storage performance required to serve large queries. In this thesis we present the changes we made to our distributed time series database, Hawkular-Metrics, so that it stores data in Cassandra more efficiently. We show that our methods provide significant space savings, ranging from a 50% to 95% reduction in storage usage, while reducing query times by over 90% compared to the baseline approach of storing raw data points in Cassandra. We also present our modified version of the Gorilla compression algorithm, used in our solution, which provides almost three times the compression throughput at an equal compression ratio.

Keywords: time series, compression, performance, storage

Abstract (in Finnish)

Author: Michael Burman
Title: Pakkausmenetelmät hajautetussa aikasarjatietokannassa
Major: Computer Science    Code of major: SCI3042
Supervisor and advisor: Prof. Kari Smolander
Date: 05.11.2017    Number of pages: 70+4    Language: English

The spread of distributed systems has increased the amount of data in monitoring systems, as the number of time series has grown and data points are stored more frequently. This has placed a growing load on disk systems, which struggle to serve the growing queries. In this thesis we present changes to our distributed time series database, Hawkular-Metrics, that take advantage of more efficient compression and organization of data when it is stored in Cassandra. We sped up queries almost tenfold while reducing disk space requirements by 50-95%, depending on the data set. We also present our changes to the Gorilla compression algorithm, which we use to achieve these results. Our changes speed up compression almost threefold compared to the original algorithm, with no loss in compression ratio.

Keywords: time series, performance, compression methods

Preface

I would like to thank Red Hat for providing me with an interesting problem to work on, and my team for providing valuable feedback. Special thanks go to John Sanda for great discussions, code reviews and ideas, and to Filip Brychta for relentless quality testing. I would also like to thank Kari Smolander for his feedback on the thesis. Finally, this thesis would not have been possible without the support of my wife and my kids, Linnea, Elli and Nella.

Otaniemi, 5.11.2017
Michael Burman
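As background for the abstract's claims, the following minimal Java sketch illustrates the core idea of the Gorilla encoding that the thesis builds on: consecutive timestamps are reduced to deltas-of-deltas and consecutive values are XORed, so regularly spaced samples and slowly changing values cost only a bit or two per data point. This is an illustrative simplification written for this summary, not the Hawkular-Metrics implementation or the modified variant evaluated in the thesis; the class and method names are invented, and the real algorithm uses a larger set of timestamp control codes and reuses the previous leading-zero window for values.

import java.util.BitSet;

public class GorillaSketch {
    private final BitSet bits = new BitSet();
    private int pos = 0;
    private long prevTimestamp;
    private long prevDelta;      // defaults to 0 before the second point
    private long prevValueBits;
    private boolean first = true;

    // Append the 'count' low-order bits of 'value', most significant bit first.
    private void writeBits(long value, int count) {
        for (int i = count - 1; i >= 0; i--) {
            bits.set(pos++, ((value >>> i) & 1L) == 1L);
        }
    }

    public void addPoint(long timestamp, double value) {
        long valueBits = Double.doubleToLongBits(value);
        if (first) {
            // The first point is stored verbatim as a reference.
            writeBits(timestamp, 64);
            writeBits(valueBits, 64);
            prevTimestamp = timestamp;
            prevValueBits = valueBits;
            first = false;
            return;
        }
        // Timestamps: encode only the change between consecutive deltas.
        long delta = timestamp - prevTimestamp;
        long dod = delta - prevDelta;
        if (dod == 0) {
            writeBits(0b0, 1);           // unchanged interval costs one bit
        } else if (dod >= -63 && dod <= 64) {
            writeBits(0b10, 2);          // small change: 7-bit two's complement
            writeBits(dod & 0x7F, 7);
        } else {
            writeBits(0b11, 2);          // rare large jump: full 64 bits
            writeBits(dod, 64);
        }
        prevDelta = delta;
        prevTimestamp = timestamp;

        // Values: XOR against the previous value and store only the
        // meaningful (non-zero) middle bits of the result.
        long xor = valueBits ^ prevValueBits;
        if (xor == 0) {
            writeBits(0b0, 1);           // identical value costs one bit
        } else {
            int leading = Long.numberOfLeadingZeros(xor);
            int trailing = Long.numberOfTrailingZeros(xor);
            int length = 64 - leading - trailing;
            writeBits(0b1, 1);
            writeBits(leading, 6);
            writeBits(length & 0x3F, 6); // a length of 64 is written as 0
            writeBits(xor >>> trailing, length);
        }
        prevValueBits = valueBits;
    }

    public int sizeInBits() { return pos; }

    public static void main(String[] args) {
        GorillaSketch s = new GorillaSketch();
        long start = 1_500_000_000_000L;
        for (int i = 0; i < 100; i++) {
            s.addPoint(start + i * 10_000L, 20.0);
        }
        System.out.println(s.sizeInBits() + " bits for 100 points");
    }
}

Running the example encodes 100 regularly spaced, constant-valued points in 391 bits, against the 12,800 bits the raw 16-byte points occupy; irregular, noisy series compress correspondingly less, which is why compression ratios vary with the data set.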
Contents

Abstract
Abstract (in Finnish)
Preface
Contents
Symbols and abbreviations

1 Introduction
  1.1 Outline
2 Preliminaries
  2.1 Time series
    2.1.1 Time series storage
  2.2 Hawkular
    2.2.1 Hawkular-Metrics
  2.3 Cassandra
    2.3.1 Storage
  2.4 Related work
    2.4.1 Gorilla / Beringei
    2.4.2 Chronix
    2.4.3 Prometheus
    2.4.4 InfluxDB
    2.4.5 Cassandra based time series storages
3 Compression
    3.0.1 Dictionary based coding - LZ77 and LZ78
    3.0.2 Huffman coding
    3.0.3 Arithmetic coding
    3.0.4 Asymmetric Numeral Systems
    3.0.5 Run-Length Encoding
    3.0.6 Delta coding
  3.1 Generic compression algorithms
    3.1.1 LZ4
    3.1.2 Deflate
    3.1.3 Zstandard
  3.2 Gorilla compression
    3.2.1 Compressing time stamps
    3.2.2 Compressing values
  3.3 Integer compression techniques
    3.3.1 Variable Byte, Byte-aligned
    3.3.2 The Simple family, Word-aligned
    3.3.3 Frame-Of-Reference
    3.3.4 Patched coding
    3.3.5 Improvements to PFOR
  3.4 Value predictors
    3.4.1 Computational Predictors
    3.4.2 Context Based Predictors
  3.5 Double-precision floating point compression algorithms
  3.6 Implementing compression algorithm
4 Research material and methods
  4.1 Experiments setup
    4.1.1 Hardware
    4.1.2 Software
  4.2 Compression method evaluation criteria
  4.3 Architectural evaluation criteria
  4.4 Data sets
    4.4.1 Synthetic data
    4.4.2 Real world data
5 Design and implementation
  5.1 Compression algorithm implementations
    5.1.1 Simple family implementations
    5.1.2 Gorilla implementation
    5.1.3 FPC implementation
  5.2 Implementing compression to Hawkular-Metrics
    5.2.1 Data modeling
    5.2.2 Execution
    5.2.3 Transforming data to wide-column format
  5.3 Temporary tables solution
    5.3.1 Table management
    5.3.2 Writing data
    5.3.3 Reading data
    5.3.4 Data partitioning strategies
6 Experiments
  6.1 Data set compression ratio
    6.1.1 Generated data sets
    6.1.2 Real world data sets
  6.2 Performance of the compression job
  6.3 Query performance
7 Observations
  7.1 Compression ratio
  7.2 Compression speed
  7.3 Selecting the compression algorithm
  7.4 Architectural choices
  7.5 Query performance
  7.6 Future work
8 Summary
References
A Optimized Java
  A.1 Intrinsics
  A.2 Inlining
  A.3 Superword Level Parallelism
  A.4 Generic
B Source code repositories
  B.0.1 Repositories
  B.1 Licensing

Symbols and abbreviations

Symbols
  δ  delta

Operators
  ⊕  eXclusive OR

Abbreviations
  GC          Garbage Collector
  JIT         Just In Time compiler
  JVM         Java Virtual Machine
  ODS         Facebook's Operational Data Store
  LSM         Log-Structured Merge Tree
  TSM         Time-Structured Merge Tree
  RLE         Run-Length Encoding
  FFT         Fast Fourier Transform
  LZ77        Lempel and Ziv compression algorithm from the 1977 paper
  LZ78        Lempel and Ziv compression algorithm from the 1978 paper
  FOR         Frame-Of-Reference
  PFOR        Patched Frame-Of-Reference
  AFOR-1      Delbru et al. 32-integer version of binary packing
  AFOR-3      Greedy algorithm for binary packing partitioning
  VSEncoding  Vector of Split Encoding
  FCM         Finite Context Method predictors
  DFCM        Differential FCM
  ANS         Asymmetric Numeral System
  FSE         Finite State Entropy
  Zstd        Zstandard
  NSM         n-ary storage model
  DSM         decomposition storage model
  SLP         Superword Level Parallelism

1 Introduction

Distributed applications are generating an increasing amount of data, given all the modern possibilities for instrumenting an application. This data must be stored and processed for queries that require historical knowledge, such as autoscaling based on historical usage data, or for customers' internal or external billing purposes, in which case the exact data must be kept and stored in multiple copies.

Containers, which are isolated application images that package everything needed to run an application, including code and runtime requirements [126], have been the hype in infrastructure development for a few years now. To manage large numbers of containers, solutions such as Kubernetes [127], which automates the deployment, management and scaling of containers, have risen in popularity. Due to the lower overhead of containers compared to virtual machines, infrastructure can run workloads with higher efficiency. Containerized systems have been a natural deployment target for microservices and lambda architectures, driving up the number of components in a system.

These changes have required monitoring systems to digest more and more data all the time, quickly growing the amount of