LZ4m: A Fast Compression Algorithm for In-Memory Data


2017 IEEE International Conference on Consumer Electronics (ICCE)

Se-Jun Kwon∗, Sang-Hoon Kim†, Hyeong-Jun Kim∗, and Jin-Soo Kim∗
∗College of Information and Communication Engineering, Sungkyunkwan University
Email: [email protected], [email protected], [email protected]
†School of Computing, Korea Advanced Institute of Science and Technology (KAIST)
Email: [email protected]

Abstract—Compressing in-memory data is a cost-effective solution for dealing with the memory demand of data-intensive applications. This paper proposes a fast data compression algorithm for in-memory data that improves performance by utilizing the characteristics frequently observed in in-memory data.

I. INTRODUCTION

Dealing with the ever-increasing memory requirements of data-intensive applications is the subject of intensive work in both industry and academia. In particular, it has become an important issue in consumer devices, as their fast evolution places a high demand on memory. As a cost-effective answer to this demand, manufacturers attempt to compress in-memory data, thereby increasing the effective memory size and lowering manufacturing cost [1], [2]. In-memory data compression techniques are becoming especially important because the market for consumer devices is extremely price sensitive, and sales have shifted from premium models to cheaper ones equipped with less memory [3].

Data compression, or compression for short, refers to techniques that reduce the size of data by representing it in a more concise form. Many studies have employed compression for in-memory data, ranging from computer architecture to operating systems. For instance, some memory technologies [4], [?] compress cache lines at the memory controller before writing them to memory, providing a larger amount of memory than is physically equipped. Compcache and zram [5] identify rarely referenced page frames and compress them to reduce their memory footprint.

A compression algorithm for in-memory data must be fast in both compression and decompression while providing an acceptable compression ratio¹, since the performance of memory accesses heavily influences overall system performance. To this end, Wilson et al. proposed the WKdm algorithm, which utilizes regularities frequently observed in in-memory data [6]. However, although WKdm exhibits outstanding compression speed, its poor compression ratio is the major concern that hinders its adoption. On the other hand, general-purpose compression algorithms such as Deflate [7], LZ4 [8], and LZO [9] compress in-memory data better than WKdm, but they exhibit poor compression and decompression speed, thereby impairing system performance.

This paper proposes a novel compression algorithm called LZ4m, which stands for "LZ4 for in-memory data". We expedite the scanning of the input stream in the original LZ4 algorithm by utilizing the characteristics frequently observed in in-memory data. We also revise the encoding scheme of LZ4 so that the compressed data can be represented in a more concise form. Our evaluation with data obtained from a running mobile device shows that LZ4m outperforms previous compression algorithms in compression and decompression speed by up to 2.1× and 1.8×, respectively, with a marginal loss of less than 3% in compression ratio.

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIP) (No. NRF-2016R1A2A1A05005494).
¹Throughout the paper, we use the term compression ratio to refer to the ratio of the compressed size to the original size.

II. ZIV-LEMPEL ALGORITHM AND LZ4

Many compression algorithms aiming at low compression latency are based on the Lempel-Ziv algorithm [10]. The key strategy of the Lempel-Ziv algorithm is to scan an input stream and find the longest match between the substring starting at the current scanning position and the substrings in the already-scanned part of the stream. If such a match is found, the matched substring at the current position is substituted with a pair of the backward offset to the match and the length of the match. However, identifying the longest match can incur high overheads in time and space. Thus, practical algorithms usually alter this strategy so as to quickly identify a substring that is long enough to be worth compressing [7], [8], [9].

LZ4 is one of the most popular and successful compression algorithms based on the Lempel-Ziv algorithm. Figure 1 illustrates how LZ4 identifies a match and encodes an input stream. LZ4 scans the input with a 4-byte window ("DABE" at position 10 in this case) and checks whether the substring in the window has appeared before. To assist the matching, LZ4 maintains a hash table that maps a 4-byte substring to its position from the start of the input stream. If the hash table contains an entry for the current 4-byte substring in the scanning window (Figure 1(a)-1), the substring appeared at that position in the previously scanned stream (Figure 1(a)-2); therefore, the substring starting at that position shares a prefix with the substring starting at the current scanning position. LZ4 exploits this property and finds the longest match starting from these two positions. In this example, the longest matching string is "DABEGEA". The corresponding hash table entry is then updated to the current scanning position, and the scanning window is slid forward by the length of the match (Figure 1(a)-3). If no entry for the current substring is found in the hash table, a new entry is inserted and the scanning window is advanced by one byte. The scanning is repeated until the window reaches the end of the stream.
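In code, the matching loop described above can be sketched as follows. This is a minimal illustration, not the reference LZ4 implementation: the hash function, table size, and all identifiers are our own assumptions, and the encoding step is omitted. The program only reports how the example stream of Figure 1 is partitioned into literals and matches.

```c
/* Minimal sketch of the LZ4-style matching loop (illustrative,
 * not the reference LZ4 code). Encoding is omitted. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HASH_BITS 12
#define HASH_SIZE (1 << HASH_BITS)

/* Map a 4-byte substring to a table slot (multiplicative hash). */
static uint32_t hash4(const uint8_t *p) {
    uint32_t v;
    memcpy(&v, p, 4);
    return (v * 2654435761u) >> (32 - HASH_BITS);
}

static void scan(const uint8_t *in, size_t len) {
    int32_t table[HASH_SIZE];            /* position of last occurrence */
    memset(table, -1, sizeof(table));    /* -1: no entry yet */
    size_t pos = 0, lit_start = 0;

    while (pos + 4 <= len) {
        uint32_t h = hash4(in + pos);
        int32_t cand = table[h];
        table[h] = (int32_t)pos;         /* update entry to current position */

        /* Verify the candidate really shares the 4-byte prefix
           (different substrings can hash to the same slot). */
        if (cand >= 0 && memcmp(in + cand, in + pos, 4) == 0) {
            size_t mlen = 4;             /* extend the match forward */
            while (pos + mlen < len && in[cand + mlen] == in[pos + mlen])
                mlen++;
            printf("literal [%zu,%zu), match offset=%zu len=%zu\n",
                   lit_start, pos, pos - (size_t)cand, mlen);
            pos += mlen;                 /* slide window past the match */
            lit_start = pos;
        } else {
            pos++;                       /* no match: advance one byte */
        }
    }
    if (lit_start < len)
        printf("trailing literal [%zu,%zu)\n", lit_start, len);
}

int main(void) {
    const char *s = "FCDABEGEABDABEGEAFCDD";  /* stream from Fig. 1 */
    scan((const uint8_t *)s, strlen(s));
    return 0;
}
```

On the Figure 1 stream this reports a 10-byte literal followed by a match with offset 8 and length 7 ("DABEGEA"), matching the example in the paper.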
According to this matching scheme, the input stream is partitioned into a number of substrings, called literals and matches. A literal is a substring that does not match any previously appeared substring. A literal is always followed by one or more matches unless it is the last substring of the input stream. LZ4 encodes the substrings into encoding units comprised of a token and a body, as shown in Figure 1(b). Every encoding unit starts with a 1-byte token. If the substring to encode is a literal followed by a match, the upper 4 bits of the token are set to the length of the literal and the lower 4 bits to the length of the following match. If 4 bits are not enough to represent a length, i.e., the substring is longer than 15 bytes, the corresponding bits are set to 1111₂ and the length minus 15 is placed in the body that follows the token.
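A hedged sketch of this unit encoding is given below. It follows the paper's description of Figure 1(b) literally (lengths stored as-is, with 15 signaling an extension); real LZ4 additionally biases the match length by its 4-byte minimum, which we ignore here, and the multi-byte extension convention (255-valued bytes followed by a remainder) is borrowed from real LZ4, since the paper does not spell it out. Helper names are ours.

```c
/* Sketch of one LZ4 encoding unit (token + body) per Fig. 1(b).
 * Simplifications noted in the text above. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Emit the extension bytes for a length that did not fit in 4 bits. */
static size_t put_ext(uint8_t *out, size_t len) {
    size_t n = 0;
    len -= 15;                      /* 15 is already encoded in the token */
    while (len >= 255) { out[n++] = 255; len -= 255; }
    out[n++] = (uint8_t)len;
    return n;
}

/* Encode one literal-followed-by-match unit; returns bytes written. */
static size_t encode_unit(uint8_t *out, const uint8_t *lit, size_t lit_len,
                          uint16_t offset, size_t match_len) {
    size_t n = 0;
    unsigned tok_l = lit_len   < 15 ? (unsigned)lit_len   : 15;
    unsigned tok_m = match_len < 15 ? (unsigned)match_len : 15;

    out[n++] = (uint8_t)((tok_l << 4) | tok_m);   /* 1-byte token */
    if (tok_l == 15) n += put_ext(out + n, lit_len);
    memcpy(out + n, lit, lit_len);                /* literal data */
    n += lit_len;
    out[n++] = (uint8_t)(offset & 0xff);          /* 2-byte offset */
    out[n++] = (uint8_t)(offset >> 8);
    if (tok_m == 15) n += put_ext(out + n, match_len);
    return n;
}

int main(void) {
    /* The unit produced by the Fig. 1 example: the 10-byte literal
       "FCDABEGEAB" followed by a match with offset 8 and length 7. */
    uint8_t buf[64];
    size_t n = encode_unit(buf, (const uint8_t *)"FCDABEGEAB", 10, 8, 7);
    printf("%zu bytes, token=0x%02x\n", n, buf[0]);   /* 13 bytes, 0xa7 */
    return 0;
}
```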
Fig. 1. Matching process and encoding of LZ4. (a) The stream is scanned with a 4-byte window, and a hash table maps 4-byte substrings to positions (here FCDA→0, CDAB→1, DABE→2, updated to 10). (b) An encoding unit: a 1-byte token holding a 4-bit literal length and a 4-bit match length, followed by an extended length (0+ bytes), the literal data (0+ bytes), a 2-byte offset, and an extended match length (0+ bytes).

Fig. 2. Matching process and encoding of LZ4m. (a) The same matching process performed at 4-byte granularity (here FCDA→0, BEGE→4, updated to 12, ABDA→8). (b) An encoding unit: a 1-byte token holding 2 offset bits, a 3-bit literal length, and a 3-bit match length, followed by an extended length (0+ bytes), the literal data (0+ bytes), a 1-byte offset, and an extended match length (0+ bytes).

III. PROPOSED LZ4M ALGORITHM

LZ4 focuses on compression and decompression speed with an acceptable compression ratio. However, as LZ4 is designed as a general-purpose compression algorithm, it does not utilize the inherent characteristics of in-memory data.

In-memory data consists of virtual memory pages that contain data from the stack and heap regions of applications. The stack and heap regions contain constants, variables, pointers, and arrays of basic types, which are usually structured and aligned to the word boundary of the system [6]. Based on this observation, we can expect that compression can be accelerated without a significant loss of compression opportunity by scanning and matching data at word granularity. In addition, since the larger granularity requires fewer bits to represent lengths and offsets, the substrings (i.e., literals and matches) can be encoded in a more concise form.

We revise the LZ4 algorithm to utilize these characteristics and propose the LZ4m algorithm, which stands for "LZ4 for in-memory data". Figure 2(a) illustrates the modified compression scheme and the unit encoding of LZ4m. LZ4m uses the same scanning window and hash table as the original LZ4. In contrast to the original algorithm, however, LZ4m scans the input stream and finds matches at 4-byte granularity. If the hash table indicates that no prefix match exists, LZ4m advances the window by 4 bytes and repeats the prefix matching; as shown in Figure 2(a), after examining the prefix at position 8, the next position of the window is 12 instead of 9. If a prefix match is found, LZ4m compares subsequent data
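The change to the scan loop can be sketched as follows, reusing the scaffolding of the earlier LZ4 sketch: the window advances in 4-byte words and matches are extended a word at a time. The pack_token() layout is our reading of the field widths recoverable from Figure 2(b) — 2 high offset bits plus two 3-bit lengths, all counted in words — and is an assumption, not code from the paper; literal bookkeeping is omitted for brevity.

```c
/* Sketch of the LZ4m word-granularity scan (assumptions as noted
 * in the text above). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HASH_BITS 12
#define HASH_SIZE (1 << HASH_BITS)

static uint32_t hash4(const uint8_t *p) {
    uint32_t v;
    memcpy(&v, p, 4);
    return (v * 2654435761u) >> (32 - HASH_BITS);
}

/* Pack the 1-byte LZ4m token (layout assumed from Fig. 2(b)):
 * 2 high offset bits, 3-bit literal length, 3-bit match length,
 * in word units; a field value of 7 would signal an extension. */
static uint8_t pack_token(uint32_t off_w, uint32_t lit_w, uint32_t match_w) {
    uint32_t l = lit_w < 7 ? lit_w : 7;
    uint32_t m = match_w < 7 ? match_w : 7;
    return (uint8_t)((((off_w >> 8) & 0x3u) << 6) | (l << 3) | m);
}

int main(void) {
    const uint8_t in[] = "ABCDxxxxABCDABCDEFGHEFGH"; /* made-up aligned data */
    size_t len = sizeof(in) - 1;
    int32_t table[HASH_SIZE];
    memset(table, -1, sizeof(table));
    size_t pos = 0;

    while (pos + 4 <= len) {
        uint32_t h = hash4(in + pos);
        int32_t cand = table[h];
        table[h] = (int32_t)pos;
        if (cand >= 0 && memcmp(in + cand, in + pos, 4) == 0) {
            size_t mlen = 4;                     /* extend in whole words */
            while (pos + mlen + 4 <= len &&
                   memcmp(in + cand + mlen, in + pos + mlen, 4) == 0)
                mlen += 4;
            size_t off_w = (pos - (size_t)cand) / 4;
            printf("match at %zu: offset=%zu words, len=%zu words, token=0x%02x\n",
                   pos, off_w, mlen / 4,
                   pack_token((uint32_t)off_w, 0, (uint32_t)(mlen / 4)));
            pos += mlen;
        } else {
            pos += 4;                            /* LZ4m: skip a whole word */
        }
    }
    return 0;
}
```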