Polynomial Data Compression for Large-Scale Physics Experiments


Pierre Aubert · Thomas Vuillaume · Gilles Maurin · Jean Jacquemier · Giovanni Lamanna · Nahid Emad

arXiv:1805.01844v1 [cs.NA] 3 May 2018

Abstract: The new generation of research experiments will add a huge data surge to the continuously increasing data production of current experiments. This surge necessitates efficient compression techniques that guarantee an optimum trade-off between the compression ratio and the corresponding compression/decompression speed, without affecting data integrity. This work presents a lossless compression algorithm to compress the physics data generated by Astronomy, Astrophysics and Particle Physics experiments. The developed algorithms have been tuned and tested on a real use case: the next-generation ground-based high-energy gamma-ray observatory, the Cherenkov Telescope Array (CTA), which requires significant compression performance. Stand-alone, the proposed compression method is very fast and reasonably efficient. Alternatively, applied as a pre-compression algorithm, it can accelerate common methods like LZMA while keeping close compression performance.

Keywords: Big data · HPC · lossless compression · white noise

Affiliations: P. Aubert, T. Vuillaume, G. Maurin, J. Jacquemier, G. Lamanna: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LAPP, 74000 Annecy, France (e-mail: [email protected]). P. Aubert, N. Emad: Laboratoire d'informatique Parallélisme Réseaux Algorithmes Distribués, UFR des Sciences, 45 avenue des États-Unis, 78035 Versailles; and Maison de la Simulation, Université de Versailles Saint-Quentin-en-Yvelines, USR 3441 CEA Saclay, 91191 Gif-sur-Yvette cedex.

1 Introduction

Several current and next-generation experimental infrastructures are concerned by the increasing volume of data that they generate and manage. This is also the case in the Astrophysics and Astroparticle Physics research domains, where several projects are going to produce a data deluge of the order of several tens of Peta-Bytes (PB) per year [1] (as in the case of CTA), up to some Exa-Bytes (as for the next-generation astronomical radio observatory, SKA [2]). Such an increasing data rate implies considerable technical issues at all levels of the data flow, such as data storage, processing, dissemination and preservation.

The most efficient compression algorithms generally used for picture (JPEG), video (H264) or music (MP3) files provide compression ratios greater than 10. These algorithms are lossy and therefore not applicable in a scientific context where the data content is critical and inexact approximations and/or partial data discarding are not acceptable. In this work, we focus on compression methods that address the storage, handling and transmission of ever-larger data volumes while not compromising the data content.

The following types of lossless compression methods are applicable in such situations. LZMA [3], LZ78 [4], BZIP2 [5], GZIP [6], Zstandard [7] and the Huffman algorithms are often employed because they provide the best compression ratios; their compression speeds, however, impose significant constraints considering the data volumes at hand. Character-oriented lossless compression methods such as CTW (Context Tree Weighting) [8], LZ77 [9], LZW [10], the Burrows-Wheeler transform, or PPM [11] cannot be used efficiently on physics data, as physics data do not share the characteristics of text, like the occurrence and repetition of characters. Other experiments have recently addressed this data compression issue [12], [13], but for smaller data rates.

With the increasing data rate, both the compression speed and the compression ratio have to be improved. In this paper, we propose a polynomial approach to compress integer data dominated by white noise in a shorter time than the classical methods, with a reasonable compression ratio. The paper focuses on both the compression ratio and the compression time, because the decompression time is typically shorter.

The paper is organized as follows. Section 2 explains the motivation. Section 3 describes our three polynomial compression methods. Section 4 reports the improvement obtained with our best polynomial compression method on given distributions and on CTA data [14]. Section 5 gives further details about compression quality. Section 6 presents concluding remarks and future plans.
2 Motivation

As the data volumes generated by current and upcoming experiments rapidly increase, the transfer and storage of data become an economical and technical issue. As an example, CTA, the next-generation ground-based gamma-ray observatory, will generate hundreds of PB of data by 2030. The CTA facility is based on two observing sites, one per hemisphere, and will be composed of more than one hundred telescopes in total. The photo-sensors equipping the telescopes' cameras will generate about two hundred PB/year of uncompressed raw data, which are then reduced on site, after data selection, to an off-site data yield of the order of one PB/year. The CTA pipeline thus requires both lossy and lossless compression: the amount of lossy compression should be minimized, while good data reading and writing speeds must also be ensured. The writing speed needs to be close to real time, since there is limited capacity on site to buffer such large data volumes. Furthermore, decompression speed is also an issue: the whole accumulated data set is expected to be reprocessed yearly, which means that the amount of data to be read from disk, decompressed, and processed will grow each year (e.g. 4 PB, 8 PB, 12 PB, ...).

In CTA, as in many other experiments, the data acquired by digitization can be described by two components: a Poissonian distribution representing the signal, dominated by a Gaussian-like distribution representing the noise, which is most commonly white noise. As the data shown in figure 1 illustrate, the noise generally dominates the searched signal significantly.

Fig. 1: Example of analog signal digitization in most physics experiments. In many cases the white noise (a Gaussian distribution) dominates the signal (generally a Poissonian distribution), so the biggest part of the data we want to compress follows a Gaussian distribution.

Fig. 2: Illustration of the reduction principle. The upper line represents the data (different colours for different values). In the second line, the orange blocks represent the changes between the different values to compress. The last line shows the compressed data as they are stored: first, the minimum value of the data; next, the base b = max − min + 1, which defines the set of data variations, Z/bZ; finally, the data variations themselves. Several values can be stored in the same unsigned int, and only the changes between the values are stored. Common parameters such as the range of the data (minimum and maximum, or compression base) are stored only once.
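As an illustration of this data model (a minimal sketch, not code from the paper; all parameter values are assumptions chosen for illustration), the following C++ fragment generates synthetic digitized samples in which a dominant Gaussian pedestal of white noise carries an occasional Poissonian signal contribution. Vectors produced this way span a narrow value range, which is precisely what the polynomial method of section 3 exploits.

    #include <cstddef>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Synthetic digitized samples: dominant Gaussian white noise plus a
    // small Poissonian signal, as in figure 1. Parameters are illustrative.
    std::vector<uint32_t> simulate_samples(std::size_t n, unsigned seed = 42) {
        std::mt19937 gen(seed);
        std::normal_distribution<double> noise(300.0, 20.0); // pedestal + white noise
        std::poisson_distribution<int>   signal(3);          // occasional signal counts
        std::vector<uint32_t> samples(n);
        for (auto& s : samples) {
            const double x = noise(gen) + signal(gen);
            s = x > 0.0 ? static_cast<uint32_t>(x) : 0u;     // digitization clamps at zero
        }
        return samples;
    }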
In this paper, we propose a compression algorithm optimised for experimental data with such characteristics: a Gaussian distribution added to a Poissonian one. Furthermore, in order to meet the time requirements and allow almost real-time execution, the proposed solution can also be combined with the most powerful known compression algorithms, such as LZMA, tremendously increasing their speed.

3 The polynomial compression

The unsigned int range, [0, 2^32), defines a mathematical set Z/dZ, called a ring, where d = 2^32. The digitized data also define a ring: in this case, the minimum is vmin and the maximum is vmax, so the corresponding ring is Z/bZ with b = vmax − vmin + 1. In many cases b < d, so it is possible to store several pieces of data in the same unsigned int (see figure 2). This compression can be done with a polynomial approach: the power of the base is given by the range of the values, which makes it possible to pack different values into the same integer and to compute them back.

Fig. 3: Illustration of the advanced reduction. The upper line represents the data (different colours for different values). In the second line, the orange blocks represent the changes between the different values to compress. The last line shows the compressed data.

Fig. 4: How the data of the vector v are stored in the packed vector. The first line gives the base used to store each value; the second line shows the variables used to store the values with respect to their base. To increase the compression ratio, the last base B is split into bases R and R′ in order to use the storage capacity of an unsigned int as much as possible.

3.1 Basic compression method

Consider a data vector v ∈ N^n of n elements; its minimum, vmin, and its maximum, vmax, define its associated ring. If the data ring is smaller than the unsigned int ring, it is possible to store several values in one unsigned int: the smaller the base, the higher the compression ratio. As the data lie in [vmin, vmax], the range between 0 and vmin is useless, so the data can be compressed by subtracting the minimum value, which yields a smaller base. The minimum can be stored once, before the compressed data. The compression base B is defined by B = vmax − vmin + 1; with this base we are able to store vmax − vmin + 1 different values. The compression ratio, p, is given by the number of base-B values that can be stored in one unsigned int.
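The excerpt breaks off before giving the closed form of p. Consistent with the definitions above, p is the largest integer with B^p ≤ 2^32, i.e. p = floor(log_B(2^32)) = floor(32 ln 2 / ln B). For example, B = 100 gives p = floor(4.81) = 4: four values fit in each unsigned int, for a compression ratio of 4. The following minimal C++ sketch illustrates the scheme; the names (Packed, pack, unpack) and storage layout are assumptions for illustration, not the authors' implementation, and the input is assumed to be a non-empty vector of 32-bit unsigned values.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Basic polynomial packing: several base-B digits per 32-bit word.
    struct Packed {
        uint32_t vmin;                // stored once, before the compressed data
        uint32_t base;                // B = vmax - vmin + 1
        std::vector<uint32_t> words;  // p values packed per word
    };

    // Largest p with base^p <= 2^32, computed exactly (no floating point).
    static std::size_t digits_per_word(uint32_t base) {
        std::size_t p = 0;
        uint64_t capacity = 1;
        while (capacity * base <= (1ULL << 32)) { capacity *= base; ++p; }
        return p;
    }

    Packed pack(const std::vector<uint32_t>& v) {  // assumes v is non-empty
        const uint32_t vmin = *std::min_element(v.begin(), v.end());
        const uint32_t vmax = *std::max_element(v.begin(), v.end());
        uint32_t B = vmax - vmin + 1;  // assumes vmax - vmin < 2^32 - 1 (else incompressible)
        if (B < 2) B = 2;              // constant data: avoid a degenerate base of 1
        const std::size_t p = digits_per_word(B);

        Packed out{vmin, B, {}};
        for (std::size_t i = 0; i < v.size(); i += p) {
            const std::size_t end = std::min(i + p, v.size());
            uint64_t word = 0;
            // Horner's scheme: word = d_i + B*(d_{i+1} + B*(d_{i+2} + ...)),
            // with digits d_j = v[j] - vmin taken in Z/BZ.
            for (std::size_t j = end; j-- > i;)
                word = word * B + (v[j] - vmin);
            out.words.push_back(static_cast<uint32_t>(word));
        }
        return out;
    }

    std::vector<uint32_t> unpack(const Packed& in, std::size_t n) {  // n = original count
        const std::size_t p = digits_per_word(in.base);
        std::vector<uint32_t> v;
        v.reserve(n);
        for (uint64_t word : in.words)
            for (std::size_t j = 0; j < p && v.size() < n; ++j) {
                v.push_back(in.vmin + static_cast<uint32_t>(word % in.base));
                word /= in.base;  // successive divisions recover the digits
            }
        return v;
    }

Computing p by repeated multiplication rather than with floating-point logarithms avoids rounding errors at base boundaries; in the extreme case B = 2, the scheme packs 32 one-bit values per word. Storing vmin and B once as a header matches the reduction principle of figure 2.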
Recommended publications
  • ROOT I/O Compression Improvements for HEP Analysis
    EPJ Web of Conferences 245, 02017 (2020). https://doi.org/10.1051/epjconf/202024502017. CHEP 2019.

    ROOT I/O compression improvements for HEP analysis. Oksana Shadura1, Brian Paul Bockelman2, Philippe Canal3, Danilo Piparo4 and Zhe Zhang1. 1 University of Nebraska-Lincoln, 1400 R St, Lincoln, NE 68588, United States; 2 Morgridge Institute for Research, 330 N Orchard St, Madison, WI 53715, United States; 3 Fermilab, Kirk Road and Pine St, Batavia, IL 60510, United States; 4 CERN, Meyrin 1211, Geneve, Switzerland.

    Abstract: We overview recent changes in the ROOT I/O system, enhancing it by improving its performance and interaction with other data analysis ecosystems. The newly introduced compression algorithms, the much faster bulk I/O data path, and a few additional techniques all have the potential to significantly improve experiments' software performance. The need for efficient lossless data compression has grown significantly as the amount of HEP data collected, transmitted, and stored has dramatically increased over the last couple of years. While compression reduces storage space and, potentially, I/O bandwidth usage, it should not be applied blindly, because there are significant trade-offs between the increased CPU cost for reading and writing files and the reduced storage space.

    1 Introduction: In the past years, Large Hadron Collider (LHC) experiments have been managing about an exabyte of storage for analysis purposes, approximately half of which is stored on tape for archival purposes, and half is used for traditional disk storage. Meanwhile, for the High-Luminosity Large Hadron Collider (HL-LHC), storage requirements per year are expected to increase by a factor of 10 [1].
  • Arxiv:2004.10531V1 [Cs.OH] 8 Apr 2020
    ROOT I/O compression improvements for HEP analysis. Oksana Shadura1, Brian Paul Bockelman2, Philippe Canal3, Danilo Piparo4 and Zhe Zhang1. 1 University of Nebraska-Lincoln, 1400 R St, Lincoln, NE 68588, United States; 2 Morgridge Institute for Research, 330 N Orchard St, Madison, WI 53715, United States; 3 Fermilab, Kirk Road and Pine St, Batavia, IL 60510, United States; 4 CERN, Meyrin 1211, Geneve, Switzerland. arXiv:2004.10531v1 [cs.OH] 8 Apr 2020.

    Abstract: We overview recent changes in the ROOT I/O system, increasing its performance and improving its interaction with other data analysis ecosystems. The newly introduced compression algorithms, the much faster bulk I/O data path, and a few additional techniques all have the potential to significantly improve experiments' software performance. The need for efficient lossless data compression has grown significantly as the amount of HEP data collected, transmitted, and stored has dramatically increased during the LHC era. While compression reduces storage space and, potentially, I/O bandwidth usage, it should not be applied blindly: there are significant trade-offs between the increased CPU cost for reading and writing files and the reduced storage space.

    1 Introduction: In the past years, the LHC experiments were commissioned and now manage about an exabyte of storage for analysis purposes, approximately half of which is used for archival purposes, and half is used for traditional disk storage. Meanwhile, for HL-LHC, storage requirements per year are expected to increase by a factor of 10 [1]. Looking at these predictions, we would like to state that storage will remain one of the major cost drivers and at the same time one of the bottlenecks for HEP computing.
  • Unfoldr Dstep
    Asymmetric Numeral Systems. Jeremy Gibbons, WG2.11 #19, Salem.

    1. Coding. Huffman coding (HC): efficient; optimally effective for bit-sequence-per-symbol coding. Arithmetic coding (AC): Shannon-optimal (fractional entropy), but computationally expensive. Asymmetric numeral systems (ANS): the efficiency of Huffman with the effectiveness of arithmetic coding; applications of streaming (another story). ANS was introduced by Jarek Duda (2006–2013). Now: Facebook (Zstandard), Apple (LZFSE), Google (Draco), Dropbox (DivANS).

    2. Intervals. Pairs of rationals, type Interval = (Rational, Rational), with operations

        unit            = (0, 1)
        weight (l, r) x = l + (r - l) * x
        narrow i (p, q) = (weight i p, weight i q)
        scale (l, r) x  = (x - l) / (r - l)
        widen i (p, q)  = (scale i p, scale i q)

    so that narrow and unit form a monoid, with the inverse relationships:

        weight i x ∈ i  ⟺  x ∈ unit
        weight i x = y  ⟺  scale i y = x
        narrow i j = k  ⟺  widen i k = j

    3. Models. Given counts :: [(Symbol, Integer)], derive encodeSym :: Symbol -> Interval and decodeSym :: Rational -> Symbol such that decodeSym x = s ⟺ x ∈ encodeSym s. E.g. the alphabet {'a', 'b', 'c'} with counts 2, 3, 5 is encoded as (0, 1/5), (1/5, 1/2), and (1/2, 1).

    4. Arithmetic coding.

        encode1 :: [Symbol] -> Rational
        encode1 = pick . foldl estep unit
          where
            estep :: Interval -> Symbol -> Interval
            estep i s = narrow i (encodeSym s)

        decode1 :: Rational -> [Symbol]
        decode1 = unfoldr dstep
          where
            dstep :: Rational -> Maybe (Symbol, Rational)
            dstep x = let s = decodeSym x in Just (s, scale (encodeSym s) x)

    where pick :: Interval -> Rational satisfies pick i ∈ i.
  • Compresso: Efficient Compression of Segmentation Data for Connectomics
    Compresso: Efficient Compression of Segmentation Data For Connectomics. Brian Matejek, Daniel Haehn, Fritz Lekschas, Michael Mitzenmacher, Hanspeter Pfister. Harvard University, Cambridge, MA 02138, USA.

    Abstract: Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries, unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600-2200x compression for label volumes, with running times suitable for practice. Keywords: compression, encoding, segmentation, connectomics.

    1 Introduction: Connectomics (reconstructing the wiring diagram of a mammalian brain at nanometer resolution) results in datasets at the scale of petabytes [21,8]. Machine learning methods find cell membranes and create cell body labelings for every neuron [18,12,14] (Fig. 1). These segmentations are stored as label volumes that are typically encoded in 32 bits or 64 bits per voxel to support labeling of millions of different nerve cells (neurons). Storing such data is expensive and transferring the data is slow. To cut costs and delays, we need compression methods to reduce data sizes.
  • Forcepoint DLP Supported File Formats and Size Limits
    Forcepoint DLP Supported File Formats and Size Limits | Forcepoint DLP | v8.8.1. This article provides a list of the file formats that can be analyzed by Forcepoint DLP, the file formats from which content and metadata can be extracted, and the file size limits for network, endpoint, and discovery functions. See: Supported File Formats; File Size Limits. © 2021 Forcepoint LLC

    The following tables list the file formats supported by Forcepoint DLP. File formats are in alphabetical order by format group:

    ● Archive Formats, page 3
    ● Backup Formats, page 7
    ● Business Intelligence (BI) and Analysis Formats, page 8
    ● Computer-Aided Design Formats, page 9
    ● Cryptography Formats, page 12
    ● Database Formats, page 14
    ● Desktop Publishing Formats, page 16
    ● eBook/Audiobook Formats, page 17
    ● Executable Formats, page 18
    ● Font Formats, page 20
    ● Graphics Formats - General, page 21
    ● Graphics Formats - Vector Graphics, page 26
    ● Library Formats, page 29
    ● Log Formats, page 30
    ● Mail Formats, page 31
    ● Multimedia Formats, page 32
    ● Object Formats, page 37
    ● Presentation Formats, page 38
    ● Project Management Formats, page 40
    ● Spreadsheet Formats, page 41
    ● Text and Markup Formats, page 43
    ● Word Processing Formats, page 45
    ● Miscellaneous Formats, page 53

    Supported file formats are added and updated frequently. Key to support tables: Y = the format is supported; N = the format is not supported; P = partial metadata
  • Compression: Putting the Squeeze on Storage
    Compression: Putting the Squeeze on Storage. Live Webcast, September 2, 2020, 11:00 am PT. © 2020 Storage Networking Industry Association. All Rights Reserved.

    Today's Presenters: Ilker Cebeli, Moderator, Samsung; John Kim, Chair, SNIA Networking Storage Forum, NVIDIA; Brian Will, Intel QuickAssist Technology Software Architect, Intel.

    [Slide titles only: SNIA-At-A-Glance; NSF Technologies.]

    SNIA Legal Notice: The material contained in this presentation is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced in their entirety without modification, and the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA. Neither the author nor the presenter is an attorney and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK.
  • RFC 8878: Zstandard Compression and the 'Application/Zstd'
    Stream: Internet Engineering Task Force (IETF). RFC: 8878. Obsoletes: 8478. Category: Informational. Published: February 2021. ISSN: 2070-1721. Authors: Y. Collet (Facebook), M. Kucherawy, Ed. (Facebook).

    RFC 8878: Zstandard Compression and the 'application/zstd' Media Type.

    Abstract: Zstandard, or "zstd" (pronounced "zee standard"), is a lossless data compression mechanism. This document describes the mechanism and registers a media type, content encoding, and a structured syntax suffix to be used when transporting zstd-compressed content via MIME. Despite use of the word "standard" as part of Zstandard, readers are advised that this document is not an Internet Standards Track specification; it is being published for informational purposes only. This document replaces and obsoletes RFC 8478.

    Status of This Memo: This document is not an Internet Standards Track specification; it is published for informational purposes. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are candidates for any level of Internet Standard; see Section 2 of RFC 7841. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc8878.

    Copyright Notice: Copyright (c) 2021 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document.
  • A Novel Coding Architecture for Lidar Point Cloud Sequence
    IEEE Robotics and Automation Letters (RAL) paper presented at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 25-29, 2020, Las Vegas, NV, USA (Virtual).

    A Novel Coding Architecture for LiDAR Point Cloud Sequence. Xuebin Sun1*, Sukai Wang2*, Miaohui Wang3, Zheng Wang4 and Ming Liu2, Senior Member, IEEE.

    Abstract: In this paper, we propose a novel coding architecture for LiDAR point cloud sequences based on clustering and prediction neural networks. LiDAR point clouds are structured, which provides an opportunity to convert the 3D data to a 2D array, represented as range images. Thus, we cast LiDAR point cloud compression as a range image coding problem. Inspired by the high efficiency video coding (HEVC) algorithm, we design a novel coding architecture for the point cloud sequence. The scans are divided into two categories: intra-frames and inter-frames. For intra-frames, a cluster-based intra-prediction technique is utilized to remove the spatial redundancy. For inter-frames, we design a prediction network model using convolutional LSTM cells, which is capable of predicting future inter-frames according to the encoded intra-frames.

    [From the introduction:] ... the point cloud data. However, these methods are unsuitable for unmanned vehicles. Traditional image or video encoding algorithms, such as JPEG2000, JPEG-LS [3], and HEVC [4], were designed mostly for encoding integer pixel values, and using them to encode floating-point LiDAR data will cause significant distortion. Furthermore, the range image is characterized by sharp edges and homogeneous regions with nearly constant values, which is quite different from textured video. Thus, coding the range image with traditional tools such as the block-based discrete cosine transform (DCT) followed by coarse quantization can result in significant coding errors at sharp edges, causing a safety hazard in
  • The Design, Implementation, and Deployment of a System
    The Design, Implementation, and Deployment of a System to Transparently Compress Hundreds of Petabytes of Image Files for a File-Storage Service. Daniel Reiter Horn, Ken Elkabany, and Chris Lesniewski-Laas, Dropbox; Keith Winstein, Stanford University. https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/horn

    This paper is included in the Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI '17), March 27–29, 2017, Boston, MA, USA. ISBN 978-1-931971-37-9. Open access to the Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation is sponsored by USENIX.

    Abstract: We report the design, implementation, and deployment of Lepton, a fault-tolerant system that losslessly compresses JPEG images to 77% of their original size on average. Lepton replaces the lowest layer of baseline JPEG compression (a Huffman code) with a parallelized arithmetic code, so that the exact bytes of the original JPEG file can be recovered quickly. Lepton matches the compression efficiency of the best prior work, while decoding more than nine times faster and in a streaming manner. Lepton has been released as open-source software and has been deployed for a year on the Dropbox file-storage backend.

    [Figure residue removed: a plot of decompression speed (Mbits/s, roughly 20-200, higher is better) comparing Lepton (this work) with JPEGrescan (progressive), MozJPEG (arithmetic), and packjpg.]
  • Massively-Parallel Lossless Data Decompression
    Massively-Parallel Lossless Data Decompression. Evangelia Sitaridi (Columbia University), Rene Mueller (IBM Almaden Research), Tim Kaldewey (IBM Watson), Guy Lohman (IBM Almaden Research) and Kenneth A. Ross (Columbia University).

    Abstract: Today's exponentially increasing data volumes and the high cost of storage make compression essential for the Big Data industry. Although research has concentrated on efficient compression, fast decompression is critical for analytics queries that repeatedly read compressed data. While decompression can be parallelized somewhat by assigning each data block to a different process, break-through speed-ups require exploiting the massive parallelism of modern multi-core processors and GPUs for data decompression within a block. We propose two new techniques to increase the degree of parallelism during decompression. The first technique exploits the massive parallelism of GPU and SIMD architectures. The second sacrifices some compression efficiency to eliminate data dependencies that limit parallelism during decompression. We evaluate these techniques on the decompressor of the DEFLATE scheme, called Inflate, which is based on LZ77 compression and Huffman encoding.

    [From the introduction:] ... returns caused by per-block overheads. In order to exploit the high degree of parallelism of GPUs, with potentially thousands of concurrent threads, our implementation needs to take advantage of both intra-block parallelism and inter-block parallelism. For intra-block parallelism, a group of GPU threads decompresses the same data block concurrently. Achieving this parallelism is challenging due to the inherent data dependencies among the threads that collaborate on decompressing that block. In this paper, we propose and evaluate two approaches to address this intra-block decompression challenge.
  • Zstandard Compression in Openzfs
    Zstandard Compression in OpenZFS. By Allan Jude.

    ZFS is a highly advanced filesystem with an integrated volume manager that was added to FreeBSD in 2007 and has since become a major part of the operating system. ZFS includes a transparent and adjustable compression feature that can seamlessly compress data before storing it and decompress it before returning it for the application's use. Because the compression is managed by ZFS, applications need not be aware of it. Filesystem compression not only saves space but, in many circumstances, can even lower read and write latency by reducing the total volume of data that needs to be stored or retrieved.

    Originally, ZFS supported a small number of compression algorithms: LZJB (an improved Lempel–Ziv variant created by Jeff Bonwick, one of the co-creators of ZFS; it is moderately fast but only offers low compression ratios), ZLE (Zero Length Encoding, which only compresses runs of zeros), and the nine levels of gzip (the familiar slow but moderately high-compression algorithm). Users could thus choose between no compression, fast but modest compression, or slow but higher compression. Unsurprisingly, these same users often went to great lengths to separate out data that should be compressed from data that was already compressed in order to avoid ZFS trying to re-compress it and wasting time to no benefit. For various historical reasons, compression still defaults to "off" in newly created ZFS storage pools. In 2013, ZFS added a new compression algorithm, LZ4, which offered both higher speed and better compression ratios than LZJB. (In 2015, LZ4 replaced LZJB as the default when users enable compression without specifying an algorithm.)
  • GROOT: a Real-Time Streaming System of High-Fidelity Volumetric Videos
    GROOT: A Real-time Streaming System of High-Fidelity Volumetric Videos. Kyungjin Lee (Seoul National University), Juheon Yi (Seoul National University), Youngki Lee (Seoul National University), Sunghyun Choi (Samsung Research), Young Min Kim (Seoul National University).

    ACM Reference Format: Kyungjin Lee, Juheon Yi, Youngki Lee, Sunghyun Choi, and Young Min Kim. 2020. GROOT: A Real-time Streaming System of High-Fidelity Volumetric Videos. In The 26th Annual International Conference on Mobile Computing and Networking (MobiCom '20), September 21–25, 2020, London, United Kingdom. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3372224.3419214

    Abstract: We present GROOT, a mobile volumetric video streaming system that delivers three-dimensional data to mobile devices for a fully immersive virtual and augmented reality experience. The system design for streaming volumetric videos should be fundamentally different from conventional 2D video streaming systems. First, the amount of data required to deliver the 3D volume is considerably larger than conventional videos with frames of 2D images, even compared to high-resolution 2D or 360° videos. Second, the 3D data representation, which encodes the surface of objects within the volume, is a sparse and unorganized data structure with varying scales, whereas a conventional video is composed of a sequence of

    1 Introduction: Volumetric video is an emerging medium that provides a highly immersive and interactive user experience. Different from 2D videos and 360° videos, volumetric video consists of 3D data, enabling users to watch the video with six degrees of freedom (6DoF).