Assessing the Overhead of Offloading Compression Tasks

Laura Promberger (CERN, Geneva, Switzerland; Heidelberg University, Heidelberg, Germany)
Rainer Schwemmer (CERN, Geneva, Switzerland)
Holger Fröning (Heidelberg University, Heidelberg, Germany)
[email protected], [email protected], [email protected]

ABSTRACT

Exploring compression is increasingly promising as a trade-off between computations and data movement. There are two main reasons: First, the gap between processing speed and I/O continues to grow, and technology trends indicate a continuation of this. Second, performance is determined by energy efficiency, and the overall power consumption is dominated by the consumption of data movements. For these reasons there is already a plethora of related works on compression from various domains. Most recently, a couple of accelerators have been introduced to offload compression tasks from the main processor, for instance by AHA, Intel and Microsoft. Yet, an understanding of the overhead of compression when offloading tasks is lacking. In particular, such offloading is most beneficial for overlap with other tasks, if the associated overhead on the main processor is negligible. This work evaluates the integration costs compared to a solely software-based solution, considering multiple compression algorithms. Among others, High Energy Physics data are used as a prime example of big data sources. The results imply that on average the zlib implementation on the accelerator achieves a compression ratio comparable to zlib level 2 on a CPU, while having up to 17 times the throughput and utilizing over 80 % less CPU resources. These results suggest that, given the right orchestration of compression and data movement tasks, the overhead of offloading compression is limited but present. Considering that compression is only a single task of a larger data processing pipeline, this overhead cannot be neglected.

CCS CONCEPTS

• Computer systems organization → Reconfigurable computing; • Information systems → Data compression.

KEYWORDS

Compression, Offloading, Accelerator Card, Overhead, Measurement

ACM Reference Format:
Laura Promberger, Rainer Schwemmer, and Holger Fröning. 2020. Assessing the Overhead of Offloading Compression Tasks. In 49th International Conference on Parallel Processing - ICPP: Workshops (ICPP Workshops '20), August 17–20, 2020, Edmonton, AB, Canada. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3409390.3409405

1 INTRODUCTION

The current computational landscape is dominated by increasingly costly data movements, both in terms of energy and time, while computations continuously decrease in these costs [10, 13]. Furthermore, the amount of data generated and processed, preferably as fast and interactively as possible, is growing dramatically. As a consequence, especially for big data applications, it becomes more and more promising to trade computations for communication in order to diminish the implications of data movements. Overall run time benefits should be possible, even though this might substantially increase the amount of computations. Of particular interest in this context is the recent introduction of dedicated accelerators for compression tasks, including accelerators provided by AHA [12], Intel QuickAssist [14] or Microsoft's Project Corsica [22].
Most published works about compression utilizing accelerators focus on the compression performance, including work on GPUs [4], SIMD in general [18, 25, 27], and FPGAs [7, 9, 17, 24]. The common part of all these works is the focus on improving compression performance, albeit for various reasons. However, the additional integration costs for the entire system are commonly neglected. Only if the overhead of orchestrating compression tasks on accelerators and the associated data movements is as small as possible can one truly speak of an off-loaded task. Otherwise, the host is not entirely free to perform other, hopefully overlapping, tasks.

Thus, the present work assesses these integration costs by evaluating the overhead associated with orchestrating offloaded compression tasks, as well as assessing the obtained compression throughput and ratio by comparing them to well-known general-purpose compression algorithms run solely on the host. The data sets used are a representative set of standard compression corpora, and additionally, as a prime example of big data, High Energy Physics (HEP) data sets from the European Organization for Nuclear Research (CERN). For these data sets the following will be assessed: compression time, ratio, overhead and power consumption of a compression accelerator.

In particular, this work makes the following contributions:

(1) Proposing and implementing a well-defined, methodical, multi-threaded benchmark (a minimal sketch of such a measurement loop follows this list);
(2) Analyzing the performance of the accelerator in terms of machine utilization, power consumption, throughput and compression ratio;
(3) Comparing the achieved results to multiple software-based, i.e. non-offloaded, compression algorithms.
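The following is a hypothetical sketch of such a multi-threaded measurement loop, not the authors' actual benchmark implementation: it uses zlib's compress2() as the host-side codec, and the thread count, compression level and iteration count are illustrative parameters. Each thread repeatedly compresses the same in-memory data set, and aggregate throughput and compression ratio are reported.

    // A hypothetical sketch, not the authors' benchmark implementation:
    // measure multi-threaded zlib compression throughput and ratio for one
    // in-memory data set. Build: g++ -O2 -std=c++17 bench.cpp -lz -pthread
    #include <zlib.h>

    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <fstream>
    #include <iterator>
    #include <thread>
    #include <vector>

    // Load the whole data set into memory so disk I/O does not distort the measurement.
    static std::vector<unsigned char> read_file(const char* path) {
        std::ifstream f(path, std::ios::binary);
        return {std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>()};
    }

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::fprintf(stderr, "usage: %s <file> [threads] [zlib-level]\n", argv[0]);
            return 1;
        }
        const std::vector<unsigned char> input = read_file(argv[1]);
        const int threads = argc > 2 ? std::atoi(argv[2]) : 1;
        const int level   = argc > 3 ? std::atoi(argv[3]) : 2;  // level 2: the software baseline quoted in the abstract
        const int iters   = 5;                                  // repetitions per thread to smooth out jitter

        std::vector<uLongf> outSize(threads, 0);
        const auto t0 = std::chrono::steady_clock::now();

        std::vector<std::thread> pool;
        for (int t = 0; t < threads; ++t) {
            pool.emplace_back([&input, &outSize, level, iters, t] {
                // Each thread owns its output buffer; compressBound() gives the worst-case size.
                std::vector<unsigned char> out(compressBound(input.size()));
                for (int i = 0; i < iters; ++i) {
                    uLongf destLen = out.size();
                    if (compress2(out.data(), &destLen, input.data(), input.size(), level) == Z_OK)
                        outSize[t] = destLen;  // identical every iteration for identical input
                }
            });
        }
        for (std::thread& th : pool) th.join();

        const double secs = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
        if (outSize[0] == 0) { std::fprintf(stderr, "compression failed\n"); return 1; }
        const double mbProcessed = static_cast<double>(input.size()) * iters * threads / 1e6;
        std::printf("throughput: %.1f MB/s, compression ratio: %.2f\n",
                    mbProcessed / secs, static_cast<double>(input.size()) / outSize[0]);
        return 0;
    }

Machine utilization and power consumption, which this work also reports, would require additional instrumentation (e.g. hardware performance counters or power meters) and are omitted from this sketch.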
Note that this work analyzes only compression and not decompression. Big data applications, like those utilized at CERN, often have I/O limitations and expectations, where a lowest throughput threshold must be passed, e.g. to fulfill real-time requirements. Compression - compared to decompression - is the throughput-limiting task and is therefore the one studied in this work. Notably, for all selected algorithms the decompression speed is at least the compression speed, if not significantly higher.

As a result, this work provides details for a better understanding of the offloading costs for compression tasks. The methodology in particular allows system designers to identify deployment opportunities given existing compute systems, data sources, and integration constraints.

2 RELATED WORK

To diminish the implications on (network) data movements and storage, massive utilization of compression can be found, among others, in communities working with databases, graph computations or computer graphics. For these communities a variety of works exists which addresses compression optimizations by utilizing hardware-related features like SIMD and GPGPUs [4, 18, 25, 27].

Moreover, a multitude of research has been conducted to efficiently use Field Programmable Gate Array (FPGA) technology, as FPGAs excel at integer computations. Works here range from developing FPGA-specific compression algorithms [9], over integrating FPGA compression hardware into existing systems [17, 24], to implementing well-known compression algorithms, like gzip, and comparing the achieved performance to other systems (CPU, GPGPU). For example, some work explored CPUs, FPGAs, and CPU-FPGA co-design for LZ77 acceleration [7], while others analyzed the hardware acceleration capabilities of the IBM PowerEN processor for zlib [16]. Regarding FPGAs, Abdelfattah et al. [1] and Qiao et al. [21] implemented their own deflate versions on an FPGA.

However, the large majority of those works focuses on the compression performance in terms of throughput and compression ratio, but not on the resulting integration costs for the entire system. Overall, little to no published work can be found covering the topic of system integration costs in the context of compression accelerators.

3 METHODOLOGY AND BENCHMARK LAYOUT

3.1 Server and Input Data

The server selected was an Intel Xeon E5-2630 v4; its technical specifications are listed in Table 1. For the input data, four standard compression corpora and three HEP data sets were selected. To be representative for big data, compression corpora of at least 200 MB were preferred:

• enwik9 [20] consists of about 1 GB of English Wikipedia text from 2006.
• proteins.200MB [8] consists of protein data from the Pizza & Chili corpus.
• The silesia corpus [2] consists of multiple different files, summing up to 203 MB, which were tarred into a single file.
• The calgary corpus [2] is one of the most famous compression corpora. Tarred into a single file, it only sums up to 3.1 MB. Therefore, it was copied multiple times to reach 150 MB and conform with the benchmark requirements (a sketch of this replication step is given below).

The HEP data came from three different CERN experiments (ATLAS [4.1 GB], ALICE [1 GB] and LHCb [4.9 GB]). At CERN, particles are accelerated in a vacuum very close to the speed of light. They are then collided and analyzed by the experiments. The data created consists of geographical locations where particles were registered. Due to the nature of quantum mechanics this process is close to a random number generator, so a high compression ratio cannot be expected. All data sets contain uncompressed data¹.

¹ Parts of the detectors already perform some compression, e.g. similar to the representation of sparse matrices in coordinate format.
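As referenced in the corpus list above, the calgary corpus was replicated to meet the benchmark's minimum input size. The following is a minimal, hypothetical sketch of that preparation step; the file names are illustrative assumptions, not part of the original work:

    // A hypothetical sketch of the calgary replication step; the input and
    // output file names are illustrative. Build: g++ -O2 -std=c++17 prepare.cpp
    #include <cstddef>
    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <vector>

    int main() {
        std::ifstream in("calgary.tar", std::ios::binary);
        const std::vector<char> corpus{std::istreambuf_iterator<char>(in),
                                       std::istreambuf_iterator<char>()};
        if (corpus.empty()) {
            std::fprintf(stderr, "calgary.tar not found or empty\n");
            return 1;
        }

        const std::size_t target = 150u * 1000u * 1000u;  // 150 MB benchmark minimum
        std::ofstream out("calgary_replicated.tar", std::ios::binary);

        // Append whole copies of the corpus until the size threshold is passed.
        std::size_t written = 0;
        while (written < target) {
            out.write(corpus.data(), static_cast<std::streamsize>(corpus.size()));
            written += corpus.size();
        }
        std::printf("wrote %zu bytes (%zu copies)\n", written, written / corpus.size());
        return 0;
    }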
