Introduction to Loading Data | BigQuery | Google Cloud

Introduction to loading data

This page provides an overview of loading data into BigQuery.

Overview

There are many situations where you can query data without loading it (#alternatives_to_loading_data). For all other situations, you must first load your data into BigQuery before you can run queries.

To load data into BigQuery, you can:

- Load a set of data records from Cloud Storage (/bigquery/docs/loading-data-cloud-storage) or from a local file (/bigquery/docs/loading-data-local). The records can be in Avro, CSV, JSON (newline delimited only), ORC, or Parquet format. (A minimal example appears at the end of this overview.)
- Export data from Datastore (/datastore/docs) or Firestore (/firestore/docs) and load the exported data into BigQuery.
- Load data from other Google services (#loading_data_from_other_google_services), such as Google Ad Manager and Google Ads.
- Stream data one record at a time using streaming inserts (/bigquery/streaming-data-into-bigquery).
- Write data from a Dataflow pipeline to BigQuery.
- Use DML (/bigquery/docs/reference/standard-sql/data-manipulation-language) statements to perform bulk inserts. Note that BigQuery charges for DML queries. See Data Manipulation Language pricing (/bigquery/pricing#dml).

Loading data into BigQuery from Drive is not currently supported, but you can query data in Drive by using an external table (/bigquery/external-data-drive).

You can load data into a new table or partition, append data to an existing table or partition, or overwrite a table or partition. For more information about working with partitions, see Managing partitioned tables (/bigquery/docs/managing-partitioned-tables).

When your data is loaded into BigQuery, it is converted into columnar format for Capacitor (/blog/big-data/2016/04/inside-capacitor-bigquerys-next-generation-columnar-storage-format), BigQuery's storage format.
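As a concrete illustration of the first option above, the following is a minimal sketch that loads a CSV file from Cloud Storage using the google-cloud-bigquery Python client library. The project, dataset, table, and bucket names are placeholders, and the configuration shown (schema auto-detection, appending to an existing table) is only one of several possibilities.

    # Minimal sketch: load a CSV object from Cloud Storage into a BigQuery table.
    # All names below are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()

    table_id = "my-project.my_dataset.my_table"   # destination table
    uri = "gs://my-bucket/data/records.csv"       # source object in Cloud Storage

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,   # skip the CSV header row
        autodetect=True,       # let BigQuery infer the schema (schema auto-detection)
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,  # append to an existing table or partition
    )

    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()  # wait for the load job to finish

    print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}.")

The same pattern applies to the other supported formats: switching source_format to NEWLINE_DELIMITED_JSON, AVRO, ORC, or PARQUET changes how the source files are parsed, and load_table_from_file can be used instead of load_table_from_uri when loading from a local file.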
Limitations

Loading data into BigQuery is subject to some limitations, depending on the location and format of the source data:

- Limitations on loading local files (/bigquery/docs/loading-data-local#limitations)
- Limitations on loading data from Cloud Storage (/bigquery/docs/loading-data-cloud-storage#limitations)
- CSV limitations (/bigquery/docs/loading-data-cloud-storage-csv#limitations)
- JSON limitations (/bigquery/docs/loading-data-cloud-storage-json#limitations)
- Datastore export limitations (/bigquery/docs/loading-data-cloud-datastore#limitations)
- Firestore export limitations (/bigquery/docs/loading-data-cloud-firestore#limitations)
- Limitations on nested and repeated data (/bigquery/docs/nested-repeated#limitations)

Choosing a data ingestion format

When you are loading data, choose a data ingestion format based upon the following factors:

- Schema support. Avro, ORC, Parquet, Datastore exports, and Firestore exports are self-describing formats. BigQuery creates the table schema automatically based on the source data. For JSON and CSV data, you can provide an explicit schema, or you can use schema auto-detection (/bigquery/docs/schema-detect).
- Flat data or nested and repeated fields. Avro, CSV, JSON, ORC, and Parquet all support flat data. Avro, JSON, ORC, Parquet, Datastore exports, and Firestore exports also support data with nested and repeated fields. Nested and repeated data is useful for expressing hierarchical data. Nested and repeated fields also reduce duplication when denormalizing the data (#loading_denormalized_nested_and_repeated_data).
- Embedded newlines. When you are loading data from JSON files, the rows must be newline delimited. BigQuery expects newline-delimited JSON files to contain a single record per line.
- Encoding. BigQuery supports UTF-8 encoding for both nested or repeated and flat data. BigQuery supports ISO-8859-1 encoding for flat data only, and only for CSV files.
- External limitations. Your data might come from a document store database that natively stores data in JSON format. Or, your data might come from a source that only exports in CSV format.

Loading compressed and uncompressed data

The Avro binary format is the preferred format for loading both compressed and uncompressed data. Avro data is faster to load because the data can be read in parallel, even when the data blocks are compressed. Compressed Avro files are not supported, but compressed data blocks are. BigQuery supports the DEFLATE and Snappy codecs for compressed data blocks in Avro files.

The Parquet binary format is also a good choice because Parquet's efficient, per-column encoding typically results in a better compression ratio and smaller files. Parquet files also leverage compression techniques that allow files to be loaded in parallel. Compressed Parquet files are not supported, but compressed data blocks are. BigQuery supports the Snappy, GZip, and LZO_1X codecs for compressed data blocks in Parquet files.

The ORC binary format offers benefits similar to those of the Parquet format. Data in ORC files is fast to load because data stripes can be read in parallel. The rows in each data stripe are loaded sequentially. To optimize load time, use a data stripe size of approximately 256 MB or less. Compressed ORC files are not supported, but compressed file footers and stripes are. BigQuery supports Zlib, Snappy, LZO, and LZ4 compression for ORC file footers and stripes.

For other data formats such as CSV and JSON, BigQuery can load uncompressed files significantly faster than compressed files because uncompressed files can be read in parallel. Because uncompressed files are larger, using them can lead to bandwidth limitations and higher Cloud Storage costs for data staged in Cloud Storage prior to being loaded into BigQuery. Keep in mind that line ordering isn't guaranteed for compressed or uncompressed files. It's important to weigh these tradeoffs depending on your use case.

In general, if bandwidth is limited, compress your CSV and JSON files by using gzip before uploading them to Cloud Storage. Currently, when you load data into BigQuery, gzip is the only supported file compression type for CSV and JSON files. If loading speed is important to your app and you have a lot of bandwidth to load your data, leave your files uncompressed.
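To make the recommendation above concrete, here is a minimal sketch of loading Avro files from Cloud Storage with the same Python client as in the earlier example. Because Avro is self-describing, no schema or auto-detection setting is supplied, and DEFLATE- or Snappy-compressed data blocks inside the files are handled by the load job itself. The URI and table name are placeholders.

    # Minimal sketch: load Avro files from Cloud Storage.
    # Avro carries its own schema, and compressed data blocks (DEFLATE/Snappy)
    # inside the files are supported, so no extra configuration is required.
    from google.cloud import bigquery

    client = bigquery.Client()

    table_id = "my-project.my_dataset.events"     # placeholder destination table
    uri = "gs://my-bucket/exports/events-*.avro"  # placeholder; the wildcard matches multiple files

    job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.AVRO)

    client.load_table_from_uri(uri, table_id, job_config=job_config).result()

For gzip-compressed CSV or JSON staged in Cloud Storage, the same call is used with the corresponding source_format; the trade-off, as noted above, is that compressed CSV and JSON files cannot be read in parallel.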
Loading denormalized, nested, and repeated data

Many developers are accustomed to working with relational databases and normalized data schemas (https://en.wikipedia.org/wiki/Database_normalization). Normalization prevents duplicate data from being stored, and provides consistency when regular updates are made to the data. BigQuery performs best when your data is denormalized. Rather than preserving a relational schema, such as a star or snowflake schema, you can improve performance by denormalizing your data and taking advantage of nested and repeated fields. Nested and repeated fields can maintain relationships without the performance impact of preserving a relational (normalized) schema.

The storage savings from using normalized data have less of an effect in modern systems. Increases in storage costs are worth the performance gains of using denormalized data. Joins require data coordination (communication bandwidth). Denormalization localizes the data to individual slots (/bigquery/docs/slots), so that execution can be done in parallel.

To maintain relationships while denormalizing your data, you can use nested and repeated fields instead of completely flattening your data. When relational data is completely flattened, network communication (shuffling) can negatively impact query performance. For example, denormalizing an orders schema without using nested and repeated fields might require you to group the data by a field like order_id (when there is a one-to-many relationship). Because of the shuffling involved, grouping the data is less effective than denormalizing the data by using nested and repeated fields.

In some circumstances, denormalizing your data and using nested and repeated fields doesn't result in increased performance. Avoid denormalization in these use cases:

- You have a star schema with frequently changing dimensions.
- BigQuery complements an Online Transaction Processing (OLTP) system with row-level mutation but can't replace it.

Nested and repeated fields are supported in the following data formats:

- Avro
- JSON (newline delimited)
- ORC
- Parquet
- Datastore exports
- Firestore exports

For information about specifying nested and repeated fields in your schema when you're loading data, see Specifying nested and repeated fields (/bigquery/docs/nested-repeated). A minimal schema sketch appears at the end of this page.

Loading data from other Google services

BigQuery Data Transfer Service

The BigQuery Data Transfer Service (/bigquery-transfer/docs/transfer-service-overview) automates loading data into BigQuery from these services:

Google Software as a Service (SaaS) apps

- Campaign Manager (/bigquery-transfer/docs/doubleclick-campaign-transfer)
- Cloud Storage (/bigquery-transfer/docs/cloud-storage-transfer)
- Google Ad Manager (/bigquery-transfer/docs/doubleclick-publisher-transfer)
- Google Ads (/bigquery-transfer/docs/adwords-transfer)
- Google Merchant Center (/bigquery-transfer/docs/merchant-center-transfer) (beta)
- Google Play (/bigquery-transfer/docs/play-transfer)
- Search Ads 360 (/bigquery-transfer/docs/sa360-transfer) (beta)
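As referenced above in the discussion of nested and repeated fields, the following minimal sketch creates a denormalized orders table whose line items are stored as a repeated RECORD field, again using the google-cloud-bigquery Python client. The project, dataset, table, and field names are illustrative only.

    # Minimal sketch: an orders table where line items live in a nested,
    # repeated RECORD field instead of a separate normalized table.
    from google.cloud import bigquery

    client = bigquery.Client()

    schema = [
        bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("customer", "STRING"),
        bigquery.SchemaField(
            "line_items", "RECORD", mode="REPEATED",  # one row can hold many items
            fields=[
                bigquery.SchemaField("sku", "STRING"),
                bigquery.SchemaField("quantity", "INTEGER"),
                bigquery.SchemaField("price", "NUMERIC"),
            ],
        ),
    ]

    table = bigquery.Table("my-project.my_dataset.orders", schema=schema)  # placeholder table ID
    client.create_table(table)

A newline-delimited JSON record loaded into this table can carry the hierarchy directly, for example: {"order_id": "A1", "customer": "alice", "line_items": [{"sku": "X", "quantity": 2, "price": 9.99}]}. Grouping by order then happens within each row, avoiding the shuffling that a completely flattened, normalized layout would require.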