HDFS File Formats: Study and Performance Comparison

Total pages: 16 · File type: PDF · Size: 1020 KB

UNIVERSIDAD DE VALLADOLID
ESCUELA TÉCNICA SUPERIOR DE INGENIEROS DE TELECOMUNICACIÓN

Trabajo Fin de Máster (Master's Thesis)
Máster Universitario en Investigación en Tecnologías de la Información y las Comunicaciones

HDFS File Formats: Study and Performance Comparison

Author: D. Álvaro Alonso Isla
Supervisors: Dr. D. Miguel Ángel Martínez Prieto and Dr. D. Aníbal Bregón Bregón
Valladolid, June 29, 2018

TITLE: HDFS File Formats: Study and Performance Comparison
AUTHOR: D. Álvaro Alonso Isla
SUPERVISORS: Dr. D. Miguel Ángel Martínez Prieto, Dr. D. Aníbal Bregón Bregón
DEPARTMENT: Departamento de Informática

Examination board
PRESIDENT: Dr. D. Pablo de la Fuente Redondo
MEMBER: Dr. D. Alejandra Martínez Monés
SECRETARY: Dr. D. Guillermo Vega Gorgojo
DATE: June 29, 2018
GRADE:

Summary

The Hadoop distributed system is becoming more and more popular for storing and processing large amounts of data (Big Data). Since it is composed of many machines, its file system, called HDFS (Hadoop Distributed File System), is distributed as well. Because HDFS is a storage system unlike traditional ones, new file storage formats have been developed to fit its particular characteristics. In this work we study these new file formats, paying special attention to their distinguishing features, in order to be able to anticipate which of them will perform best given the needs of the data at hand. To reach this goal, we propose a theoretical comparison framework with which to easily recognize which formats fit our needs. We have also carried out an experimental study to reinforce that comparison and to draw conclusions about which formats are best in the cases considered. For this, two datasets with different characteristics were selected, together with a group of basic queries, which are executed as MapReduce jobs written in Java and through the Hive tool. The final goal of this work is to identify the different strengths and weaknesses of the storage formats.

Keywords: Big Data, Hadoop, HDFS, MapReduce, Storage Formats.

Abstract

The distributed system Hadoop has become very popular for storing and processing large amounts of data (Big Data). As it is composed of many machines, its file system, called HDFS (Hadoop Distributed File System), is also distributed. But since HDFS is not a traditional storage system, plenty of new file formats have been developed to take advantage of its features. In this work we study these new formats to find out their characteristics, so as to be able to decide which ones suit us best given the needs of our data. For that goal, we have built a theoretical framework with which to compare them and easily recognize which formats fit our needs. We have also carried out an experimental study of how the formats behave in specific situations, selecting two very different datasets and a set of simple queries, resolved with MapReduce jobs written in Java or run using the Hive tool. The final goal of this work is to identify the different strengths and weaknesses of the file formats.

Keywords: Big Data, Hadoop, HDFS, MapReduce, File Formats.
Acknowledgements

This work was made possible by the collaboration of Boeing Research & Technology Europe (BR&T-E) and is part of the AIRPORTS project, funded by CDTI and MINECO (FEDER), ref. IDI-20150616, CIEN 2015. I want to thank all the members of BR&T-E for letting us use their Hadoop cluster to run the tests in this project, and especially David Scarlatti, for providing me with all the information I needed about the characteristics of the system.

Special thanks go to my colleague Iván García, with whom I have shared long conversations about the progress we were making in our respective projects, which helps a great deal in getting through the bad moments. On more than one occasion I also had to ask him for advice on matters he knew better than I did.

I also want to thank my supervisors, Miguel Ángel Martínez and Aníbal Bregón, for devoting so much time to helping me and for following my progress so closely. Thanks to Miguel Ángel, who once again lent me one of his ideas to carry out and turn into a project, even though this time the result is quite different from what the initial idea proposed. And special thanks to Aníbal for his dedication: in the final stages of the work, when Miguel Ángel was not available, he made a superhuman effort to find time for my work and to advise and guide me, despite it being a topic outside his specialty.

Finally, and by no means least, thanks to my family and to my partner Ainoa, who have had to listen to my problems for so many hours without knowing what to answer, because they did not understand them. Thank you for all the support, and for those moments in which you helped me disconnect and rest.

Contents

1 Introduction
  1.1 Motivation
  1.2 Objectives and Results of the Work
  1.3 Methodology
  1.4 Document structure
2 Distributed file systems
  2.1 Basic concepts
  2.2 Characteristics
    2.2.1 Transparency
    2.2.2 Naming
    2.2.3 Scalability
    2.2.4 Remote Multiple Access
  2.3 Architectures
  2.4 Fault tolerance
3 Hadoop Distributed File System
  3.1 GFS
  3.2 HDFS Architecture
  3.3 HDFS Writing and Reading
  3.4 HDFS Blocks Robustness
    3.4.1 Replication
    3.4.2 Reading/writing Failure
    3.4.3 Namenode Failure
4 HDFS storage formats
  4.1 Text files
    4.1.1 CSV
    4.1.2 JSON
  4.2 SequenceFiles
  4.3 Avro
  4.4 RCFiles
  4.5 ORC
  4.6 Parquet
  4.7 Comparison
5 Experimental results
  5.1 Methodology
    5.1.1 MapReduce
    5.1.2 Datasets
    5.1.3 Considered Formats
    5.1.4 Queries
    5.1.5 Experimentation environment
    5.1.6 Used Parameters
  5.2 Results
    5.2.1 Opensky
    5.2.2 GitHub log
6 Conclusions
  6.1 Future work
I APPENDICES
A Opensky Schema
B GitHub log Schema

List of Figures

3.1 HDFS architecture
3.2 HDFS writing operation
3.3 HDFS reading operation
3.4 HDFS replication example
4.1 SequenceFile record compression
4.2 SequenceFile block compression
4.3 Avro structure
4.4 RCFile structure
4.5 ORC structure
4.6 Parquet schema example
4.7 Parquet definition level example
4.8 Parquet repetition level example
4.9 Parquet file structure
5.1 Opensky format sizes
5.2 Opensky query 1 performance
5.3 Opensky query 2 performance
5.4 Opensky query 3 performance
5.5 Opensky query 4 performance
5.6 Opensky query 5 performance
5.7 GitHub log format sizes
5.8 GitHub log query 1 performance
5.9 GitHub log query 2.1 performance
5.10 GitHub log query 2.2 performance
5.11 GitHub log query 3 performance
5.12 GitHub log query 4 performance
5.13 GitHub log query 5.1 performance
5.14 GitHub log query 5.2 performance

List of Tables

4.1 File formats comparison table
5.1 Opensky format sizes (GB)
5.2 Opensky query 1 results
5.3 Opensky query 2 results
5.4 Opensky query 3 results
5.5 Opensky query 4 results
5.6 Opensky query 5 results
5.7 GitHub log format sizes (GB)
5.8 GitHub log query 1 results
5.9 GitHub log query 2.1 results
5.10 GitHub log query 2.2 results
5.11 GitHub log query 3 results
5.12 GitHub log query 4 results
5.13 GitHub log query 5.1 results
5.14 GitHub log query 5.2 results

Chapter 1. Introduction

For several years now the Big Data concept has been growing more and more popular, but it is not something new. We can say that the concept reached maturity in 2013, when it was introduced as a new term in the Oxford English Dictionary (OED) [1], where it is defined as "Extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions". However, this is not a very accurate definition; it only refers to the volume of the data and some fields in which it can be applied.
Recommended publications
  • Schematic Entry
    Schematic Entry. Copyrights: software, documentation and related materials Copyright © 2002 Altium Limited. This software product is copyrighted and all rights are reserved. The distribution and sale of this product are intended for the use of the original purchaser only, per the terms of the License Agreement. This document may not, in whole or part, be copied, photocopied, reproduced, translated, reduced or transferred to any electronic medium or machine-readable form without prior consent in writing from Altium Limited. U.S. Government use, duplication or disclosure is subject to RESTRICTED RIGHTS under applicable government regulations pertaining to trade secret, commercial computer software developed at private expense, including FAR 227-14 subparagraph (g)(3)(i), Alternative III and DFAR 252.227-7013 subparagraph (c)(1)(ii). P-CAD is a registered trademark, and P-CAD Schematic, P-CAD Relay, P-CAD PCB, P-CAD ProRoute, P-CAD QuickRoute, P-CAD InterRoute, P-CAD InterRoute Gold, P-CAD Library Manager, P-CAD Library Executive, P-CAD Document Toolbox, P-CAD InterPlace, P-CAD Parametric Constraint Solver, P-CAD Signal Integrity, P-CAD Shape-Based Autorouter, P-CAD DesignFlow, P-CAD ViewCenter, Master Designer and Associate Designer are trademarks of Altium Limited. Other brand names are trademarks of their respective companies. Altium Limited, www.altium.com. Table of Contents: Chapter 1, Introducing P-CAD Schematic; P-CAD Schematic Features; About
  • File Formats for Big Data Storage Systems
    International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, Volume-9 Issue-1, October 2019. File Formats for Big Data Storage Systems. Samiya Khan, Mansaf Alam.
    Abstract: Big data is one of the most influential technologies of the modern era. However, in order to support the maturity of big data systems, the development and sustenance of heterogeneous environments is required. This, in turn, requires the integration of technologies as well as concepts. Computing and storage are the two core components of any big data system. With that said, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the facet of big data file formats into the picture. This paper classifies available big data file formats into five categories, namely text-based, row-based, column-based, in-memory and data storage services. It also compares the advantages, shortcomings and possible use cases of …
    … analysis or visualization. Hadoop allows the user to store data on its repository in several ways. Some of the commonly available and understandable working formats include XML [9], CSV [10] and JSON [11]. Although JSON, XML and CSV are human-readable formats, they are not the best way to store data on a Hadoop cluster. In fact, in some cases, storage of data in such raw formats may prove to be highly inefficient. Moreover, parallel storage is not possible for data stored in such formats. In view of the fact that storage efficiency and parallelism are the two leading advantages of using Hadoop, the use of raw file formats may just defeat the whole purpose.
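To make the raw-versus-efficient contrast in the entry above concrete, here is a minimal sketch (not taken from the paper) that writes the same toy records as human-readable JSON Lines and as columnar Parquet. It assumes the pyarrow library is available; the file names and the three-field schema are invented for illustration.

```python
import json
import pyarrow as pa
import pyarrow.parquet as pq

# Toy records standing in for a generic event dataset.
records = [{"id": i, "lang": "java", "stars": i * 3} for i in range(1000)]

# Row-oriented and human-readable: one JSON object per line.
with open("events.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")

# Column-oriented and binary: values are grouped by column and compressed.
table = pa.Table.from_pylist(records)
pq.write_table(table, "events.parquet", compression="snappy")
```

A row format keeps each record self-contained, which is convenient for interchange; the columnar layout is what enables the compression and column pruning that make formats like Parquet efficient on a Hadoop cluster.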
  • Research Article Improving I/O Efficiency in Hadoop-Based Massive Data Analysis Programs
    Hindawi Scientific Programming, Volume 2018, Article ID 2682085, 9 pages, https://doi.org/10.1155/2018/2682085. Research Article: Improving I/O Efficiency in Hadoop-Based Massive Data Analysis Programs. Kyong-Ha Lee (Research Data Hub Center, Korea Institute of Science and Technology Information, Daejeon, Republic of Korea), Woo Lam Kang (School of Computing, KAIST, Daejeon, Republic of Korea), and Young-Kyoon Suh (School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea). Correspondence should be addressed to Young-Kyoon Suh; [email protected]. Received 30 April 2018; Revised 24 October 2018; Accepted 6 November 2018; Published 2 December 2018. Academic Editor: Basilio B. Fraguela. Copyright © 2018 Kyong-Ha Lee et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Apache Hadoop has been a popular parallel processing tool in the era of big data. While practitioners have rewritten many conventional analysis algorithms to make them customized to Hadoop, the issue of inefficient I/O in Hadoop-based programs has been repeatedly reported in the literature. In this article, we address the problem of the I/O inefficiency in Hadoop-based massive data analysis by introducing our efficient modification of Hadoop. We first incorporate a columnar data layout into the conventional Hadoop framework, without any modification of the Hadoop internals. We also provide Hadoop with indexing capability to save a huge amount of I/O while processing not only selection predicates but also star-join queries that are often used in many analysis tasks.
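The I/O savings on selection predicates described above can be illustrated with predicate pushdown as exposed by pyarrow's dataset API; this is a stand-in sketch under that assumption, not the authors' modification of Hadoop. It reuses the hypothetical events.parquet file from the previous sketch.

```python
import pyarrow.dataset as ds

# Push the selection predicate down into the scan: row groups whose
# column statistics show they cannot match are skipped, so their
# bytes are never read from disk.
dataset = ds.dataset("events.parquet", format="parquet")
matching = dataset.to_table(filter=ds.field("stars") > 2500)
print(matching.num_rows)
```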
  • Lossless Compression of Internal Files in Parallel Reservoir Simulation
    Lossless Compression of Internal Files in Parallel Reservoir Simulation. Suha Kayum, Marcin Rogowski, Florian Mannuss. 9/26/2019.
    Outline: I/O challenges in reservoir simulation; evaluation of compression algorithms on reservoir simulation data; real-world application (constraints, algorithm, results); conclusions.
    Reservoir simulation: the largest fields in the world are represented as 50 million to 1 billion grid-block models. Each run takes hours on 500-5000 cores. Calibrating the model requires 100s of runs and sophisticated methods, and a "history matched" model is only a beginning.
    Files in reservoir simulation: internal files; input/output files (interact with pre- and post-processing tools); restart/checkpoint files.
    Reservoir simulation in Saudi Aramco: 100,000+ simulations annually; the largest simulation has 10 billion cells; currently multiple machines in the TOP500; petabytes of storage required. Resources are finite, and file compression is one solution.
    Compression algorithm evaluation. Compression ratio: tested a number of algorithms on a GRID restart file for two models, Model A (77.3 million active grid blocks, 15.6 GB) and Model K (8.7 million active grid blocks, 7.2 GB). The compression ratio ranges from 2.27 for snappy (Model A) up to 3.5 for bzip2 -9 (Model K). [Figure: compression ratios for lz4, snappy, gzip -1, gzip -9, bzip2 -1 and bzip2 -9 on Models A and K.]
    Compression speed: LZ4 and Snappy significantly outperformed other algorithms
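The ratio reported above is simply uncompressed size divided by compressed size. Below is a small sketch of the same measurement using only Python's standard gzip and bz2 modules (snappy and lz4 would need third-party packages, so they are omitted); the byte string is a synthetic stand-in, not simulator output.

```python
import bz2
import gzip

# Repetitive text compresses well; real restart files are binary.
data = b"pressure,saturation,temperature\n" * 100_000

for name, compress in [
    ("gzip -1", lambda d: gzip.compress(d, compresslevel=1)),
    ("gzip -9", lambda d: gzip.compress(d, compresslevel=9)),
    ("bzip2 -1", lambda d: bz2.compress(d, compresslevel=1)),
    ("bzip2 -9", lambda d: bz2.compress(d, compresslevel=9)),
]:
    print(f"{name}: ratio {len(data) / len(compress(data)):.2f}")
```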
  • Arrow: Integration to 'Apache' 'Arrow'
    Package ‘arrow’, September 5, 2021. Title: Integration to 'Apache' 'Arrow'. Version: 5.0.0.2. Description: 'Apache' 'Arrow' <https://arrow.apache.org/> is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware. This package provides an interface to the 'Arrow C++' library. Depends: R (>= 3.3). License: Apache License (>= 2.0). URL: https://github.com/apache/arrow/, https://arrow.apache.org/docs/r/. BugReports: https://issues.apache.org/jira/projects/ARROW/issues. Encoding: UTF-8. Language: en-US. SystemRequirements: C++11; for AWS S3 support on Linux, libcurl and openssl (optional). Biarch: true. Imports: assertthat, bit64 (>= 0.9-7), methods, purrr, R6, rlang, stats, tidyselect, utils, vctrs. RoxygenNote: 7.1.1.9001. VignetteBuilder: knitr. Suggests: decor, distro, dplyr, hms, knitr, lubridate, pkgload, reticulate, rmarkdown, stringi, stringr, testthat, tibble, withr. Collate: 'arrowExports.R' 'enums.R' 'arrow-package.R' 'type.R' 'array-data.R' 'arrow-datum.R' 'array.R' 'arrow-tabular.R' 'buffer.R' 'chunked-array.R' 'io.R' 'compression.R' 'scalar.R' 'compute.R' 'config.R' 'csv.R' 'dataset.R' 'dataset-factory.R' 'dataset-format.R' 'dataset-partition.R' 'dataset-scan.R' 'dataset-write.R' 'deprecated.R' 'dictionary.R' 'dplyr-arrange.R' 'dplyr-collect.R' 'dplyr-eval.R' 'dplyr-filter.R' 'expression.R' 'dplyr-functions.R' 'dplyr-group-by.R' 'dplyr-mutate.R' 'dplyr-select.R' 'dplyr-summarize.R'
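The R package above is a binding to the Arrow C++ library. To keep a single language across these notes, here is a rough equivalent in pyarrow, the Python binding to the same library, mirroring what read_csv_arrow() and write_parquet() do on the R side; the input file name is hypothetical.

```python
import pyarrow.csv as pv
import pyarrow.parquet as pq

# Read a CSV into Arrow's columnar in-memory format, then persist it
# as Parquet for efficient analytic scans later.
table = pv.read_csv("flights.csv")  # hypothetical input file
pq.write_table(table, "flights.parquet")
print(table.schema)
```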
  • Summer 2010 PPAXAXCENTURIONCENTURION Boston Police Patrolmen’S Association, Inc
    Boston Police Patrolmen's Association, Inc., 9-11 Shetland Street, Boston, Massachusetts 02119. PRST. STD. U.S. POSTAGE PAID, PERMIT NO. 2226, WORCESTER, MA. PAX CENTURION. Nation's First Police Department • Established 1854. Volume 40, Number 3 • Summer 2010. Boston Police Patrolmen's Association, Inc. / Boston Emergency Medical Technicians. NATIONAL ASSOCIATION OF POLICE ORGANIZATIONS.
    Flagwoman in South Boston at $53.00 per hour! Where's the Globe photographer? See the back and forth with the Globe's Scot Lehigh. See pages A10 & A11.
    Police Picket Patrick: thousands attend two-day picket; Gov. Patrick jeered, AZ Gov. Brewer cheered. By Jim Carnell, Pax Editor. Thousands of Massachusetts municipal police officers showed up to demonstrate at the National Governor's Association meeting held recently in Boston, hosted by our own little Lord Fauntleroy, Governor Deval Patrick. On Friday, July 9th, about three thousand officers appeared outside of Fenway Park to greet the Governors and their staffs at an event featuring our diminutive Governor. Governor Patrick has focused his (and his allies in the bought-and-sold local media) attention upon police officers in particular, attacking police officers' pay, benefits and
    A DISGRACE!!! City gives Woodman family, attorney $3 million settlement. By Jim Carnell, Pax Editor. In yet another discouraging, insulting slap at working police officers, the city recently gave the family of David Woodman and cop-hating Attorney Howard Friedman $3 million dollars, despite the fact that a formal lawsuit had not even been filed. Woodman died at the Beth Israel Hospital eleven days after his initial encounter with police following the Celtics' 2008 victory.
  • Parquet Data Format Performance
    Parquet data format performance. Jim Pivarski, Princeton University (DIANA-HEP), February 21, 2018.
    What is Parquet? A timeline of tabular and hierarchical data formats:
    1974 HBOOK (tabular, rowwise, FORTRAN): first ntuples in HEP
    1983 ZEBRA (hierarchical, rowwise, FORTRAN): event records in HEP
    1989 PAW CWN (tabular, columnar, FORTRAN): faster ntuples in HEP
    1995 ROOT (hierarchical, columnar, C++): object persistence in HEP
    2001 ProtoBuf (hierarchical, rowwise, many languages): Google's RPC protocol
    2002 MonetDB (tabular, columnar, database): "first" columnar database
    2005 C-Store (tabular, columnar, database): also early, became HP's Vertica
    2007 Thrift (hierarchical, rowwise, many languages): Facebook's RPC protocol
    2009 Avro (hierarchical, rowwise, many languages): Hadoop's object permanence and interchange format
    2010 Dremel (hierarchical, columnar, C++/Java): Google's nested-object database (closed source), became BigQuery
    2013 Parquet (hierarchical, columnar, many languages): open-source object persistence, based on Google's Dremel paper
    2016 Arrow (hierarchical, columnar, many languages): shared-memory object exchange
    Developed independently to do the same thing: the Google Dremel authors claimed to be unaware of any precedents, so this is an example of convergent evolution.
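A short sketch of what the columnar layout in the timeline above buys in practice: when a query touches one column, a Parquet reader fetches only that column's chunks from each row group. This assumes pyarrow and reuses the hypothetical events.parquet file from an earlier sketch.

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("events.parquet")
print(pf.metadata.num_row_groups, "row groups,", pf.metadata.num_rows, "rows")

# Only the 'stars' column chunks are read; 'id' and 'lang' stay on disk.
stars_only = pf.read(columns=["stars"])
print(stars_only.nbytes, "bytes materialized")
```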
  • Impala: a Modern, Open-Source SQL Engine for Hadoop
    Impala: A Modern, Open-Source SQL Engine for Hadoop. Marcel Kornacker, Alexander Behm, Victor Bittorf, Taras Bobrovytsky, Casey Ching, Alan Choi, Justin Erickson, Martin Grund, Daniel Hecht, Matthew Jacobs, Ishaan Joshi, Lenni Kuff, Dileep Kumar, Alex Leblang, Nong Li, Ippokratis Pandis, Henry Robinson, David Rorke, Silvius Rus, John Russell, Dimitris Tsirogiannis, Skye Wanderman-Milne, Michael Yoder. Cloudera, http://impala.io/.
    ABSTRACT: Cloudera Impala is a modern, open-source MPP SQL engine architected from the ground up for the Hadoop data processing environment. Impala provides low latency and high concurrency for BI/analytic read-mostly queries on Hadoop, not delivered by batch frameworks such as Apache Hive. This paper presents Impala from a user's perspective, gives an overview of its architecture and main components, and briefly demonstrates its superior performance compared against other popular SQL-on-Hadoop systems. Impala is the highest performing SQL-on-Hadoop system, especially under multi-user workloads. As Section 7 shows, for single-user queries, Impala is up to 13x faster than alternatives, and 6.7x faster on average.
    1. INTRODUCTION: Impala is an open-source, fully-integrated, state-of-the-art MPP SQL engine … that is on par or exceeds that of commercial MPP analytic DBMSs, depending on the particular workload. This paper discusses the services Impala provides to the user and then presents an overview of its architecture and main components. The highest performance that is achievable today requires using HDFS as the underlying storage manager, and therefore that is the focus of this paper; when there are notable differences in terms of how certain technical aspects are handled in conjunction with HBase, we note that in the text without going into detail.
  • The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on Iot Nodes in Smart Cities
    sensors. Article: The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on IoT Nodes in Smart Cities. Ammar Nasif *, Zulaiha Ali Othman and Nor Samsiah Sani. Center for Artificial Intelligence Technology (CAIT), Faculty of Information Science & Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia; [email protected] (Z.A.O.); [email protected] (N.S.S.). * Correspondence: [email protected].
    Abstract: Networking is crucial for smart city projects nowadays, as it offers an environment where people and things are connected. This paper presents a chronology of factors in the development of smart cities, including IoT technologies as network infrastructure. Increasing numbers of IoT nodes lead to increasing data flow, which is a potential source of failure for IoT networks. The biggest challenge of IoT networks is that the IoT may have insufficient memory to handle all transaction data within the IoT network. We aim in this paper to propose a potential compression method for reducing IoT network data traffic. Therefore, we investigate various lossless compression algorithms, such as entropy-based or dictionary-based algorithms, and general compression methods, to determine which algorithm or method adheres to the IoT specifications. Furthermore, this study conducts compression experiments using entropy coders (Huffman, adaptive Huffman) and dictionary coders (LZ77, LZ78), as well as five different types of IoT data-traffic datasets. Though the above algorithms can alleviate IoT data traffic, adaptive Huffman gave the best compression. Therefore, in this paper, we aim to propose a conceptual compression method for IoT data traffic by improving an adaptive
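As a reference point for the entropy coders evaluated above, here is a minimal static Huffman code construction in Python. Note that the paper favours the adaptive variant, which updates the code as symbols arrive; this sketch builds a fixed table from known frequencies, and the payload is an invented stand-in for IoT sensor traffic.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a static Huffman code table from the byte frequencies in data."""
    # Heap entries are (frequency, tiebreaker, [(symbol, code), ...]).
    heap = [(f, i, [(s, "")]) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing their codes with 0/1.
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return dict(heap[0][2])

payload = b"temperature=21.5;humidity=40;temperature=21.6;humidity=41;"
codes = huffman_codes(payload)
coded_bits = sum(len(codes[b]) for b in payload)
print(f"{len(payload) * 8} bits raw -> {coded_bits} bits Huffman-coded")
```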
  • Forcepoint DLP Supported File Formats and Size Limits
    Forcepoint DLP Supported File Formats and Size Limits. Supported File Formats and Size Limits | Forcepoint DLP | v8.8.1. This article provides a list of the file formats that can be analyzed by Forcepoint DLP, the file formats from which content and metadata can be extracted, and the file size limits for network, endpoint, and discovery functions. See: Supported File Formats; File Size Limits. © 2021 Forcepoint LLC.
    Supported File Formats: the following tables list the file formats supported by Forcepoint DLP, in alphabetical order by format group: Archive formats (page 3); Backup formats (page 7); Business Intelligence (BI) and Analysis formats (page 8); Computer-Aided Design formats (page 9); Cryptography formats (page 12); Database formats (page 14); Desktop publishing formats (page 16); eBook/Audio book formats (page 17); Executable formats (page 18); Font formats (page 20); Graphics formats - general (page 21); Graphics formats - vector graphics (page 26); Library formats (page 29); Log formats (page 30); Mail formats (page 31); Multimedia formats (page 32); Object formats (page 37); Presentation formats (page 38); Project management formats (page 40); Spreadsheet formats (page 41); Text and markup formats (page 43); Word processing formats (page 45); Miscellaneous formats (page 53). Supported file formats are added and updated frequently. Key to support tables: Y = the format is supported; N = the format is not supported; P = partial metadata
  • Pdfnews 10/04, Page 2 –
    Precursor PDF News 10:4.
    Adobe InDesign to Challenge Quark: At last week's Seybold conference in Boston, Adobe Systems finally took the wraps off its upcoming "Quark-killer" called InDesign. The new-from-the-ground-up page layout program is set for a summer debut. InDesign will be able to open QuarkXPress 3 and 4 documents and will even offer a set of Quark keyboard shortcuts. The beefy new program will also offer native Photoshop, Illustrator and PDF file format support. The PDF capabilities in particular will change workflows to PostScript 3 output devices. Pricing is expected to be $699 U.S. Find out more online at: http://www.adobe.com/prodindex/indesign/main.html
    PageMaker 6.5 Plus: Focus on Business: With the appearance of InDesign, Adobe PageMaker Plus has been targeted toward business customers, with increased integration with Microsoft Office products, more templates, and Photoshop 5.0 Limited Edition included in the box. Upgrade prices are $99 U.S. PageMaker Plus is expected to ship by the end of March. Find out more online at: http://www.adobe.com/prodindex/pagemaker/main.html
    Adobe Acrobat 4.0: Also at Seybold, Adobe announced that Acrobat 4.0 will ship in late March for $249 ($99 for upgrades). The new version offers Press Optimization for PDF file creation and provides the ability to view a document with the fonts it was created with or fonts native to the viewing machine. Greater post-file-creation editing capabilities are also built in. And, of course, the new version is 100% hooked in to PostScript 3.
  • Bicriteria Data Compression∗
    Bicriteria data compression∗. Andrea Farruggia, Paolo Ferragina, Antonio Frangioni, and Rossano Venturini. Dipartimento di Informatica, Università di Pisa, Italy. ffarruggi, ferragina, frangio, [email protected]
    Abstract: In this paper we address the problem of trading optimally, and in a principled way, the compressed size/decompression time of LZ77 parsings by introducing what we call the Bicriteria LZ77-Parsing problem. The goal is to determine an LZ77 parsing which minimizes the space occupancy in bits of the compressed file, provided that the decompression time is bounded by T. Symmetrically, we can exchange the role of the two resources and thus ask for minimizing the decompression time provided that the compressed space is bounded by a fixed amount given in advance. We address this goal in three stages: (i) we introduce the novel Bicriteria LZ77-Parsing problem which formalizes in a principled way what data-compressors …
    … problem, named "compress once, decompress many times", that can be cast into two main families: the compressors based on the Burrows-Wheeler Transform [6], and the ones based on the Lempel-Ziv parsing scheme [35, 36]. Compressors are known in both families that require time linear in the input size, both for compressing and decompressing the data, and take compressed space which can be bound in terms of the k-th order empirical entropy of the input [25, 35]. But the compressors running behind those large-scale storage systems are not derived from those scientific results. The reason lies in the fact that theoretically efficient compressors are optimal in the RAM model, but they elicit many cache/IO misses during the decompression step.
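For contrast with the bicriteria formulation above, here is a minimal greedy LZ77 parser in Python: at each position it takes the longest match within a sliding window. Greedy parsing is the classical baseline; the Bicriteria LZ77-Parsing problem searches the space of alternative parsings like this one for the best space/decompression-time trade-off. Window size and minimum match length are arbitrary choices here.

```python
def lz77_greedy(data: bytes, window: int = 4096, min_len: int = 3):
    """Greedy LZ77 parse: emit ('copy', distance, length) phrases or literals."""
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            # Overlapping matches are allowed, as in classical LZ77.
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_dist = l, i - j
        if best_len >= min_len:
            out.append(("copy", best_dist, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

print(lz77_greedy(b"abracadabra abracadabra"))
```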