Data Compression Technique on Hadoop's Next Generation


INTERNATIONAL JOURNAL OF SCIENTIFIC PROGRESS AND RESEARCH (IJSPR), ISSN: 2349-4689, Issue 92, Volume 33, Number 01, 2017

Data Compression Technique on Hadoop's Next Generation: YARN

Vanisha Mavi (1), Nidhi Tyagi (2)
1 M.Tech student, CSE, AKTU Lucknow
2 Professor, MIET, Meerut

Abstract - Data compression is a set of encoding rules that permits a considerable reduction in the total number of bits needed to store or transmit a document [1]. A complete range of different data compression techniques is available, working both online and offline, so it becomes genuinely difficult to choose which technique serves best. In this paper we present a number of techniques to compress and decompress text data and summarize the design, development, and current state of deployment of Hadoop. The paper also gives an overview of YARN concepts, related issues and challenges, tools and techniques.

Keywords - Big Data, YARN, Data compression.

1.1 INTRODUCTION

Big data is described as a collection of structured, unstructured and semi-structured data [1].

2.1 Hadoop

Hadoop is a distributed system infrastructure researched and developed by the Apache Foundation [2]. Users can develop distributed applications without knowing the lower distributed layers, so they can make full use of the power of the cluster to perform high-speed computing and storage. The core technologies of Hadoop are the Hadoop Distributed File System (HDFS) and MapReduce.

3.1 Compression

Compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation [3]. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost in lossless compression. Lossy compression reduces bits by identifying unnecessary information and removing it. The process of reducing the size of a data file is referred to as data compression [4]. In the context of data transmission, it is called source coding (encoding done at the source of the data before it is stored or transmitted), in opposition to channel coding.

Figure 1:- Types of compression (compression is divided into lossy and lossless).

3.2 What to compress?

Compressing input files: If the input file is compressed, the number of bytes read in from HDFS is reduced, which means less time to read data. This time saving benefits the performance of job execution. Compressed input files are decompressed automatically as they are read by MapReduce, using the filename extension to determine which codec to use. For example, a file ending in .gz is identified as a gzip-compressed file and is therefore read with GzipCodec.

Compressing output files: Often we need to store the output as history files. If the amount of output per day is extensive, and we often need to store history results for future use, these accumulated results will take an extensive amount of HDFS space. However, these history files may not be used very frequently, resulting in a waste of HDFS space. It is therefore worthwhile to compress the output before storing it on HDFS.

Compressing map output: Even if your MapReduce application reads and writes uncompressed data, it may benefit from compressing the intermediate output of the map phase. Since the map output is written to disk and transferred across the network to the reducer nodes, using a fast compressor such as LZO or Snappy can yield performance gains simply because the volume of data to transfer is reduced.

3.3 Common input format

Compression format   Tool    Algorithm   File extension   Splittable
gzip                 gzip    DEFLATE     .gz              no
bzip2                bzip2   bzip2       .bz2             yes
LZO                  lzop    LZO         .lzo             yes, if indexed
Snappy               n/a     Snappy      .snappy          no

Table 1:- Common input formats

3.4 Choosing a Data Compression Format

Whether to compress your data, and which compression formats to use, can have a significant impact on performance. The two most important places to consider data compression are MapReduce jobs and data stored in HBase; for the most part, the principles are similar for each [5].

You need to balance the processing capacity required to compress and uncompress the data, the disk I/O required to read and write the data, and the network bandwidth required to send the data across the network. The correct balance of these factors depends upon the characteristics of your cluster and your data, as well as your usage patterns.

Compression is not recommended if your data is already compressed (such as images in JPEG format). In fact, the resulting file can actually be larger than the original.

GZIP compression uses more CPU resources than Snappy or LZO, but provides a higher compression ratio. GZip is often a good choice for cold data, which is accessed infrequently. Snappy or LZO are a better choice for hot data, which is accessed frequently.

BZip2 can also produce more compression than GZip for some types of files, at the cost of some speed when compressing and decompressing. HBase does not support BZip2 compression.

Snappy often performs better than LZO. It is worth running tests to see whether you detect a significant difference.

For MapReduce, if you need your compressed data to be splittable, the BZip2, LZO, and Snappy formats are splittable, but GZip is not. Splittability is not relevant to HBase data.

For MapReduce, you can compress the intermediate data, the output, or both; adjust the parameters you provide for the MapReduce job accordingly. The following examples compress both the intermediate data and the output. MRv2 [6] is shown first, followed by MRv1 [7].

MRv2 coding:

hadoop jar hadoop-examples-.jar sort
    "-Dmapreduce.compress.map.output=true"
    "-Dmapreduce.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec"
    "-Dmapreduce.output.compress=true"
    "-Dmapreduce.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec"
    -outKey org.apache.hadoop.io.Text
    -outValue org.apache.hadoop.io.Text
    input output

MRv1 coding:

hadoop jar hadoop-examples-.jar sort
    "-Dmapred.compress.map.output=true"
    "-Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec"
    "-Dmapred.output.compress=true"
    "-Dmapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec"
    -outKey org.apache.hadoop.io.Text
    -outValue org.apache.hadoop.io.Text
    input output

4.1 YARN

YARN (Yet Another Resource Negotiator) is a key element of the Hadoop data-processing architecture that provides different data-handling mechanisms, including interactive SQL (Structured Query Language) and batch processing. In YARN, the Resource Manager is the resource distributor, while the Application Master is responsible for communication with the Resource Manager and cooperates with the NodeManager to complete tasks [8]. YARN can be considered the operating system of the Hadoop ecosystem. It improves the performance of data processing in Hadoop by separating the resource-management and scheduling capabilities of MapReduce from its data-processing components. YARN combines a central resource manager, which reconciles the way applications use Hadoop system resources, with node manager agents that monitor the processing operations of individual cluster nodes. Separating HDFS from MapReduce with YARN makes the Hadoop environment more suitable for operational applications that cannot wait for batch jobs to finish.

Given the limitations of MapReduce, the main purpose of YARN is to divide the tasks of the JobTracker. In YARN, resources are managed by the Resource Manager and jobs are traced by the Application Master. The TaskTracker has become the NodeManager. Hence, the global Resource Manager and the local NodeManagers compose the data-computing framework. In YARN, the Resource Manager is the resource distributor, while the Application Master is responsible for communication with the Resource Manager and cooperates with the NodeManager to complete the tasks [9].

4.2 YARN architecture

Compared with the old MapReduce architecture, it is easy to see that YARN is more structured and simple. The following section introduces the YARN architecture. There are four core components of the YARN architecture [10]:

ResourceManager

According to the different functions of the ResourceManager, its designers have divided it into two lower-level components: the Scheduler and the Application Manager. On the one hand, the Scheduler assigns resources to the different running applications based on the cluster size, queues, and resource constraints. The Scheduler is responsible only for resource allocation; it is not responsible for monitoring application execution or task failure. On the other hand, the Application Manager is in charge of receiving jobs and redistributing the containers for failed objects.

Application Master

The Application Master cooperates with the NodeManager to put tasks in suitable containers, run the tasks, and monitor them. When a container has errors, the Application Master applies to the Scheduler for another resource to continue the process.

Container

In YARN, the Container is the resource unit into which an available node splits its organization's resources. Instead of the Map and Reduce resource pools in MapReduce, the Application Master can apply for any number of Containers. Because all Containers have the same properties, they can be exchanged during task execution to improve efficiency.

4.3 Difference between MapReduce and YARN (Yet Another Resource Negotiator)

The drawbacks of MapReduce become the benefits of YARN. Here are the drawbacks of MapReduce: the MapReduce model is unable to process streaming data and in-memory interface operations. Some of its limitations are as follows:

SCALABILITY

AVAILABILITY
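The lossless-compression property described in Section 3.1 — statistical redundancy is removed and no information is lost — can be sketched with Python's standard zlib module, which implements DEFLATE, the same algorithm Table 1 lists for gzip:

```python
import zlib

# Highly redundant text compresses well; a lossless round-trip recovers it exactly.
original = b"hadoop yarn mapreduce " * 1000

compressed = zlib.compress(original, 9)   # DEFLATE at maximum compression level
restored = zlib.decompress(compressed)

assert restored == original               # lossless: no information is lost
print(len(original), "->", len(compressed), "bytes")
```

On this repetitive input the compressed size is a small fraction of the original, while a JPEG image fed through the same call would barely shrink (or grow), matching the advice in Section 3.4.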
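Section 3.2 notes that MapReduce picks a codec from the filename extension (.gz implies GzipCodec). A minimal Python sketch of that dispatch idea — this is an illustrative stand-in, not Hadoop's actual CompressionCodecFactory, and `open_maybe_compressed` is a hypothetical helper name:

```python
import bz2
import gzip
import lzma
import os

# Map filename extensions to opener functions, the way MapReduce maps
# extensions to codecs (e.g. .gz -> GzipCodec).
CODECS = {
    ".gz": gzip.open,
    ".bz2": bz2.open,
    ".xz": lzma.open,
}

def open_maybe_compressed(path):
    """Open a file for reading, decompressing transparently by extension."""
    _, ext = os.path.splitext(path)
    opener = CODECS.get(ext, open)  # unknown extension: treat as plain text
    return opener(path, "rb")
```

A caller can then read `part-00000.gz` and `part-00001` with the same code path, which is exactly the convenience the automatic decompression in MapReduce provides.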
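The CPU-versus-ratio trade-off from Section 3.4 (bzip2 compresses harder than gzip but more slowly) can be measured directly on any text-like input; results are machine- and data-dependent, so this is a benchmark sketch rather than a general claim:

```python
import bz2
import gzip
import time

# Any text-like corpus; repetition stands in for real log or history files.
data = b"the quick brown fox jumps over the lazy dog\n" * 5000

for name, compress in [("gzip", lambda d: gzip.compress(d, 9)),
                       ("bzip2", lambda d: bz2.compress(d, 9))]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data)} -> {len(out)} bytes in {elapsed * 1000:.1f} ms")
```

Running this on representative samples of your own data, as the paper suggests for Snappy versus LZO, is the reliable way to pick a codec for a given cluster.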
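The MRv2 and MRv1 command lines in Section 3.4 differ only in the property-name prefix (mapreduce.* versus mapred.*). A small sketch that assembles the -D generic options programmatically, using the property names exactly as this paper lists them (`compression_args` is a hypothetical helper, not part of any Hadoop API):

```python
GZIP_CODEC = "org.apache.hadoop.io.compress.GzipCodec"

def compression_args(mrv2=True):
    """Build the -D options that enable gzip for map output and job output."""
    prefix = "mapreduce" if mrv2 else "mapred"
    props = {
        f"{prefix}.compress.map.output": "true",
        f"{prefix}.map.output.compression.codec": GZIP_CODEC,
        f"{prefix}.output.compress": "true",
        f"{prefix}.output.compression.codec": GZIP_CODEC,
    }
    return [f"-D{key}={value}" for key, value in props.items()]
```

Generating the flags from one table keeps the MRv1 and MRv2 invocations in sync, which is easy to get wrong when the two prefixes are edited by hand.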