Analysis and Representation of Dataflows and Performance


Analysis and representation of dataflows and performance influences in NoSQL systems

ISTE Master's Thesis Nr. 3101854
Author: Alexander Szilagyi
Course of Study: Software Engineering
Examiner: Prof. Dr. rer. nat. Stefan Wagner
Supervisors: Dr. Daniel Graziotin, Christian Heß
Commenced: 2017-03-03
Completed: 2017-09-20
CR Classification: I.7.2

Affirmation

I hereby affirm that I wrote this thesis independently and that I used only the resources denoted in this work. Adoptions from other works such as books, papers, or web sources are indicated by references.

Stuttgart, September 22, 2017
Alexander Szilagyi

Foreword

This thesis was written in cooperation between a well-known company in the automotive sector and the University of Stuttgart. Three departments within the company were involved in providing the relevant information, access to the needed systems, technical support, and help with additional questions, and I would like to thank them all for their support. Special thanks go to my advisor Christian Heß, who works at the company, and to Dr. Daniel Graziotin, advisor at the ISTE institute of the University of Stuttgart. Both gave me important advice that helped me greatly during the conduction of this thesis.

Abstract

Digitalization is gaining more and more attention in industry. Trends such as Industry 4.0 result in an enormous growth of data, but this abundance of data also opens new opportunities that can lead to a market advantage over competitors. In order to deal with this huge pool of information, also referred to as "big data", companies face new challenges. One challenge lies in the fact that ordinary relational database solutions are unable to cope with the enormous amount of data, so a system must be provided that is able to store and process the data in acceptable time.

Because of that, a well-known company in the automotive sector established a distributed NoSQL system based on Hadoop to deal with this challenge. The processes and interactions behind such a Hadoop system, however, are more complex than in a relational database solution, and wrong configurations lead to ineffective operation. This thesis therefore deals with the questions of how the interactions between the applications are realized and how possible performance shortfalls can be prevented.

This work also presents a new approach to dataflow modeling, because an ordinary dataflow meta-description does not offer enough syntax elements to model the entire NoSQL system in detail. For different use cases, the dataflows present interactions within the Hadoop system and provide first indications of performance dependencies. Characteristics such as file format, compression codec, and execution engine were identified and then combined into different configuration sets. With these sets, randomized tests were executed to measure their influence on execution time. The results show that the combination of the ORC file format with Zlib compression leads to the best performance regarding execution time and disk utilization on the system implemented at the company. The results also show, however, that suitable configurations can depend strongly on the actual use case. Beyond the performance measurements, the thesis provides additional insights; for example, it could be observed that an increase in file size leads to a growth of the variance of the execution times. The study conducted in this work therefore suggests that performance tests should be investigated repeatedly.

Contents

1 Introduction
  1.1 Context
  1.2 Problem Statement and Motivation
2 Background
  2.1 Big Data
  2.2 Data Lake
  2.3 NoSQL
  2.4 Virtualization
  2.5 Hadoop Environment
  2.6 Qlik Sense - Data Visualization Software
  2.7 Data types
  2.8 File formats
  2.9 Compression
3 Related work
4 The challenge of dataflow modelling in a distributed NoSQL system
  4.1 Approaches in dataflow
  4.2 Dataflow in Hadoop Environment
5 Evaluation of performance influences
  5.1 Possible sources of performance influence factors
  5.2 Evaluation of performance indicators
6 Methodologies
  6.1 Methodology: Testing Strategy
  6.2 Methodology: Testing Preparation
7 Results
8 Discussion
9 Conclusion
10 Appendix
  10.1 Additional Hadoop applications
  10.2 Additional file formats
  10.3 Additional compression codecs
Bibliography

Acronyms

ACID - Atomicity, Consistency, Isolation, Durability
API - Application programming interface
BPMN - Business Process Model and Notation
CEO - Center of excellence
CPU - Central processing unit
CSV - Comma-separated values
DAG - Directed acyclic graph
DB - Database
DBMS - Database management system
DTD - Document type definition
HAR - Hadoop archive file
HDFS - Hadoop Distributed File System
HDP - Hortonworks Data Platform
IS - Intelligente Schnittstelle (smart interface)
ITO - IT operations
JSON - JavaScript Object Notation
JVM - Java virtual machine
KPI - Key performance indicator
LLAP - Live long and process
MSA - Measurement system analysis
MSB - Manufacturing service bus
MSR - Manufacturing reporting system
ODBC - Open Database Connectivity
OEP - Oozie Eclipse Plugin
OS - Operating system
OSV - Operating system virtualization
ORC - Optimized Row Columnar
QFD - Quality function deployment
RAM - Random access memory
RD - Research and development
SQL - Structured Query Language
TA - Transaction
TEZ - Hindi for "speed"
XML - Extensible Markup Language
YARN - Yet Another Resource Negotiator

List of Figures

2.1 Five V's
2.2 Data Lake (source: http://www.autosarena.com/wp-content/uploads/2016/04/Auto-apps.jpg)
2.3 Concept of Hypervisor - Type 2
2.4 Node types
2.5 Hadoop V1.0
2.6 Architecture of HDFS (source: https://www.linkedin.com/pulse/hadoop-jumbo-vishnu-saxena)
2.7 Hadoop V2.0
2.8 Architecture of Flume
2.9 NiFi process flow (source: https://blogs.apache.org/nifi/)
2.10 Oozie workflow (source: https://i.stack.imgur.com/7CtlR.png)
2.11 Example of a Qlik Sense dashboard
4.1 Classic dataflow graph
4.2 Syntax table
4.3 HDFS filesystem
4.4 IS-Server dataflow
4.5 Push data on edge node
4.6 Store data redundantly on HDFS
4.7 MRS-Server dataflow
4.8 Route data from edge node
4.9 Store Hive table persistently in HDFS
4.10 Processing dataflow
4.11 Create and send Hive query
4.12 Execute MapReduce algorithm
5.1 QFD matrix: Performance influence evaluation
6.1 Measurement system analysis type 1 of the Hadoop test system
6.2 File format - size
6.3 CPU utilization
6.4 RAM utilization
7.1 MapReduce total timing boxplot
7.2 TEZ total timing boxplot
7.3 Standard deviation total timings - unadjusted
7.4 Total timings MapReduce
7.5 Total timings TEZ
7.6 Total timings MapReduce with 10% lower quantile
7.7 Total timings MapReduce with 10% higher quantile
7.8 Total timings TEZ with 10% higher quantile
7.9 Total timings TEZ with 10% higher quantile
7.10 Histogram MapReduce Count
7.11 Histogram TEZ Join
7.12 Performance values, MapReduce - Count
7.13 Performance values, MapReduce - Join
7.14 Performance values, MapReduce - Math
7.15 Performance values, TEZ - Count
7.16 Performance values, TEZ - Join
7.17 Performance values, TEZ - Math
7.18 Total performance values - MapReduce
7.19 Total performance values - TEZ
7.20 Standard deviation total timings - adjusted
7.21 File size of AVRO tables
7.22 Boxplot AVRO with MapReduce
7.23 Standard deviation AVRO timings - adjusted
7.24 Boxplot AVRO with TEZ
7.25 Processing of data files
8.1 Morphologic box: Use case examples
8.2 Trend: TEZ execution time dependent on data volume
8.3 Trend: Reduced file size

List of Tables

2.1 Chart: Compression codecs
5.1 Evaluation of QFD matrix, Part 1
5.2 Evaluation of QFD matrix, Part 2
6.1 File formats with compression
6.2 Testset
6.3 Factor of file size reduction

Chapter 1 - Introduction

1.1 Context

This thesis was written within a department of a well-known car manufacturer. The core function of this department is to plan the mounting process for new car models and to maintain and provide test benches for vehicles after their production launch. The vehicle tests include wheel alignment on chassis dynamometers and calibrations for driver assistance systems. The department therefore works closely with other departments and with all vehicle production lines of all factories around the world.

1.2 Problem Statement and Motivation

These days, the flow of information grows ever larger. On the one hand there is the amount of data produced daily on the internet, in which social media plays an important role; on the other hand, a lot of data is also created in modern companies [68]. To manage this huge amount of data, the term "big data" was established. The term unites two different problem statements: to provide a system that is able to handle the enormous amount of data, and the possibility to create benefits from the data offered.
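The randomized test procedure summarized in the abstract - running each file-format/codec/engine configuration set many times in shuffled order, then comparing execution times and their variance - can be sketched as follows. This is a minimal illustration, not the thesis's actual test harness: the configuration tuples are assumed names, and `run_once` stands in for whatever submits a Hive query under the given configuration and returns its wall-clock time in seconds.

```python
import random
import statistics

def benchmark(configs, run_once, repetitions=30, seed=None):
    """Run every configuration `repetitions` times in one fully
    shuffled schedule, so that temporary cluster load does not bias
    any single configuration, and collect the measured timings."""
    rng = random.Random(seed)
    schedule = [cfg for cfg in configs for _ in range(repetitions)]
    rng.shuffle(schedule)
    timings = {cfg: [] for cfg in configs}
    for cfg in schedule:
        timings[cfg].append(run_once(cfg))  # seconds per execution
    return timings

def summarize(timings):
    """Mean and standard deviation of the execution time per
    configuration; a standard deviation that grows with file size
    is the variance effect noted in the abstract."""
    return {cfg: (statistics.mean(ts), statistics.stdev(ts))
            for cfg, ts in timings.items()}

# Hypothetical configuration sets: (file format, compression codec, engine).
configs = [("ORC", "ZLIB", "tez"),
           ("ORC", "Snappy", "mr"),
           ("AVRO", "Snappy", "mr")]
```

In the real setup, `run_once` would issue a query (count, join, or math workload) against a table stored in the given format, for example through the Hive CLI or an ODBC connection; for the sketch, any callable that returns a duration works.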