Understanding Android Benchmarks
“freedom” koan-sin tan, [email protected]
OSDC.tw, Taipei, Apr 11th, 2014

Disclaimers
• Many of the materials used in this slide deck are from the Internet and from textbooks; e.g., many of the following materials come from “Computer Architecture: A Quantitative Approach,” 1st–5th ed.
• Opinions expressed here are my own and do not reflect my employer's views

Who am I
• Did some networking and security research before
• Working for a SoC company, recently on
  – big.LITTLE scheduling and related stuff
  – parallel construct evaluation
• Run benchmarking from time to time
  – to improve the performance of our products, and
  – to keep track of our colleagues' progress

• Focusing on the CPU and memory parts of benchmarks
• Let's ignore graphics (2D, 3D), storage I/O, etc.

Blackbox!
• Do a Google image search for “benchmark” and you will find that many of the hits are Android-related benchmarks
• Much like the recent Cross-Strait Trade in Services Agreement (TiSA), most benchmarks on the Android platform are kind of a black box

Is the Apple A7 good?
• When Apple released the new iPhone 5s, many technical blogs showed benchmarks in the reviews they came up with
• Commonly used ones:
  – GeekBench
  – JavaScript benchmarks
  – some graphics benchmarks
• Why these? Are they the right ones? etc.
  e.g., http://www.anandtech.com/show/7335/the-iphone-5s-review

Open the blackbox

Android Benchmarks
• http://www.anandtech.com/show/7384/state-of-cheating-in-android-benchmarks
• No, this is not the way to improve

Assuming there is no cheating, what can we do?

Outline
• Performance benchmark review
• Some Android benchmarks
• What we did and what still can be done
• Future

To quote what Prof. Raj Jain quoted
• “Benchmark v. trans. To subject (a system) to a series of tests in order to obtain prearranged results not available on competitive systems.”
  – From “The Devil's DP Dictionary,” S. Kelly-Bootle

Why benchmarking
• We did something good; let's check whether we did it right
  – comparing with our own previous results to see if we broke anything
• We want to know how good our colleagues in other places are

What to report?
• Usually, what we mean by “benchmarking” is measuring performance
• What to report?
  – intuitive answer: how many things we can do in a certain period of time
  – yes, time: e.g., MIPS, MFLOPS, MiB/s, bps
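To make the “work per unit of time” idea concrete, here is a minimal timing-harness sketch of my own (not from the deck); the 16 MiB memcpy workload, buffer size, and iteration count are arbitrary choices for illustration, and any other workload could be substituted:

```c
/* Minimal throughput-reporting sketch: time a known amount of work with a
 * monotonic clock and report work/time.  The memcpy workload, buffer size,
 * and iteration count are arbitrary, for illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const size_t len = 16u * 1024 * 1024;   /* 16 MiB working set */
    const int iters = 64;
    char *src = malloc(len), *dst = malloc(len);
    if (!src || !dst)
        return 1;
    memset(src, 0xA5, len);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        memcpy(dst, src, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mib  = (double)len * iters / (1024.0 * 1024.0);
    /* The reported metric is simply work done divided by elapsed time. */
    printf("copied %.0f MiB in %.3f s -> %.1f MiB/s\n", mib, secs, mib / secs);

    free(src);
    free(dst);
    return 0;
}
```

MIPS, MFLOPS, and bps are produced the same way; only the definition of “work” changes, which is exactly where the trouble starts.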
MIPS and MFLOPS
• MIPS (Million Instructions Per Second), MFLOPS (Million Floating-Point Operations Per Second)
• All instructions are not created equal
  – CISC machine instructions usually accomplish a lot more than those of RISC machines; comparing the instructions of a CISC machine with those of a RISC machine is like comparing Latin with Greek

MIPS and what's wrong with it
• MIPS is instruction-set dependent, making it difficult to compare the MIPS of computers with different ISAs
• MIPS varies between programs on the same computer; and, most importantly,
• MIPS can vary inversely to performance
  – with hardware FP, MIPS is generally lower, even though performance is higher

MFLOPS and what's wrong with it
• Applies only to programs with floating-point operations
• Operations instead of instructions, but still:
  – floating-point instructions differ between machines with different ISAs
  – there are fast and slow floating-point operations
• Possible solution: weighted, source-code-level operation counts
  – ADD, SUB, COMPARE: 1
  – DIVIDE, SQRT: 2
  – EXP, SIN: 4

• The best choice of benchmarks to measure performance is real applications

Problematic benchmarks
• Kernels: small, key pieces of real applications, e.g., LINPACK
• Toy programs: 100-line programs from beginning programming assignments, e.g., quicksort
• Synthetic benchmarks: fake programs invented to try to match the profile and behavior of real applications, e.g., Dhrystone

Why are they discredited?
• Small; fit in cache
• Obsolete instruction mix
• Uncontrolled source code
• Prone to compiler tricks
• Short runtimes on modern machines
• Single-number performance characterization with a single benchmark
• Difficult to reproduce results (short runtime and low-precision UNIX timer)

Dhrystone
• Source
  – http://homepages.cwi.nl/~steven/dry.c
• < 1,000 LoC
  – size of CA15 binary compiled with bionic
  – instructions: ~14 KiB

      text    data   bss     dec
      13918   467    10266   24660

Whetstone
• Dhrystone is a pun on Whetstone
• Source code: http://www.netlib.org/benchmark/whetstone.c

  Test        MFLOPS    MOPS     ms
  N1 float    119.78             0.16
  N2 float    171.98             0.78
  N3 if                 154.25   0.67
  N4 fixpt              397.48   0.79
  N5 cos                 19.08   4.36
  N6 float     84.22             6.41
  N7 equal               86.84   2.13
  N8 exp                  5.95   6.26
  MWIPS       463.97            21.55

More on synthetic benchmarks
• The best-known examples of synthetic benchmarks are Whetstone and Dhrystone
• Problems:
  – Compiler and hardware optimizations can artificially inflate the performance of these benchmarks but not of real programs
  – The other side of the coin is that, because these benchmarks are not natural programs, they don't reward optimizations of behaviors that occur in real programs
• Examples:
  – Optimizing compilers can discard 25% of the Dhrystone code; examples include loops that are executed only once, making the loop-overhead instructions unnecessary
  – Most Whetstone floating-point loops execute a small number of times or include calls inside the loop; these characteristics are different from many real programs
  – Some more discussion in the 1st edition of the textbook

LINPACK
• LINPACK: a floating-point benchmark from the manual of the LINPACK library
• Source
  – http://www.netlib.org/benchmark/linpackc
  – http://www.netlib.org/benchmark/linpackc.new
• 883 LoC
  – size of CA15 binary compiled with bionic
  – instructions: ~13 KiB

      text    data   bss   dec
      12670   408    0     13086
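To show how a LINPACK-style kernel turns a time measurement into MFLOPS, here is a small DAXPY sketch of my own, in the spirit of linpackc but not the actual benchmark; each daxpy() call performs 2·n floating-point operations, so MFLOPS is just total operations divided by elapsed time:

```c
/* DAXPY-based MFLOPS sketch in the spirit of linpackc (not the real
 * benchmark): each daxpy() call performs 2*n floating-point operations. */
#include <stdio.h>
#include <time.h>

#define N    100000
#define REPS 2000

static double x[N], y[N];

static void daxpy(int n, double a, const double *xv, double *yv)
{
    for (int i = 0; i < n; i++)
        yv[i] += a * xv[i];             /* one multiply + one add per element */
}

int main(void)
{
    for (int i = 0; i < N; i++) {
        x[i] = i * 0.5;
        y[i] = 1.0;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        daxpy(N, 1.000001, x, y);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double flops = 2.0 * N * (double)REPS;      /* 2 flops per element per rep */
    /* Printing a checksum keeps the compiler from discarding the loop. */
    printf("checksum %.3f, %.2f MFLOPS\n", y[N - 1], flops / secs / 1e6);
    return 0;
}
```

Printing the checksum is the cheap defence against the “prone to compiler tricks” problem listed above; without it, an optimizing compiler may delete the whole loop, much as it can discard a quarter of Dhrystone.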
CoreMark (1/2)
• CoreMark is a benchmark that aims to measure the performance of central processing units (CPUs) used in embedded systems. It was developed in 2009 by Shay Gal-On at EEMBC and is intended to become an industry standard, replacing the antiquated Dhrystone benchmark
• The code is written in C and contains implementations of the following algorithms:
  – linked-list processing,
  – matrix manipulation (common matrix operations),
  – state machine (determine whether an input stream contains valid numbers), and
  – CRC
• (from Wikipedia)

CoreMark (2/2)
• CoreMark vs. Dhrystone
  – reporting rule
  – use of library calls, e.g., malloc(), is avoided
  – CRC to make sure data are correct

  name               LoC
  core_list_join.c   496
  core_matrix.c      308
  core_state.c       277
  core_util.c        210

• However, CoreMark is a kernel + synthetic benchmark, and still has quite a small footprint

      text    data   bss   dec
      18632   456    20    19108

So?
• To overcome the danger of putting all eggs in one basket, collections of benchmark applications, called benchmark suites, are a popular measure of processor performance across a variety of applications
• Standard Performance Evaluation Corporation (SPEC)

Why CPU2000 in the 2010s?
• Why ARM sticks with SPEC CPU2000 instead of CPU2006
  – 1999 Q4 results are the earliest available CPU2000 results (http://www.spec.org/cpu2000/results/res1999q4/)
    • CINT2000 base: 133 – 424
    • CFP2000 base: 126 – 514
  – 2005 Opteron 144, 1.8 GHz: 1,440 (the 1.9 GHz CA15 reported by nVidia is 1,168)
  – CPU2006 requires much more DRAM; 1 GiB of DRAM is not enough

  All normalized to 1.0 GHz:
  name           CA9   CA7   CA15   Krait
  SPECint 2000   356   320   537    326
  SPECfp 2000    298   236   567    350

SPEC numbers from “Computer Architecture: A Quantitative Approach,” 5th edition

How long does SPEC CPU2000 take?
• About 1 hour to compile
• Runtime: sum of base runtimes, multiplied by 3
  – e.g., 1.7 GHz CA15: (2256 + 3229) × 3 = 16,455 s ≈ 4.57 hr
  – for 1.0 GHz: 4.57 × 1.7 = 7.77 hr
  – for CA7, assuming it is twice as slow: 7.77 × 2 = 15.54 hr

  Benchmark          Reference Time   Base Runtime   Base Ratio
  164.gzip           1400             215            652
  175.vpr            1400             198            707
  176.gcc            1100             94.8           1161
  181.mcf            1800             266            677
  186.crafty         1000             118            850
  197.parser         1800             291            619
  252.eon            1300             87.8           1480
  253.perlbmk        1800             172            1045
  254.gap            1100             107            1026
  255.vortex         1900             211            899
  256.bzip2          1500             203            740
  300.twolf          3000             399            752
  SPECint_base2000                    2256           854

  Benchmark          Reference Time   Base Runtime   Base Ratio
  168.wupwise        1600             162            991
  171.swim           3100             389            797
  172.mgrid          1800             339            532
  173.applu          2100             241            870
  177.mesa           1400             112            1254
  178.galgel         2900             201            1444
  179.art            2600             195            1332
  183.equake         1300             157            828
  187.facerec        1900             183            1036
  188.ammp           2200             353            623
  189.lucas          2000             134            1491
  191.fma3d          2100             212            988
  200.sixtrack       1100             241            456
  301.apsi           2600             310            839
  SPECfp_base2000                     3229           909.6
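As a sanity check of the arithmetic behind the tables above, here is a short sketch of my own (not SPEC's tooling): a benchmark's base ratio is its reference time divided by its measured base runtime, scaled by 100 (which reproduces the 652/707/1161/677 figures in the table), and the suite score is the geometric mean of those ratios. Only the first four CINT2000 rows from the slide are used, so the wall-clock estimate covers just those four.

```c
/* Sketch of SPEC CPU2000-style scoring: base ratio = 100 * reference time /
 * base runtime, suite score = geometric mean of the ratios.  The entries
 * below are the first four CINT2000 rows from the slide; the full suite
 * has 12 integer benchmarks. */
#include <stdio.h>
#include <math.h>

struct result {
    const char *name;
    double ref_time;   /* SPEC reference time, seconds */
    double runtime;    /* measured base runtime, seconds */
};

int main(void)
{
    struct result r[] = {
        { "164.gzip", 1400, 215  },
        { "175.vpr",  1400, 198  },
        { "176.gcc",  1100, 94.8 },
        { "181.mcf",  1800, 266  },
    };
    const int n = sizeof(r) / sizeof(r[0]);

    double log_sum = 0.0, runtime_sum = 0.0;
    for (int i = 0; i < n; i++) {
        double ratio = 100.0 * r[i].ref_time / r[i].runtime;
        log_sum     += log(ratio);
        runtime_sum += r[i].runtime;
        printf("%-12s base ratio %.0f\n", r[i].name, ratio);
    }
    printf("geometric mean of these %d ratios: %.0f\n", n, exp(log_sum / n));
    /* Wall-clock estimate for these four benchmarks only: 3 required runs. */
    printf("estimated wall time: %.2f hours\n", runtime_sum * 3.0 / 3600.0);
    return 0;
}
```

The full suites follow the same arithmetic, which is where the slide's estimate comes from: (2256 + 3229) s of base runtime × 3 runs ≈ 16,455 s ≈ 4.57 hours on a 1.7 GHz CA15.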
Figure 1.16: SPEC2006 programs and the evolution of the SPEC benchmarks over time, with integer programs above the line and floating-point programs below the line. Of the 12 SPEC2006 integer programs, 9 are written in C and the rest in C++. For the floating-point programs, the split is 6 in Fortran, 4 in C++, 3 in C, and 4 in mixed C and Fortran. The figure shows all 70 of the programs in the 1989, 1992, 1995, 2000, and 2006 releases. The benchmark descriptions on the left are for SPEC2006 only and do not apply to earlier versions. Programs in the same row from different generations of SPEC are generally not related; for example, fpppp is not a CFD code like bwaves. Gcc is the senior citizen of the group. Only 3 integer programs and 3 floating-point programs survived three or more generations. Note that all the floating-point programs are new for SPEC2006. Although a few are carried over from generation to generation, the version of the program changes and either the input or the size of the benchmark is often changed to increase its running time and to avoid perturbation in measurement or domination of the execution time by some factor other than CPU time.