Are We Witnessing the Spectre of an HPC Meltdown?

Verónica G. Vergara Larrea, Michael J. Brim, Wayne Joubert, Swen Boehm, Matthew Baker, Oscar Hernandez, Sarp Oral, James Simmons, and Don Maxwell
Oak Ridge Leadership Computing Facility
Oak Ridge National Laboratory, Oak Ridge, TN 37830
Email: {vergaravg,brimmj,joubert,boehms,bakermb,oscar,oralhs,simmonsja,[email protected]

Abstract—We measure and analyze the performance observed when running applications and benchmarks before and after the Meltdown and Spectre fixes have been applied to the Cray supercomputers and supporting systems at the Oak Ridge Leadership Computing Facility (OLCF). Of particular interest is the effect of these fixes on applications selected from the OLCF portfolio when running at scale. This comprehensive study presents results from experiments run on the Titan, Eos, Cumulus, and Percival supercomputers at the OLCF. The results from this study are useful for HPC users running on Cray supercomputers, and serve to better understand the impact that these two vulnerabilities have on diverse HPC workloads at scale.

Keywords-high performance computing; performance analysis; security vulnerabilities; Spectre; Meltdown

I. INTRODUCTION

In early January 2018, two major security vulnerabilities present in the majority of modern processors were announced. The Spectre [1] and Meltdown [2] vulnerabilities take advantage of hardware speculative execution to expose kernel or application data stored in memory or caches to other applications running on the same system, bypassing security privileges and permissions. Fixes have started to be released by the different vendors as microcode and firmware updates. Mitigations have also been released in the form of operating system kernel patches.

Current reports indicate that processors from Intel, ARM, IBM, Qualcomm, and AMD are impacted to various degrees. Beyond posing a serious security risk, initial reports show that available fixes for Meltdown can impact performance anywhere from just a few percent to nearly 50%, depending on the application. The highest impacts are expected in applications that perform a large number of system calls (e.g., I/O operations). Furthermore, the performance impacts are expected to vary depending on the parallel file system used (e.g., GPFS vs. Lustre).

In this work, performance results obtained from user-facing systems, including the Cray supercomputers at the Oak Ridge Leadership Computing Facility (OLCF), are presented. This study includes experiments conducted on the Titan (Cray XK7), Eos (Intel Sandy Bridge Cray XC30), Cumulus (Intel Broadwell Cray XC40), and Percival (Intel Knights Landing Cray XC40) supercomputers, as well as supporting systems. The applications and benchmarks used for these experiments are selected from the acceptance test plans for each system. In order to provide a comprehensive view of the performance impact on HPC workloads, the test set was expanded to include applications representative of the current OLCF portfolio. We measure and analyze the performance observed when running applications and benchmarks before and after the Meltdown and Spectre mitigations have been applied. Of particular interest to this work is the effect of these fixes on applications running at scale.

If confirmed, a significant performance hit would have an enormous impact on the high performance computing (HPC) community. Since the majority of OLCF's users are focused on scientific simulation and modeling, a performance degradation equates to reduced scientific output for a given allocation of computing time. As a result, users may not reach their planned research milestones. Furthermore, a widespread performance impact may justify reexamining community measures of performance, such as the well-known Top500 list [3], which ranks supercomputers all over the world, after patches are applied.

Notice of Copyright. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

CCPE Special Issue. This paper has been accepted and will be published as an article in a special issue of Concurrency and Computation: Practice and Experience on the Cray User Group 2018.

II. BACKGROUND

A. Understanding Spectre and Meltdown

Processors can at times be natural procrastinators. Over the years, techniques were developed to limit the impact when the processor stalls. All processors are governed by a clock, which dictates how much of an instruction can be completed. Since most instructions cannot be completed in a single clock cycle, each instruction is broken into independent stages that can each be executed in one clock cycle. This is referred to as instruction pipelining.

To maximize the processor's performance, several microinstructions are handled in parallel. The depth of the pipeline tells us how many microinstructions are handled in parallel. A typical x86 processor can have pipelines that are 32 stages deep. This approach has a great performance benefit for the processor as long as each instruction is completed before the next one needs its result. When this condition is not met, it is referred to as a hazard. Speculative execution is one class of hazard. The exploits covered in this paper take advantage of overlooked design details of this optimization.

Spectre and Meltdown are different variants of a speculative execution vulnerability that impacts the majority of modern computer processors on the market [4]. Speculation occurs when the information needed for the next instruction is not yet known, but the processor makes an educated guess. For example, branch speculation occurs when the processor guesses the result of a branch condition and speculatively executes future instructions at the predicted branch target. Another form of branch speculation is when the processor guesses the target address of an indirect branch. If the target is correctly predicted, performance can be greatly improved by speculative execution. When an incorrect prediction (i.e., a mispredict) happens, the processor needs to roll back the execution state. The processor discards the speculated program state to prevent user-visible effects of the speculative execution. Vulnerabilities occur when speculated program state is not properly discarded. Exploit vectors include any processor hardware that is not fully and securely restored to its pre-prediction state, with typical targets being, for example, L3 caches (cache pollution) and I/O ports.

There are three documented vulnerability variants, each having been assigned a separate Common Vulnerabilities and Exposures (CVE) number. CVE-2017-5753 (V1) and CVE-2017-5715 (V2) are the two variants known as Spectre, and CVE-2017-5754 (V3) is the variant known as Meltdown. In January 2018, Kocher et al. described Spectre [1], a new type of microarchitecture attack that exploits speculative execution by forcing the processor to execute instructions outside the correct path of execution of a program. Meltdown [2] does not rely on branch prediction, but exploits some modern processors that perform multiple memory references without page table permission checks while executing out of order.

Meltdown and Spectre exploit the fact that the L3 cache is not correctly cleaned up after a misprediction. Using the L3 cache as an exploit vector was popularized by Yuval Yarom and Katrina Falkner [6]. Spectre and Meltdown do not rely on misbehavior of the L3 cache; rather, they use it as a side channel to exfiltrate data. They leverage the fact that data may be loaded into the L3 cache based on bad predictions and not cleared from the cache upon misprediction. Thus, a separate process may use processor-specific instructions to access the cache contents directly to find which memory values are present and in turn deduce the values used in the bad speculation.

B. Spectre V1: Condition misprediction

Spectre V1 relies on mispredicting a conditional branch. The example given in the Spectre paper uses an array boundary check. In normal execution, the array boundary will not be exceeded; thus, the processor will speculate that the branch doing the boundary checking will be true.

/* In this example, array_a is an array that is cache hot.
 * array_a does not contain the data targeted for leaking, but
 * index+array_a will point to the memory the attacker wishes to exfiltrate.
 * The variables index and array_b are attacker controlled;
 * array_a_size and all indexes of array_b are flushed from cache.
 */
if (index < array_a_size) {
    array_value = array_b[array_a[index]*256];
}

When executing the above code with the cache in the state described, the processor will stall while waiting for array_a_size. The processor will then speculatively fetch array_a[index] on the assumption that index < array_a_size evaluates to true. Here, index is an offset large enough to point into another array, one holding secret data (e.g., an encryption private key). The speculative execution engine will then load into the L3 cache a value from array_b based on a calculation using values from array_a.

After the speculative results are discarded, an attacker cannot directly inspect the result of the access to array_b. However, array_b had a memory load performed, making its value present in the L3 cache. The attacker can then scan array_b, time the access to each index, and reconstruct the value from the out-of-bounds memory access to array_a based on whether the access to array_b was served from main memory (slow) or from the L3 cache (fast). Example code is available in the Spectre paper [1].