Making a Case for Distributed File Systems at Exascale


1,2 Ioan Raicu, 2,3,4 Ian T. Foster, 2,3,5 Pete Beckman
1 Department of Computer Science, Illinois Institute of Technology, Chicago, IL, USA
2 Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, USA
3 Computation Institute, University of Chicago, Chicago, IL, USA
4 Department of Computer Science, University of Chicago, Chicago, IL, USA
5 Exascale Technology and Computing Institute, Argonne National Laboratory, Argonne, IL, USA
[email protected], [email protected], [email protected]

ABSTRACT
Exascale computers will enable the unraveling of significant scientific mysteries. Predictions are that 2019 will be the year of exascale, with millions of compute nodes and billions of threads of execution. The current architecture of high-end computing systems is decades old and has persisted as we scaled from gigascale to petascale. In this architecture, storage is completely segregated from the compute resources and is connected via a network interconnect. This approach will not scale several orders of magnitude in terms of concurrency and throughput, and will thus prevent the move from petascale to exascale. At exascale, basic functionality at high concurrency levels will suffer poor performance, and, combined with a system mean time to failure measured in hours, will lead to a performance collapse for large-scale heroic applications. Storage has the potential to be the Achilles heel of exascale systems. We propose that future high-end computing systems be designed with non-volatile memory on every compute node, allowing every compute node to actively participate in metadata and data management, leveraging many-core processors and the high bisection bandwidth of torus networks. This position paper discusses this revolutionary new distributed storage architecture that will make exascale computing more tractable, touching virtually all disciplines in high-end computing and fueling scientific discovery.

Categories and Subject Descriptors
C.2.4 [Distributed Systems]; C.5.1 [Large and Medium ("Mainframe") Computers]; D.4.2 [Storage Management]; D.4.3 [File Systems Management]; D.4.5 [Reliability]; D.4.7 [Organization and Design]; D.4.8 [Performance]; H.3.4 [Systems and Software]

General Terms
Management, Performance, Reliability

Keywords
Exascale computing, distributed file systems, storage architecture, many-task computing

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. LSAP'11, June 8, 2011, San Jose, California, USA. Copyright 2011 ACM 978-1-4503-0703-1/11/06...$10.00.

1. INTRODUCTION
Today's science is generating datasets that are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Seymour Cray once said that "a supercomputer is a device for turning compute-bound problems into I/O-bound problems", which captures the fundamental shift in bottlenecks: as supercomputers gain parallelism at exponential rates [1], the performance of the storage infrastructure is increasing at a significantly lower rate. This implies that data management and the data flow between storage and compute resources are becoming the new bottleneck for large-scale applications. Support for data-intensive computing [2] is critical to advancing modern science, as storage systems have experienced a gap between capacity and bandwidth that has increased more than 10-fold over the last decade. There is an emerging need for advanced techniques to manipulate, visualize, and interpret large datasets. Many domains (e.g. astronomy, medicine, bioinformatics) share these data management challenges, strengthening the potential impact of generic solutions.

Exascale computers (i.e. capable of 10^18 ops/sec) [3], with a processing capability similar to that of the human brain, will enable the unraveling of significant scientific mysteries and present new challenges and opportunities. The US President has made the building of exascale systems a top national priority, stating [4] that exascale computing is aimed at "dramatically increasing our ability to understand the world around us through simulation and slashing the time needed to design complex products such as therapeutics, advanced materials, and highly-efficient autos and aircraft". Major scientific opportunities arise in many fields (such as weather modeling, understanding global warming, national security, drug discovery, and economics) and may rely on revolutionary advances that will enable exascale computing [1]. Many experts predict [3] that exascale computing will be common by 2019: millions of nodes, billions of threads of execution, hundreds of petabytes of memory, and exabytes of persistent storage.

The current architecture of high-end computing (HEC) systems is decades old and has persisted as we scaled from gigascale to petascale. In this architecture, storage is completely segregated from the compute resources and is connected via a network interconnect (e.g. parallel file systems running on network-attached storage, such as GPFS [5], PVFS [6], and Lustre [7]). This approach will not scale several orders of magnitude in terms of concurrency and throughput, and will thus prevent the move from petascale to exascale. Unless significant research is invested to revolutionize the storage hardware architecture and parallel/distributed file system implementations, we will not be able to build capability exascale systems. Supercomputers are generally designed for either capability or capacity computing, as Irving Wladawsky-Berger wrote in 2010:

"Capability supercomputers dedicate the whole machine to solve a very large problem in the shortest amount of time. Capacity supercomputers, on the other hand, support large numbers of users solving different kinds of problems simultaneously. While both kinds of supercomputing are very important, initiatives designed to push the envelope, like DOE's exascale project, tend to focus on the development of capability machines to address Grand Challenge problems that could not be solved in any other way. Capability computing has been primarily applied to what is sometimes referred to as heroic computations, where just about the whole machine is applied to a single task." [8]
2. ACHILLES HEEL
We believe that storage will be the Achilles heel of future exascale systems [9], unless storage systems are re-architected to ensure scalability to millions of nodes and potentially billions of concurrent I/O requests. We believe that the system mean-time-to-failure (MTTF) at exascale will be on the order of a few hours. However, based on current trends, even basic functionality (e.g. booting - Figure 1, metadata operations - Figure 2, read/write operations - Figure 3 and Figure 4, loading applications - Figure 3, and check-pointing - Figure 5) at significant concurrency levels is expected to take more time than the expected exascale machine's MTTF, which will result in complete performance collapse for grand challenge applications.

Booting: For example, booting the Blue Gene/P (see Figure 1) is an expensive operation at 0.5 petaflops, as measured on the real machine. On 256 processors, it takes 85 seconds to boot the allocation; on the full 160K processors, it takes 1090 seconds. Unfortunately, it appears that the machine boot time grows linearly with the number of nodes, translating to potentially over 25K seconds (7+ hours) of boot time at 1M-node scales. [10]

[Figure 1: Booting a BlueGene/P; 160K cores [10]. Boot time (sec) vs. number of processors.]

Metadata operations on parallel file systems can be inefficient at large scale. Early experiments on the BlueGene/P system at 16K-core scales (see Figure 2) show the various costs (wall-clock time measured at the remote processor) for file/directory create on GPFS. Ideal performance would be a flat line. If care is not taken to avoid lock contention, performance degrades rapidly, ...

[Figure 2: File/directory create time on GPFS; 16K cores [10]. Time per operation (sec) vs. number of processors, for directory create and file create in a single directory.]

Read/Write: Reading performance of common datasets (e.g. application binaries, read-only databases) is challenging. The experiment in Figure 3 shows the distribution of data from the GPFS file system to the compute nodes with two approaches: 1) pushing out the data over a spanning tree (CIO), and 2) pulling the data from each compute node independently (GPFS). At the relatively modest scale of 4K cores, the CIO approach outperforms GPFS by a factor of five, due to better utilization of the bisection bandwidth of the torus network. Writing from many compute nodes directly to parallel file systems is also challenging; Figure 4 shows the poor efficiency achieved (15%-70%) with 16-second tasks producing 1KB to 1MB of output. [10, 11]

[Figure 3: Data read throughput (MB/s) vs. number of processors; GPFS vs. spanning tree (CIO) [11].]

[Figure 4: Efficiency vs. scale for 16-second tasks writing 1KB, 16KB, 128KB, and 1MB of output to GPFS.]
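To make the scaling arguments above concrete, the minimal sketch below reproduces the kind of back-of-the-envelope arithmetic implied by Figures 1 and 4: a linear extrapolation of the measured Blue Gene/P boot times to a hypothetical 1M-node machine, and a simple per-task efficiency model, efficiency = t_compute / (t_compute + t_io). The conversion of 4 processors per node and the example per-task write times are illustrative assumptions chosen to span the 15%-70% range reported for Figure 4; they are not measurements from the paper.

```python
# Back-of-the-envelope sketch (not from the paper's artifacts): extrapolate boot
# time linearly in node count and model per-task write efficiency to a shared FS.

def linear_fit(x1, y1, x2, y2):
    """Return slope and intercept of the line through (x1, y1) and (x2, y2)."""
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

# Measured Blue Gene/P boot times from Figure 1; 4 cores per node is an
# assumption used to convert the reported processor counts into node counts.
nodes_small, boot_small = 256 // 4, 85.0        # 64 nodes      -> 85 s
nodes_full,  boot_full  = 163840 // 4, 1090.0   # ~40K nodes    -> 1,090 s

slope, intercept = linear_fit(nodes_small, boot_small, nodes_full, boot_full)
boot_exascale = slope * 1_000_000 + intercept   # hypothetical 1M-node machine
print(f"Extrapolated boot time at 1M nodes: {boot_exascale:,.0f} s "
      f"(~{boot_exascale / 3600:.1f} h)")       # roughly 25,000 s, consistent
                                                # with the 7+ hours cited above

# Simple efficiency model behind Figure 4: a task computes for t_compute
# seconds, then blocks while writing its output to the shared file system.
def efficiency(t_compute, t_io):
    return t_compute / (t_compute + t_io)

t_compute = 16.0                                # 16-second tasks, as in Figure 4
for t_io in (7.0, 30.0, 90.0):                  # assumed per-task write times
    print(f"t_io = {t_io:5.1f} s -> efficiency = {efficiency(t_compute, t_io):.0%}")
```

Under these assumed write times the model yields roughly 70%, 35%, and 15% efficiency, which is the shape of the collapse the paper attributes to many short tasks writing small outputs directly to a parallel file system.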