Cluster Computing with OpenHPC

http://openhpc.community

Karl W. Schulz, Ph.D.
Technical Project Lead, OpenHPC Community
Scalable Datacenter Solutions Group, Intel
HPCSYSPROS16 Workshop, SC'16, November 14, Salt Lake City, Utah

Acknowledgements
• Co-authors: Reese Baird, David Brayford, Yiannis Georgiou, Gregory Kurtzer, Derek Simmel, Thomas Sterling, Nirmala Sundararajan, Eric Van Hensbergen
• OpenHPC Technical Steering Committee (TSC)
• The Linux Foundation and all of the project members
• Intel, Cavium, and Dell for hardware donations to support community testing efforts
• Texas Advanced Computing Center for hosting support

Outline
• Community project overview: mission/vision, members, governance
• Stack overview
• Infrastructure: build/test
• Summary

OpenHPC: Mission and Vision
• Mission: to provide a reference collection of open-source HPC software components and best practices, lowering barriers to deployment, advancement, and use of modern HPC methods and tools.
• Vision: OpenHPC components and best practices will enable and accelerate innovation and discoveries by broadening access to state-of-the-art, open-source HPC methods and tools in a consistent environment, supported by a collaborative, worldwide community of HPC users, developers, researchers, administrators, and vendors.

OpenHPC: Project Members
• A mixture of academics, labs, OEMs, and ISVs/OSVs, including Argonne National Laboratory and CEA.
• Interested in project membership? Please contact Jeff ErnstFriedman, [email protected].

OpenHPC Technical Steering Committee (TSC): Role Overview
• TSC roles include the project leader, component development representative(s), end-user/site representative(s), testing coordinator(s), integration coordinator(s), and upstream component maintainers.
• Governance overview: https://github.com/openhpc/ohpc/wiki/Governance-Overview

Stack Overview
• Packaging efforts have HPC in mind and include compatible modules (for use with Lmod) along with development libraries/tools.
• Endeavoring to provide a hierarchical development environment that is cognizant of different compiler and MPI families.
• Intent is to manage package dependencies so they can be used as building blocks (e.g. deployable with multiple provisioning systems).
• Includes common conventions for environment variables.
• Development library install example:
  # yum install petsc-gnu-mvapich2-ohpc
• End-user interaction example with the above install (assume we are a user wanting to build a PETSc hello world in C):
  $ module load petsc
  $ mpicc -I$PETSC_INC petsc_hello.c -L$PETSC_LIB -lpetsc
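To illustrate the hierarchical environment a little further, the following is a minimal usage sketch. It assumes the GNU/MVAPICH2 toolchain from the install example above; the module names shown (gnu, mvapich2, openmpi, petsc) follow common OpenHPC conventions but may differ by release and site:

  $ module load gnu                 # select a compiler family
  $ module load mvapich2            # select an MPI family; MPI-dependent modules become visible
  $ module load petsc               # sets PETSC_INC and PETSC_LIB per the environment-variable conventions
  $ module avail                    # list packages visible for the current compiler/MPI pairing
  $ module swap mvapich2 openmpi    # switch MPI families

Because Lmod is aware of the hierarchy, swapping the MPI family automatically reloads dependent modules (such as petsc) so that they point at the builds matching the newly selected family.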
Typical Cluster Architecture
• The install guides walk through a bare-metal install on a typical cluster: a master (SMS) host, compute nodes, a data center network, a private management network, a high-speed network, and an optional external Lustre* file system.
• Provisioning leverages an image-based provisioner (Warewulf); compute nodes PXE boot statelessly.
• Obviously, hardware-specific information is needed to support (remote) bare-metal provisioning.
• The master host uses two Ethernet interfaces: eth0 is connected to the local data center network, and eth1 is used to provision and manage the cluster backend (note that these interface names are examples and may be different depending on local settings and OS conventions).
• Two logical IP interfaces are expected on each compute node: the first is the standard Ethernet interface that will be used for provisioning and resource management; the second is used to connect to each host's BMC for power management and remote console access. Physical connectivity for these two logical IP networks is often accommodated via separate cabling and switching infrastructure; however, an alternate configuration can also be accommodated via the use of a shared NIC, which runs a packet filter to divert management packets between the host and BMC.
• In addition to the IP networking, there is a high-speed network (InfiniBand in this recipe) that is also connected to each of the hosts. This high-speed network is used for application message passing and optionally for Lustre connectivity as well.
(Figure 1: Overview of physical cluster architecture.)

1.3 Bring your own license
OpenHPC provides a variety of integrated, pre-packaged elements from the Intel® Parallel Studio XE software suite. Portions of the runtimes provided by the included compilers and MPI components are freely usable. However, in order to compile new binaries using these tools (or to access other analysis tools such as Intel® Trace Analyzer and Collector or Intel® Inspector), you will need to provide your own valid license; OpenHPC adopts a bring-your-own-license model. Note that licenses are provided free of charge for many categories of use. In particular, licenses for compilers and development tools are provided at no cost to academic researchers or developers contributing to open-source software projects. More information on this program can be found at: https://software.intel.com/en-us/qualify-for-free-software

1.4 Inputs
As this recipe details installing a cluster starting from bare metal, there is a requirement to define IP addresses and gather hardware MAC addresses in order to support a controlled provisioning process. These values are necessarily unique to the hardware being used, and this document uses variable substitution (${variable}) in the command-line examples that follow to highlight where local site inputs are required. A summary of the required and optional variables used throughout this recipe is presented below.

• ${ohpc_repo}                          # OpenHPC repo location
• ${sms_name}                           # Hostname for SMS server
• ${sms_ip}                             # Internal IP address on SMS server
• ${sms_eth_internal}                   # Internal Ethernet interface on SMS
• ${eth_provision}                      # Provisioning interface for computes
• ${internal_netmask}                   # Subnet netmask for internal network
• ${ntp_server}                         # Local ntp server for time synchronization
• ${bmc_username}                       # BMC username for use by IPMI
• ${bmc_password}                       # BMC password for use by IPMI
• ${c_ip[0]}, ${c_ip[1]}, ...           # Desired compute node addresses
• ${c_bmc[0]}, ${c_bmc[1]}, ...         # BMC addresses for computes
• ${c_mac[0]}, ${c_mac[1]}, ...         # MAC addresses for computes
• ${compute_regex}                      # Regex for matching compute node names (e.g. c*)

Optional:
• ${mgs_fs_name}                        # Lustre MGS mount name
• ${sms_ipoib}                          # IPoIB address for SMS server
• ${ipoib_netmask}                      # Subnet netmask for internal IPoIB
• ${c_ipoib[0]}, ${c_ipoib[1]}, ...     # IPoIB addresses for computes

Note that while the example definitions above correspond to a small 4-node compute subsystem, the compute parameters are defined in array format to accommodate logical extension to larger node counts.
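For illustration only, these site-specific values can be collected in a small shell file that is sourced before running the remaining commands in the recipe. The file name and every value below are hypothetical placeholders for a 4-node example and must be replaced with data for the actual hardware; only a subset of the variables is shown, and the optional Lustre/IPoIB entries follow the same pattern:

  # sitevars.sh - hypothetical example values only
  sms_name=sms001                      # hostname of the master (SMS) host
  sms_ip=192.168.1.1                   # internal IP address on the SMS host
  sms_eth_internal=eth1                # internal Ethernet interface on the SMS host
  eth_provision=eth0                   # provisioning interface used by the computes
  internal_netmask=255.255.255.0
  ntp_server=0.centos.pool.ntp.org
  bmc_username=admin
  bmc_password=admin
  c_ip=(192.168.1.10 192.168.1.11 192.168.1.12 192.168.1.13)    # compute node addresses
  c_bmc=(192.168.2.10 192.168.2.11 192.168.2.12 192.168.2.13)   # compute BMC addresses
  c_mac=(00:1a:2b:3c:4d:01 00:1a:2b:3c:4d:02 00:1a:2b:3c:4d:03 00:1a:2b:3c:4d:04)  # compute MACs
  compute_regex="c*"                   # regex matching compute hostnames

  $ source sitevars.sh                 # make the values available to the current shell
  $ echo ${c_ip[0]}                    # quick sanity check of the first compute address
  192.168.1.10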
2 Install Base Operating System (BOS)
In an external setting, installing the desired BOS on a master SMS host typically involves booting from a DVD ISO image on a new server. With this approach, insert the CentOS 7.1 DVD, power cycle the host, and follow the distro-provided directions to install the BOS on your chosen master host. Alternatively, if choosing to use a pre-installed server, please verify that it is provisioned with the required CentOS 7.1 distribution.

Tip: While it is theoretically possible to provision a Warewulf cluster with SELinux enabled, doing so is beyond the scope of this document. Even the use of permissive mode can be problematic, and we therefore recommend disabling SELinux on the master SMS host. If SELinux components are installed locally, the selinuxenabled command can be used to determine whether SELinux is currently enabled. If enabled, consult the distro documentation for information on how to disable it.

3 Install OpenHPC Components
With the BOS installed and booted, the next step is to add the desired OpenHPC packages onto the master server in order to provide provisioning and resource management services for the rest of the cluster. The following subsections highlight this process.

3.1 Enable OpenHPC repository for local use
To begin, enable use of the OpenHPC repository by adding it to the local list of available package repositories. Note that this requires network access from your master server to the OpenHPC repository (or, alternatively, a locally mirrored copy of the repository).

OpenHPC v1.2 - Current S/W components
• Base OS: CentOS 7.2, SLES 12 SP1
• Architecture: x86_64, aarch64 (Tech Preview; new with v1.2)
• Administrative Tools: ConMan, Ganglia, Lmod, LosF, Nagios, pdsh, prun, EasyBuild, ClusterShell, mrsh, Genders, Shine, Spack
• Provisioning: Warewulf
• Resource Mgmt.: SLURM, Munge, PBS Professional
• Runtimes: OpenMP, OCR
• I/O Services: Lustre client (community version)
• Numerical/Scientific Libraries: Boost, GSL, FFTW, Metis, PETSc, Trilinos, Hypre, SuperLU, SuperLU_Dist, MUMPS, OpenBLAS, ScaLAPACK
• I/O Libraries: HDF5 (pHDF5), NetCDF (including C++ and Fortran interfaces), ADIOS
• Compiler Families: GNU (gcc, g++, gfortran)
• MPI Families: MVAPICH2, OpenMPI, MPICH
• Development Tools: Autotools (autoconf, automake, libtool), Valgrind, R, SciPy/NumPy
• Performance Tools: PAPI, IMB, mpiP, pdtoolkit, TAU, Scalasca, Score-P, SIONlib

Notes:
• Additional dependencies that are not provided by the BaseOS or community repos (e.g. EPEL) are also included.
• Third-party libraries are built for each compiler/MPI family (typically 8 combinations).
• The resulting repositories currently comprise roughly 300 RPMs.

Hierarchical Overlay for OpenHPC software
• The CentOS 7 distro repo supplies the base OS; the OHPC repo layers general tools and system services on top (e.g. lmod, slurm, munge, losf, warewulf, the lustre client, prun, pdsh).
• The development environment is organized hierarchically: compiler families (gcc, Intel Composer) at the top; serial apps/libs built per compiler (e.g. hdf5-gnu, hdf5-intel); and MPI families (MVAPICH2, Intel MPI, OpenMPI) beneath each compiler, with MPI-dependent packages built per compiler/MPI pairing.
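The compiler/MPI pairings in this hierarchy are visible directly in the repository's package names, which follow the same <name>-<compiler>-<mpi>-ohpc pattern as the earlier petsc-gnu-mvapich2-ohpc example. The commands below are a hypothetical sketch: the Boost package names are assumptions based on that convention, and exact names and availability vary by release:

  # yum search ohpc | grep -- "-gnu-"       # list toolchain-specific builds published in the repo
  # yum install boost-gnu-mvapich2-ohpc     # Boost built against GNU compilers + MVAPICH2
  # yum install boost-gnu-openmpi-ohpc      # the same library rebuilt for GNU + OpenMPI
  # yum install boost-gnu-mpich-ohpc        # ...and for GNU + MPICH

Because each rebuild installs alongside the others and ships a matching module file, an end user simply runs "module load boost" and receives the build that corresponds to the compiler and MPI family currently loaded.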