0 to 60 with Intel® HPC Orchestrator HPC Advisory Council Stanford Conference — February 7 & 8, 2017


Presenters:
• Steve Jones, Director, High Performance Computing Center, Stanford University
• David N. Lombard, Chief Software Architect, Intel® HPC Orchestrator; Senior Principal Engineer, Extreme Scale Computing, Intel
• Dellarontay Readus, HPC Software Engineer and CS Undergraduate, Stanford University

Legal Notices and Disclaimers
• Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer.
• No computer system can be absolutely secure.
• Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
• Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.
• No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
• Intel, the Intel logo and others are trademarks of Intel Corporation in the U.S. and/or other countries.
• *Other names and brands may be claimed as the property of others.
• © 2017 Intel Corporation.

Agenda
• Demo Outline
• Intel® HPC Orchestrator and OpenHPC
• Installation Walkthrough
• Intel® Parallel Studio XE
• Modules Support
• Management and Maintenance
• Workload Management
• 3rd-Party Packages
• Demo of Built Cluster
• Moving Forward

Intel® Scalable System Framework (Intel® SSF): a systems approach for innovation
• Faster Compute: continue the pace of Moore's Law (cores, μarchitecture, graphics, PCIe)
• Remove Bottlenecks: develop new technologies for memory, fabric, storage and system software
• Tighter Integration: achieve full potential via better power, performance, density, scaling & cost

Execution of Vision: Intel® HPC Orchestrator products are the realization of the software portion of Intel® Scalable System Framework.

Intel® Scalable System Framework (SSF): a holistic design solution for all HPC
• Small clusters through supercomputers
• Compute- and data-centric computing
• Standards-based programmability
• On-premise and cloud-based

OpenHPC: open source community under the Linux Foundation
• Ecosystem innovation building a consistent HPC SW platform
• Platform agnostic
• 30 global members
• Multiple distributions

Intel® HPC Orchestrator: Intel's distribution of OpenHPC
• Intel HW validation and SW alignment
• Exposes best performance for Intel HW
• Advanced testing, premium features & technical support
• Product technical support & updates
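Since Intel® HPC Orchestrator is Intel's distribution of OpenHPC, standing up its software stack follows the same recipe pattern used by OpenHPC: enable the distribution's package repository on the master node, then install meta-packages for the base system, provisioning, and the resource manager. The outline below is a rough, hypothetical sketch using OpenHPC-style meta-package names; the Orchestrator repositories and exact package names differ and are documented in its own install guide.

    # Hypothetical first steps on the master (SMS) node, OpenHPC-style.
    # The ohpc-release RPM that registers the repository is fetched from the
    # project's download site (URL omitted); meta-package names vary by release.
    yum -y install ./ohpc-release-*.rpm   # register the package repository (pre-downloaded RPM)
    yum -y install ohpc-base              # base administrative tools for the master
    yum -y install ohpc-warewulf          # Warewulf provisioning services
    yum -y install ohpc-slurm-server      # Slurm resource-manager server components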
What is OpenHPC?

OpenHPC is a community effort endeavoring to:
• provide collection(s) of pre-packaged components that can be used to help install and manage flexible HPC systems throughout their lifecycle
• leverage a standard Linux delivery model to retain admin familiarity (i.e., package repos)
• allow and promote multiple system configuration recipes that leverage community reference designs and best practices
• implement integration testing to gain validation confidence
• provide an additional distribution mechanism for groups releasing open-source software
• provide a stable platform for new R&D initiatives

Figure 1: Overview of physical cluster architecture (from the OpenHPC Install Guide, CentOS 7.1 version, v1.0): a master node (SMS) attached to the data center network via eth0 and to the compute nodes via eth1; per-node compute Ethernet and BMC interfaces on a TCP management network; a high-speed network; and a Lustre* storage system. Courtesy of OpenHPC*

In this layout, eth0 connects the master to the local data center network and eth1 is used to provision and manage the cluster backend (note that these interface names are examples and may be different depending on local settings and OS conventions). Two logical IP interfaces are expected for each compute node: the first is the standard Ethernet interface that will be used for provisioning and resource management; the second connects to each host's BMC and is used for power management and remote console access. Physical connectivity for these two logical IP networks is often accommodated via separate cabling and switching infrastructure; however, an alternate configuration can also be accommodated via the use of a shared NIC, which runs a packet filter to divert management packets between the host and BMC. In addition to the IP networking, there is a high-speed network (InfiniBand in this recipe) that is also connected to each of the hosts. This high-speed network is used for application message passing and optionally for Lustre connectivity as well.

1.3 Bring your own license

OpenHPC provides a variety of integrated, pre-packaged elements from the Intel® Parallel Studio XE software suite. Portions of the runtimes provided by the included compilers and MPI components are freely usable. However, in order to compile new binaries using these tools (or access other analysis tools like Intel® Trace Analyzer and Collector, Intel® Inspector, etc.), you will need to provide your own valid license: OpenHPC adopts a bring-your-own-license model. Note that licenses are provided free of charge for many categories of use. In particular, licenses for compilers and development tools are provided at no cost to academic researchers or developers contributing to open-source software projects. More information on this program can be found at: https://software.intel.com/en-us/qualify-for-free-software

1.4 Inputs

As this recipe details installing a cluster starting from bare metal, there is a requirement to define IP addresses and gather hardware MAC addresses in order to support a controlled provisioning process. These values are necessarily unique to the hardware being used, and this document uses variable substitution (${variable}) in the command-line examples that follow to highlight where local site inputs are required. A summary of the required and optional variables used throughout this recipe is presented below.
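As a concrete illustration of the variable-substitution convention, the site inputs are often collected in a small shell file that later commands source. The sketch below is hypothetical: the variable names are modeled on the OpenHPC recipe's conventions (sms_*, c_ip[...], c_mac[...], and so on), but the authoritative list of required and optional inputs is the table in the install guide itself.

    # input.local -- illustrative site-specific inputs (names modeled on the
    # OpenHPC recipe conventions; treat them as assumptions and follow your
    # guide version for the authoritative variable list)

    sms_name=sms                 # hostname of the master (SMS) node
    sms_ip=192.168.1.1           # master IP on the internal provisioning network
    sms_eth_internal=eth1        # master NIC facing the compute nodes
    eth_provision=eth0           # compute-node NIC used for provisioning
    internal_netmask=255.255.255.0

    num_computes=2
    compute_regex="c*"           # pattern matching the compute hostnames

    # Per-node values gathered from the hardware (example values only)
    c_name[0]=c1; c_ip[0]=192.168.1.10; c_mac[0]=00:1a:2b:3c:4d:01; c_bmc[0]=192.168.2.10
    c_name[1]=c2; c_ip[1]=192.168.1.11; c_mac[1]=00:1a:2b:3c:4d:02; c_bmc[1]=192.168.2.11

Subsequent commands in the recipe then reference these values as ${sms_ip}, ${c_mac[0]}, and so on, so the same walkthrough can be replayed on different hardware by editing a single file.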
OpenHPC Participation (as of January '17)

Official Members: a mixture of academics, labs, OEMs, and ISVs/OSVs. (Courtesy of OpenHPC*) Project member participation interest? Please contact Kevlin Husser or Jeff ErnstFriedman <[email protected]>.

Individuals
• Reese Baird, Intel (Maintainer)
• Pavan Balaji, Argonne National Laboratory (Maintainer)
• David Brayford, LRZ (Maintainer)
• Todd Gamblin, Lawrence Livermore National Labs (Maintainer)
• Craig Gardner, SUSE (Maintainer)
• Yiannis Georgiou, ATOS (Maintainer)
• Balazs Gerofi, RIKEN (Component Development Representative)
• Jennifer Green, Los Alamos National Laboratory (Maintainer)
• Eric Van Hensbergen, ARM (Maintainer, Testing Coordinator)
• Douglas Jacobsen, NERSC (End-User/Site Representative)
• Chulho Kim, Lenovo (Maintainer)
• Greg Kurtzer, Lawrence Berkeley National Labs (Component Development Representative)
• Thomas Moschny, ParTec (Maintainer)
• Karl W. Schulz, Intel (Project Lead, Testing Coordinator)
• Derek Simmel, Pittsburgh Supercomputing Center (End-User/Site Representative)
• Thomas Sterling, Indiana University (Component Development Representative)
• Craig Stewart, Indiana University (End-User/Site Representative)
• Scott Suchyta, Altair (Maintainer)
• Nirmala Sundararajan, Dell (Maintainer)

Governance details: https://github.com/openhpc/ohpc/wiki/Governance-Overview (Courtesy of OpenHPC*)

Intel® HPC Orchestrator Product Family (product / description / target customer)
• CUSTOM: highly modular, hierarchically configured stack (GA today); for high-end HPC users
• ADVANCED: modular stack with a flat configuration; for technical & commercial users with vertical HPC applications
• TURNKEY: turnkey stack with a flat configuration, where a variety of stacks may be available as offerings; for HPC ISVs, SIs and customers looking for a turnkey solution for on-prem and CSP access methodologies, appliances

OpenHPC Community Model
The OpenHPC community takes RRVs* of components from upstream communities (GNU, Linux, file systems, resource managers, and the base HPC stack), integrates and tests them, and makes the resulting stacks available to OEM stacks, university stacks, and Intel® HPC Orchestrator. OSS cadence: 6~12 months. (*RRV: reliable and relevant version)

List of components from upstream communities: Warewulf, Ganglia, Lustre, Munge, EasyBuild, Slurm, Nagios, GNU, OpenMPI, NumPy, SciPy, R-Project, Spack, Lmod, Tug, Adios, Boost, FFTW, HDF Group, Hypre, LaTeX, Conduse, Msweet, LosF, LuaJIT, mpiP, Mumps, MVAPICH, NetCDF, OpenBLAS, PAPI, Pdsh, PDT, PETSc, LAPACK, SuperLU, Trilinos, Valgrind.

Continuous Integration Environment
• Build environment & source control
• Bug tracking
• User & dev forums
• Collaboration tools
• Validation environment

Intel® HPC Orchestrator Value Proposition
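To connect the component list above to the agenda items "Modules Support" and "Workload Management": on a cluster built from this stack, an end user typically combines Lmod and Slurm roughly as follows. This is an illustrative sketch only; the module names, node counts, and job script are assumptions that vary by site and stack version.

    # Hypothetical end-user session on the built cluster (all names are assumptions)
    module avail                         # list the toolchains Lmod publishes
    module load gnu openmpi              # select the GNU compiler + OpenMPI toolchain
    mpicc -O2 -o hello_mpi hello_mpi.c   # build against the loaded MPI

    # Run through the Slurm workload manager (partition and node counts are site-specific)
    srun -N 2 -n 16 ./hello_mpi          # interactive launch across 2 nodes
    sbatch job.slurm                     # or submit a batch script for later execution

Site-provided toolchains, such as the Intel® Parallel Studio XE compilers mentioned in the agenda, are typically exposed through the same module hierarchy alongside the open-source toolchains.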