Spack Package Repositories


Managing HPC Software Complexity with Spack
Full-day Tutorial
1st Workshop on NSF and DOE High Performance Computing Tools
July 10, 2019, Eugene, Oregon

The most recent version of these slides can be found at:
https://spack.readthedocs.io/en/latest/tutorial.html

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 (LLNL-PRES-806064). Lawrence Livermore National Security, LLC. github.com/spack/spack

Tutorial Materials

Download the latest version of slides and handouts at: bit.ly/spack-tutorial
Slides and hands-on scripts are in the "Tutorial" section of the docs.
§ Spack GitHub repository: http://github.com/spack/spack
§ Spack documentation: http://spack.readthedocs.io
Tweet at us! @spackpm

Tutorial Presenters

Todd Gamblin and Greg Becker

Software complexity in HPC is growing

[Figure: dependency graph of Nalu: Generalized Unstructured Massively Parallel Low Mach Flow]
[Figure: dependency graphs of Nalu and dealii: C++ Finite Element Library]
[Figure: dependency graphs of Nalu, dealii, and R Miner: R Data Mining Library. Each successive graph is far larger than the last.]
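Spack can render dependency graphs like these itself, so they are easy to reproduce; a minimal sketch (the spack graph command is Spack's own, but the exact invocations and output file name here are illustrative, and the second form assumes Graphviz's dot is installed):

$ spack graph nalu                                # ASCII rendering of the dependency DAG
$ spack graph --dot nalu | dot -Tpdf > nalu.pdf   # Graphviz rendering of the same DAG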
The complexity of the exascale ecosystem threatens productivity.

15+ applications
x 80+ software packages
x 5+ target architectures/platforms (Xeon, Power, KNL, NVIDIA, ARM, laptops?)
x up to 7 compilers (Intel, GCC, Clang, XL, PGI, Cray, NAG)
x 10+ programming models (OpenMPI, MPICH, MVAPICH, OpenMP, CUDA, OpenACC, Dharma, Legion, RAJA, Kokkos)
x 2-3 versions of each package
+ external dependencies
= up to 1,260,000 combinations (15 x 80 x 5 x 7 x 10 x 3 = 1,260,000)!

§ Every application has its own stack of dependencies.
§ Developers, users, and facilities dedicate (many) FTEs to building & porting.
§ Often trade reuse and usability for performance.

We must make it easier to rely on others' software!

What is the "production" environment for HPC?

§ Someone's home directory?
§ LLNL? LANL? Sandia? ANL? LBL? TACC?
— Environments at large-scale sites are very different.
§ Which MPI implementation?
§ Which compiler?
§ Which dependencies?
§ Which versions of dependencies?
— Many applications require specific dependency versions.

Real answer: there isn't a single production environment or a standard way to build. Reusing someone else's software is HARD.

Spack is a flexible package manager for HPC

§ How to install Spack:
$ git clone https://github.com/spack/spack
$ . spack/share/spack/setup-env.sh
§ How to install a package:
$ spack install hdf5
§ HDF5 and its dependencies are installed within the Spack directory.
§ Unlike typical package managers, Spack can also install many variants of the same build:
— different compilers
— different MPI implementations
— different build options

Get Spack! http://github.com/spack/spack @spackpm
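Variants are selected by appending constraints to the spec on the command line; a short sketch (the spec syntax is Spack's own, but the particular compilers, MPI implementation, and versions are illustrative):

$ spack install hdf5 %clang              # same package, different compiler
$ spack install hdf5 ^mvapich2           # same package, built against a different MPI
$ spack install hdf5 ~mpi                # different build option: disable the mpi variant
$ spack install [email protected] %[email protected]       # pin the package and compiler versions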
Who can use Spack?

People who want to use or distribute software for HPC!
1. End users of HPC software — install and run HPC applications and tools
2. HPC application teams — manage third-party dependency libraries
3. Package developers — people who want to package their own software for distribution
4. User support teams at HPC centers — people who deploy software for users at large HPC sites

Spack is used worldwide!

Over 3,200 software packages. Over 1,100 monthly active users (on the docs site). Over 400 contributors from labs, academia, and industry.
[Plot: sessions on spack.readthedocs.io for one month]

Active users on spack.readthedocs.io

[Figure: active users on the documentation site]

GitHub stars: Spack and some other popular HPC projects

[Figure: stars over time, and stars per day (same data, 60-day window)]

Spack is being used on many of the top HPC systems

§ At HPC sites for software stacks + modules
— Reduced Summit deploy time from 2 weeks to 12 hrs.
— EPFL deploys its software stack with Jenkins + Spack
— NERSC, LLNL, ANL, other US DOE sites
— SJTU in China
§ Within ECP as part of their software release process
— ECP-wide software distribution
— SDK workflows
§ Within the High Energy Physics (HEP) community
— HEP (Fermi, CERN) have contributed many features to support their workflow
§ Many others
[Photos: Summit (ORNL), Sierra (LLNL), Cori (NERSC), SuperMUC-NG (LRZ), and 富岳 Fugaku, formerly Post-K (RIKEN)]

Contributions to Spack continue to grow!

§ In November 2015, LLNL provided most of the contributions to Spack.
§ Since then, we've gone from 300 to over 3,200 packages.
§ Most packages are from external contributors!
§ Many contributions in core, as well.
§ We are committed to sustaining Spack's open source ecosystem!

Spack v0.12.1 is the latest release

§ New features:
1. Spack environments (covered today)
2. spack.yaml and spack.lock files for tracking dependencies (covered today; a sketch of a spack.yaml file appears at the end of this section)
3. Custom configurations via command line (covered today)
4. Better support for linking Python packages into view directories (pip in views)
5. Support for uploading build logs to CDash
6. Packages have more control over compiler flags via flag handlers
7. Better support for module file generation
8. Better support for Intel compilers, Intel MPI, etc.
9. Many performance improvements, improved startup time
§ Spack is now permissively licensed under Apache-2.0 or MIT (previously LGPL).
§ Over 2,900 packages (800 added since last year)
— That count is from November; the latest develop branch has over 3,200.

Related Work

Spack is not the first tool to automate builds; it was inspired by copious prior work.
1. "Functional" package managers
— Nix: https://nixos.org/
— GNU Guix: https://www.gnu.org/s/guix/
2. Build-from-source package managers
— Homebrew: http://brew.sh
— MacPorts: https://www.macports.org

Other tools in the HPC space:
§ EasyBuild: http://hpcugent.github.io/easybuild/
— An installation tool for HPC
— Focused on HPC system administrators; a different package model from Spack
— Relies on a fixed software stack; harder to tweak recipes for experimentation
§ Conda: https://conda.io
— A very popular binary package manager for data science
— Not targeted at HPC; generally unoptimized binaries

What's on the road map?

§ Spack stacks: better support for facility deployment
— Extension of environments to better support facility workflows
— Ability to easily specify a full facility software stack and module configuration
— Build all of it at once with a single command!
§ Better CI support
— GitLab integration with environments
§ Architecture-specific binaries
— Better provenance for builds
— Better support for matching optimized binary packages to machines
§ Better concretization
— Spack's dependency resolution can hit snags
— We need better support to enable the features above

More on the Spack road map tomorrow morning!

Tutorial Overview (times are estimates)

1. For everyone: Welcome & Overview (9:15 - 9:30)
2. For everyone: Basics of Building and Linking (9:30 - 9:45)
3.
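As noted under the v0.12.1 features above, environments are described by spack.yaml files; a minimal sketch (the spack/specs schema is Spack's, but the specs chosen here are illustrative):

spack:
  # Specs to concretize and install together when the environment is built
  specs:
  - hdf5+mpi
  - [email protected]

A file like this can be turned into an environment and built with commands along these lines (the environment name is illustrative):

$ spack env create myenv spack.yaml
$ spack env activate myenv
$ spack install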