Install Your Scientific Software Stack Easily with Spack

Les mardis du développement technologique
Florent Pruvost (SED)

Outline
1. Context
2. Features overview
3. In practice
4. Some feedback

1 Context

A scientific software stack
• modular, several languages, different build systems
  - typical layers: application, linear algebra, optimized kernels, graph processing, runtime systems, programming paradigms, miscellaneous utilities
• it is difficult to be an expert in the whole chain

An example: Aerosol
• a finite element library developed by the Inria teams Cagire and Cardamom
• its dependency graph includes PaMPA, PaStiX, HDF5, XML2, PT-SCOTCH, StarPU, BLAS/LAPACK, MPI and CUDA

Constraints
• R&D ⇒ develop prototypes
  - test many different builds
• computing/data-intensive application ⇒ HPC environment
  - different machines, OSes, environments
  - remote connection, no administrator rights
• performance ⇒ a highly tuned installation
  - well-chosen components
  - specific build options
• reproducibility
  - control the environment
  - characterize what influences the build

Wish list
• a simple process to install a default version
• a flexible way to choose build variants
  - choose compiler and software versions
  - enable components, e.g. MPI: yes/no
  - build options, e.g. --enable-debug
• be able to install it on a remote machine (supercomputer)
  - no root permissions
  - no internet access (not necessarily)
• be able to reproduce experiments
  - non-destructive installation
  - control the environment and third-party libraries

Traditional tools: binary package managers
• dpkg (APT), RPM, pacman, etc.
• designed to manage a single stack
• install one version of each package in a single prefix (/usr)
  - root permission required
• seamless upgrades to a stable, well-tested stack

Traditional tools: port systems
• BSD Ports, Portage, MacPorts, Homebrew, Gentoo, etc.
• minimal support for builds parameterized by compilers or dependency versions

Traditional tools: virtual machines and Linux containers
• Docker, etc.
• containers let users build environments for different applications
• they do not solve the build problem (someone has to build the image)
• performance, security, and upgrade issues prevent widespread HPC deployment

Tools designed for scientific applications
A short list to focus on: Nix/Guix, EasyBuild, Spack
• Common features:
  - build from sources
  - own directory structure
  - hash parameterized by versions, dependencies, etc.
  - no need to be root to install packages
  - useful packages already available: MPI, BLAS/LAPACK, FFTW, etc.
• Differences:
  - robustness vs. flexibility
  - maturity
  - languages and technical details

Nix/Guix: functional languages (Guile)
• pros:
  - very precise dependency tracking
  - cryptographic hashes determine the exact build- and run-time dependencies
  - safe upgrades
  - nice for reproducibility
• cons:
  - administrator rights are required to install Nix/Guix
  - limited to open-source software: no Intel compilers, no CUDA
  - how to deal with combinatorial builds?
  - multi-compiler and multi-version support?
  - virtual dependencies?
  - syntax for parameterization?

EasyBuild: Python
• pros:
  - designed for installation on HPC systems
  - supports proprietary software such as Intel compilers and CUDA
  - can reuse what is already installed, cf. the dummy toolchain
• cons:
  - cannot deal with combinatorial builds
  - requires a file per configuration of a stack
  - limited command-line interface

Spack: Python 2
• pros:
  - deals with combinatorial builds
  - a couple of packages = thousands of builds available
  - nice command-line syntax to tune the stack parameters
  - can reuse what is already installed, cf. configuration files
  - no need to be root
  - no need for internet access if tarballs are available locally
• cons:
  - Spack is currently alpha software (young project)
  - package parameters have changed a lot
  - new features may require rewriting packages
  - not all parameters that affect a build are controlled internally (external compilers and libraries), so not all builds are safe

2 Features overview

Overview
• object-oriented Python 2
• Unix systems (Windows is not an OS for HPC)
• ~500 available packages
• open source, open community (GitHub pull requests)
  - active since Feb 10, 2013
  - mainly developed by T. Gamblin and colleagues at LLNL
  - 95 contributors, 154 forks
  - one release every 6 months

Handles combinatorial software complexity
• each unique dependency graph is a unique configuration
• each configuration is installed in a unique directory
• the hash of the DAG is appended to the install prefix
• installed packages automatically find their dependencies
  - Spack embeds RPATHs in binaries (compiler wrappers)
  - no need to use modules or to set LD_LIBRARY_PATH
  - things work the way you built them

Provides a spec syntax to describe customized DAG configurations
• each expression is a spec for a particular configuration
  - each clause adds a constraint to the spec
  - constraints are optional: specify only what you need
  - customize the install on the command line!
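A hedged sketch of what such specs look like on the command line (hwloc appears later in the talk; the version numbers, compiler version, and dependency constraint below are purely illustrative):

  $ spack install hwloc                              # default configuration
  $ spack install hwloc@1.11.2                       # constrain the version
  $ spack install hwloc@1.11.2 %gcc@5.3.0            # constrain the compiler
  $ spack install hwloc +cuda                        # enable a named variant
  $ spack install hwloc@1.11.2 ^libpciaccess@0.13.4  # constrain a dependency's version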
• the syntax abstracts details in the common case
  - and makes parameterization by version, compiler, and options easy when necessary

Spack specs can constrain versions of dependencies
• Spack ensures one configuration of each library per DAG
  - consistency
  - the user does not need to know the DAG structure, only the dependency names
• Spack can ensure that builds use the same compiler, or you can mix compilers
  - work is ongoing to ensure ABI compatibility when compilers are mixed

Spack handles API incompatibility
• mpi is a virtual dependency
• install the same package built with two different MPI implementations
• or let Spack choose the MPI implementation, as long as it provides the MPI-2 interface
  (the corresponding commands are sketched at the end of this section)

Spack packages are simple Python scripts
(a sketch of such a package file is given at the end of this section)

Dependencies may be optional
• versions can be tarballs or VCS repositories (git, svn, hg)
• the user can define named variants and use them to install:
  $ spack install hwloc +cuda
  $ spack install hwloc -cuda
• dependencies may also be optional according to other conditions
  - e.g. gcc's dependency on mpc from version 4.5 on

Concretization fills in missing configuration details when the user is not explicit

Default behaviour is configurable
• configure your preferred compilers, versions, dependencies, and variants
• directly in the package, e.g. prefer Python 2.7.11
• or for a specific machine/environment, edit ~/.spack/packages.yaml
  (an example file is sketched at the end of this section)

Spack builds in an isolated environment
• the build process is forked, which isolates the environment of each build
• compiler wrappers add include, lib, and RPATH flags
  - they ensure that dependencies are found automatically

3 In practice

Setup
• entry point on GitHub: https://github.com/LLNL/spack
• read the documentation, at least Getting Started: http://spack.readthedocs.io/en/latest/index.html
  $ sudo apt install python
  $ git clone https://github.com/LLNL/spack.git
  $ . spack/share/spack/setup-env.sh
  $ spack install gcc

Check compilers
• check the compilers found automatically:
  $ spack compiler list
• get information about a compiler:
  $ spack compiler info gcc

Configure compilers
• add compilers installed in exotic paths:
  $ spack compiler find /home/jdoe/intel/bin
• remove unwanted compilers:
  $ spack compiler rm clang
• the compiler configuration can also be edited by hand:
  $ vi ~/.spack/compilers.yaml
  (an example layout is sketched at the end of this section)

Install
• list the available packages:
  $ spack list
• get information about a package:
  $ spack info hwloc
• check the concrete stack that would be installed:
  $ spack spec hwloc
• install a package:
  $ spack install [-v] hwloc
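The commands shown on the "Spack handles API incompatibility" slide are not in this extraction; the following is a sketch of the usual pattern, assuming hdf5 as the MPI-dependent package:

  $ spack install hdf5 ^openmpi    # build hdf5 against OpenMPI
  $ spack install hdf5 ^mpich      # the same package against MPICH
  $ spack install hdf5 ^mpi@2:     # let Spack pick any provider of at least the MPI-2 interface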
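The package file shown on the "Spack packages are simple Python scripts" and "Dependencies may be optional" slides is also missing from this extraction. Below is a minimal, simplified sketch in the style of the talk's hwloc example, not the actual builtin recipe: the checksum is omitted and the configure flag is illustrative.

  from spack import *   # package DSL import path used at the time of the talk

  class Hwloc(Package):
      """Illustrative, simplified recipe."""
      homepage = "https://www.open-mpi.org/projects/hwloc"
      url = "https://www.open-mpi.org/software/hwloc/v1.11/downloads/hwloc-1.11.2.tar.gz"

      version("1.11.2")   # real recipes also pin a checksum here

      # a named variant, toggled with +cuda / -cuda on the command line
      variant("cuda", default=False, description="Build with CUDA support")

      # dependency pulled in only when the variant is enabled
      depends_on("cuda", when="+cuda")

      # a conditional dependency on a version range, like gcc needing mpc from 4.5 on:
      #   depends_on("mpc", when="@4.5:")

      def install(self, spec, prefix):
          options = ["--prefix={0}".format(prefix)]
          if "+cuda" in spec:
              options.append("--enable-cuda")   # flag name is illustrative
          configure(*options)
          make()
          make("install")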
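As an example of the preferences mentioned on the "Default behaviour is configurable" slide, a hedged sketch of ~/.spack/packages.yaml (the compiler and provider choices are assumptions, not taken from the talk):

  packages:
    python:
      version: [2.7.11]          # prefer Python 2.7.11, as on the slide
    all:
      compiler: [gcc@5.3.0]      # preferred compiler (version illustrative)
      providers:
        mpi: [openmpi, mpich]    # preferred providers of the virtual mpi package
        blas: [openblas]
        lapack: [openblas]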
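Finally, a sketch of the ~/.spack/compilers.yaml layout that spack compiler find generates; the exact schema has changed across Spack releases, and the compiler version and paths below are assumptions:

  compilers:
  - compiler:
      spec: gcc@5.3.0
      operating_system: ubuntu16.04
      modules: []
      paths:
        cc:  /usr/bin/gcc-5
        cxx: /usr/bin/g++-5
        f77: /usr/bin/gfortran-5
        fc:  /usr/bin/gfortran-5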