Designing and Building Efficient HPC Cloud with Modern Networking Technologies on Heterogeneous HPC Clusters


Designing and Building Efficient HPC Cloud with Modern Networking Technologies on Heterogeneous HPC Clusters

Dissertation

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By Jie Zhang, M.S.

Graduate Program in Computer Science and Engineering, The Ohio State University, 2018

Dissertation Committee: Dr. Dhabaleswar K. Panda, Advisor; Dr. Christopher Stewart; Dr. P. Sadayappan; Dr. Yang Wang; Dr. Xiaoyi Lu

© Copyright by Jie Zhang 2018

Abstract

Cloud computing platforms (e.g., Amazon EC2 and Microsoft Azure) have been widely adopted by many users and organizations due to their high availability and scalable computing resources. Using virtualization technology, VM or container instances in a cloud can be constructed on bare-metal hosts for users to run their systems and applications whenever they need computational resources. This has significantly increased the flexibility of resource provisioning in clouds compared to traditional resource management approaches. Cloud computing has recently gained momentum in HPC communities, which raises a broad challenge: how to design and build efficient HPC clouds with modern networking technologies and virtualization capabilities on heterogeneous HPC clusters?

Through the convergence of HPC and cloud computing, users can get desirable features such as ease of system management, fast deployment, and resource sharing. However, many HPC applications running on the cloud still suffer from fairly low performance, more specifically, degraded I/O performance from virtualized I/O devices. Recently, a hardware-based I/O virtualization standard called Single Root I/O Virtualization (SR-IOV) has been proposed to address this problem, enabling near-native I/O performance.
However, SR-IOV lacks locality-aware communication support, so communications across co-located VMs or containers cannot leverage shared-memory-backed communication mechanisms. To deliver high performance to end HPC applications in the HPC cloud, we present a high-performance locality-aware and NUMA-aware MPI library over SR-IOV enabled InfiniBand clusters, which dynamically detects locality information in VM, container, or even nested cloud environments and coordinates data movement appropriately. The proposed design improves the performance of NAS by up to 43% over the default SR-IOV based scheme across 32 VMs, while incurring less than 9% overhead compared with native performance. As Singularity is one of the most attractive container technologies for building HPC clouds, we also evaluate its performance across several dimensions, including processor architectures, advanced interconnects, memory access modes, and virtualization overhead. Singularity shows very little overhead for running MPI-based HPC applications.

SR-IOV provides efficient sharing of high-speed interconnect resources and achieves near-native I/O performance; however, SR-IOV based virtual networks prevent VM migration, which is an essential virtualization capability for high flexibility and availability. Although several initial solutions have been proposed in the literature, they carry many restrictions, such as depending on specific network adapters and/or hypervisors, which limit their usage scope in HPC environments. In this thesis, we propose a high-performance, hypervisor-independent, and InfiniBand driver-independent VM migration framework for MPI applications on SR-IOV enabled InfiniBand clusters, which not only achieves fast VM migration but also guarantees high performance for MPI applications during migration in the HPC cloud.
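The locality-aware channel selection described above can be illustrated with a minimal sketch. This is not the dissertation's actual MPI runtime code; the function, channel names, and (host, NUMA-node) tuple format are hypothetical, assuming only that each process learns its peers' placement at startup:

```python
def select_channel(me, peer):
    """Pick a communication channel from locality information.

    me and peer are hypothetical (host_id, numa_node) tuples that a
    locality-aware runtime could gather when processes start up.
    """
    my_host, my_numa = me
    peer_host, peer_numa = peer
    if my_host != peer_host:
        return "sr-iov"          # cross-host: go through the SR-IOV virtual function
    if my_numa == peer_numa:
        return "shm-intra-numa"  # co-located, same NUMA domain: cheapest shared memory
    return "shm-inter-numa"      # co-located, different NUMA domains: shared memory, NUMA-aware copy

# Co-located peers bypass the virtual NIC entirely:
print(select_channel(("node01", 0), ("node01", 0)))  # shm-intra-numa
print(select_channel(("node01", 0), ("node02", 1)))  # sr-iov
```

The point of the design is that this decision happens dynamically inside the runtime, so applications written against standard MPI benefit without modification.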
The evaluation results indicate that our proposed design can completely hide the migration overhead by overlapping computation with migration.

In addition, resource management and scheduling systems such as Slurm and PBS are widely used on modern HPC clusters. To build efficient HPC clouds, some of the critical HPC resources, like SR-IOV enabled virtual devices and inter-VM shared memory (IVShmem) devices, need to be properly enabled and isolated among VMs. We thus propose a novel framework, Slurm-V, which extends Slurm with virtualization-oriented capabilities to support efficiently running multiple concurrent MPI jobs on HPC clusters. The proposed Slurm-V framework shows good scalability and the ability to efficiently run concurrent MPI jobs on SR-IOV enabled InfiniBand clusters. To the best of our knowledge, Slurm-V is the first attempt to extend Slurm to support running concurrent MPI jobs with isolated SR-IOV and IVShmem resources.

On heterogeneous HPC clusters, GPU devices have achieved significant success for parallel applications. In addition to highly optimized computation kernels on GPUs, the cost of data movement on GPU clusters plays a critical role in delivering high performance to end applications. Our studies show a significant demand for high-performance, cloud-aware GPU-to-GPU communication schemes that deliver near-native performance on clouds. We propose C-GDR, high-performance cloud-aware GPUDirect communication schemes on RDMA networks. C-GDR allows the communication runtime to detect process locality, GPU residency, NUMA architecture information, and communication patterns, enabling intelligent and dynamic selection of the best communication and data movement schemes on GPU-enabled clouds. Our evaluations show C-GDR can outperform the default scheme by up to 26% on HPC applications.

To my family, friends, and mentors.
Acknowledgments

This work was made possible through the love and support of several people who stood by me through the many years of my doctoral program and all through my life leading to it. I would like to take this opportunity to thank all of them.

My family - my parents, Chong Zhang and Jinchuan Li, who have always given me complete freedom and love to let me go after my dreams and unconditional support to let me venture forth; my uncle, Pengxi Li, who has always inspired and encouraged me to pursue higher goals; my grandmother, Aixiang Yu, who has stood by me and prayed for me at all times.

My fiancée, Hongjin Wang, for her love, support, and understanding. I admire and respect her for the many qualities she possesses, particularly her great courage and determined mind in facing new challenges in her career.

My advisor, Dr. Dhabaleswar K. Panda, for his guidance and support throughout my doctoral program. I have been able to grow, both personally and professionally, through my association with him. He works hard and professionally, and I can deeply feel his respect for the career he has been pursuing. Even after knowing him for six years, I am still amazed by the energy and commitment he has towards research.

My collaborators - I would like to express my appreciation to my collaborator, Dr. Xiaoyi Lu. Through six years of collaboration with him, I have witnessed his attitude and passion towards science and research: he continually and convincingly conveyed a spirit of exploration in regard to research and scholarship, and an excitement in regard to teaching. Without his guidance and persistent help, this dissertation would not have been possible.

My friends - I am very happy to have met and become friends with Jithin Jose, Hari Subramoni, Mingzhe Li, Rong Shi, Ching-Hsiang Chu, Dipti Shankar, Jeff Smith, Jonathan Perkins, Mark Arnold, Gugnani Shashank, and Haiyang Shi. This work would remain incomplete without their support and contribution.
They have given me memories that I will cherish for the rest of my life. I would also like to thank all my colleagues, who have helped me in one way or another throughout my graduate studies.

Vita

2004-2008 ........ B.S., Computer Science, Tianjin University of Technology and Education, China
2008-2011 ........ M.S., Computer Science, Nankai University, China
2012-Present ..... Ph.D., Computer Science and Engineering, The Ohio State University, U.S.A.

Publications

Jie Zhang, Xiaoyi Lu and Dhabaleswar K. Panda, C-GDR: High-Performance Cloud-aware GPUDirect MPI Communication Schemes on RDMA Networks (Under Review)

Jie Zhang, Xiaoyi Lu and Dhabaleswar K. Panda, Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds? The 10th International Conference on Utility and Cloud Computing (UCC '17), Dec 2017, Best Student Paper Award

Jie Zhang, Xiaoyi Lu and Dhabaleswar K. Panda, High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters, The 31st IEEE International Parallel and Distributed Processing Symposium (IPDPS '17), May 2017

Jie Zhang, Xiaoyi Lu and Dhabaleswar K. Panda, Designing Locality and NUMA Aware MPI Runtime for Nested Virtualization based HPC Cloud with SR-IOV Enabled InfiniBand, The 13th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '17), April 2017

Jie Zhang, Xiaoyi Lu, Sourav Chakraborty and Dhabaleswar K. Panda, SLURM-V: Extending SLURM for Building Efficient HPC Cloud with SR-IOV and IVShmem, The 22nd International European Conference on Parallel and Distributed Computing (Euro-Par '16), Aug 2016

Jie Zhang, Xiaoyi Lu and Dhabaleswar K. Panda, High Performance MPI Library for Container-based HPC Cloud on InfiniBand Clusters, The 45th International