Bare Metal Provisioning to OpenStack Using xCAT

Jun Xie
Network Information Center, BUPT, Beijing, China
Email: [email protected]

Yujie Su, Zhaowen Lin, Yan Ma and Junxue Liang
Network Information Center, BUPT, Beijing, China
Email: {suyj, linzw, mayan, liangjx}@bupt.edu.cn

JOURNAL OF COMPUTERS, VOL. 8, NO. 7, JULY 2013. © 2013 ACADEMY PUBLISHER. doi:10.4304/jcp.8.7.1691-1695

Abstract—Cloud computing relies heavily on virtualization technologies. This also applies to OpenStack, currently the most popular IaaS platform, which mainly provides virtual machines to cloud end users. However, virtualization unavoidably brings some performance penalty compared with bare metal provisioning. In this paper, we present our approach for extending OpenStack to support bare metal provisioning through xCAT (Extreme Cloud Administration Toolkit). As a result, the cloud platform can be deployed to provide both virtual machines and bare metal machines. This paper first explains why bare metal machines are desirable in a cloud platform, then briefly describes OpenStack and the xCAT driver, which makes xCAT work with the rest of the OpenStack platform in order to provision bare metal machines to cloud end users. Finally, it presents a performance comparison between a virtualized and a bare-metal environment.

Index Terms—cloud computing, IaaS, OpenStack, xCAT, bare metal provisioning

I. BACKGROUND AND MOTIVATION

A. Background

OpenStack [1] is an Infrastructure as a Service (IaaS) cloud computing project started by Rackspace Cloud and NASA in 2010. Currently more than 200 companies have joined the project, including Intel, IBM, Dell and Red Hat. It is free open source software released under the Apache 2.0 license, and it is included and released in both the Ubuntu and Red Hat Linux distributions. The OpenStack project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. It delivers various components for a cloud infrastructure solution, which makes it something of an open-source counterpart to Amazon Web Services (AWS), the most successful cloud computing platform at present.

In OpenStack, cloud providers most basically offer virtual machines. The virtual machines are run as guests by a hypervisor, such as Xen or KVM. Management of pools of hypervisors by the cloud operational support system leads to the ability to scale to support a large number of virtual machines. In this sense, OpenStack is mainly built on virtualization technologies (Xen, KVM, etc.).

B. Motivation

The adoption of virtualization in cloud environments has brought great benefits, but not without attendant problems, such as unresponsive virtualized systems, crashed virtualized servers, misconfigured virtual hosting platforms, performance tuning, and erratic performance metrics, among others. Consequently, we might want to run compute jobs on "bare metal", without a virtualization layer. Consider the following cases:

Case 1: Your OpenStack cloud environment contains boards with Tilera processors (non-x86), and Tilera does not currently support virtualization technologies like KVM. Obviously, a Tilera board cannot be provisioned to an end user using traditional virtualization technologies. Thus, we need to add a bare-metal provisioning driver to OpenStack in order to provision a Tilera board, or other non-virtualizable boards, in an OpenStack environment.

Case 2: You want to achieve higher performance in the cloud platform, as virtualization obviously brings some performance penalty.

Case 3: You have many relatively old, low-performance machines (e.g., 1 CPU, 1 GB memory) in the data center. If you still use virtualization technologies and provide virtual machines to end users, the performance penalty brought by virtualization is too much for such machines. Bare metal provisioning is optimal in this scenario.

From these cases, among others, we conclude that it would be great for OpenStack to support bare-metal provisioning in addition to traditional virtualization, because it could then offer cloud end users a hybrid IaaS platform with the choice of both virtual machines and bare metal machines.

The intention is ultimately to support different provisioning back-ends for different bare-metal architectures. Several provisioning tools are available, such as Dell's Crowbar [2], an extension of Opscode's Chef system [3], Argonne National Lab's Heckle [4], xCAT [5], Perceus [6], OSCAR [7], and ROCKS [8]. These tools provide different bare-metal provisioning, deployment, resource management, and authentication methods for different architectures, and they use standard interfaces such as PXE (Preboot Execution Environment) [9] boot and IPMI (Intelligent Platform Management Interface) [10] power cycle management. For boards that do not support PXE and IPMI, such as the TILEmpower board, specific back-ends must be written.

In this paper, we use xCAT to extend OpenStack, the most popular IaaS platform at present. xCAT (Extreme Cloud Administration Toolkit) is an open-source distributed computing management tool. It is a powerful tool for remotely controlling machines and can also network-install a machine, automated and unattended. xCAT supports the usual operating systems, including SLES, OpenSUSE, RHEL, CentOS, Windows, etc. As for hardware, it supports x86, iDataPlex, IBM's System x and System p, and most IPMI-based machines. To support bare metal provisioning in OpenStack through xCAT, we implemented an xCAT driver.

II. OPENSTACK

OpenStack is a collection of open-source software components designed for building public and private clouds. OpenStack provides similar functionality to other open-source cloud projects such as Eucalyptus [11], OpenNebula [12], and Nimbus [13]. The main components include OpenStack Compute (codenamed Nova), OpenStack Image Service (codenamed Glance), and OpenStack Object Storage (codenamed Swift). OpenStack is implemented as a set of Python services that communicate with each other via a message queue and a database. Fig. 1 shows a conceptual overview of the OpenStack architecture, with the OpenStack Compute components bolded and the OpenStack Glance components (which store and manage all the images) shown in a lighter color.

Figure 1. OpenStack Architecture. Credit: Ken Pepple [14]

The nova-api service is responsible for fielding resource requests from users. Currently, OpenStack implements two APIs: the Amazon Elastic Compute Cloud (EC2) API [15] as well as its own (OpenStack) API.

The nova-scheduler service is mainly responsible for scheduling compute resource requests on the available compute nodes.

The nova-compute service is responsible for starting and stopping all the compute instances.

The nova-network service is responsible for managing IP addresses and virtual LANs for the instances.

The nova-volume service is responsible for managing network drives that can be mounted to running instances.

The queue is a message queue implemented on top of RabbitMQ [16]; it is used to implement remote procedure calls as the communication mechanism among the services.

The dashboard implements a web-based user interface.

The database is a traditional relational database, such as MySQL, used to store persistent data shared across the components.
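To make the request path concrete, the following minimal sketch shows how an end user might ask Nova for an instance through the OpenStack API using the python-novaclient library: nova-api fields the request, nova-scheduler picks a compute node, and nova-compute boots the instance. The credentials, endpoint, flavor, and image names are placeholders, and exact client signatures vary across OpenStack releases, so this should be read as an illustration rather than a recipe.

    # Minimal sketch: an end user requesting an instance through the OpenStack API.
    # Assumes the python-novaclient library; credentials, endpoint, and names below
    # are placeholders, and exact signatures vary across OpenStack releases.
    from novaclient import client

    nova = client.Client("2", "demo_user", "demo_password",
                         "demo_project", "http://controller:5000/v2.0/")

    flavor = nova.flavors.find(name="m1.small")    # hardware template
    image = nova.images.find(name="ubuntu-12.04")  # OS image registered in Glance
    # nova-api receives the request, nova-scheduler picks a host,
    # and nova-compute (via its configured driver) boots the instance.
    server = nova.servers.create(name="demo-instance", image=image, flavor=flavor)
    print(server.id, server.status)

With a bare-metal back-end such as the xCAT driver described in Section IV, the same request path ends in a physical machine being installed and booted rather than a virtual machine being spawned.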
III. XCAT

xCAT (Extreme Cloud Administration Toolkit) is an open-source, scalable, distributed computing management and provisioning tool that provides a unified interface for hardware control, discovery, and diskful/diskless OS deployment. This robust toolkit can be used for the deployment and administration of huge clusters.

xCAT has a typical client/server architecture. The heart of the architecture is the xCAT daemon (xcatd) on the management node. It receives requests from the client, validates them, and then invokes the operation. The xcatd daemon also receives status and inventory information from the nodes as they are discovered and installed/booted. Thus, xcatd is the worker that really manages all the nodes in the cluster.

Figure 2. xCAT Architecture. Credit: xCAT [17]
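In practice, an administrator (or a driver acting on the administrator's behalf) operates nodes by issuing xCAT client commands, which xcatd validates and executes. The sketch below shows a typical unattended network-install sequence driven from the management node. The command names (nodeset, rsetboot, rpower, nodestat) are standard xCAT commands, but the node and osimage names are placeholders, and this is only an illustration, not the driver code described in the next section.

    # Illustrative sketch of driving xCAT from the management node with Python.
    # The xCAT commands used (nodeset, rsetboot, rpower, nodestat) are standard;
    # the node name and osimage name are placeholders.
    import subprocess

    def xcat(*args):
        """Run an xCAT client command and return its standard output."""
        result = subprocess.run(args, check=True, capture_output=True, text=True)
        return result.stdout

    node = "node01"                              # placeholder node name
    osimage = "rhels6.3-x86_64-install-compute"  # placeholder osimage name

    xcat("nodeset", node, "osimage=" + osimage)  # stage the unattended network install
    xcat("rsetboot", node, "net")                # tell the BMC to PXE-boot next time
    xcat("rpower", node, "boot")                 # power-cycle the node to start the install
    print(xcat("nodestat", node))                # check installation/boot status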
IV. XCAT INTEGRATION INTO OPENSTACK

To support bare metal provisioning in OpenStack through xCAT, we implemented the xCAT driver, which receives requests from OpenStack Nova, handles the messages, and finally transfers the requests to xCAT.

A. The Logical Architecture

OpenStack uses the libvirt library to manage different virtualization technologies. For bare metal provisioning, we need a bare-metal driver as an alternative to the libvirt driver. For a compute node with the bare metal driver, we set compute_driver to nova.virt.baremetal.driver.BareMetalDriver in its nova configuration file (/etc/nova/nova.conf by default).

As depicted in Fig. 3, when an end user requests a bare-metal machine (e.g., a System x machine), the scheduler chooses a nova compute node whose compute_driver is nova.virt.baremetal.driver.BareMetalDriver and whose baremetal_driver is xcat. The xCAT bare-metal driver then takes care of everything and finally starts the system with the specified operating system. If what the user requested is a Tilera machine, then the scheduler will choose a nova compute

Figure 4. Physical Architecture
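For reference, the compute-node settings described in Section IV.A can be sketched as a short nova.conf excerpt. The option names follow the description above; exact option names and placement vary between OpenStack releases, so this is an illustration of the idea rather than a verbatim configuration.

    # /etc/nova/nova.conf (excerpt) on a bare-metal compute node, per Section IV.A.
    # Option names follow the paper's description; exact syntax varies by release.
    compute_driver = nova.virt.baremetal.driver.BareMetalDriver
    baremetal_driver = xcat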