xCAT3 Documentation
Release 2.16.3

IBM Corporation

Sep 23, 2021

Contents

1 Table of Contents
  1.1 Overview
  1.2 Install Guides
  1.3 Get Started
  1.4 Admin Guide
  1.5 Advanced Topics
  1.6 Questions & Answers
  1.7 Reference Implementation
  1.8 Troubleshooting
  1.9 Developers
  1.10 Need Help
  1.11 Security Notices

xCAT stands for Extreme Cloud Administration Toolkit.

xCAT offers complete management of clouds, clusters, HPC, grids, datacenters, renderfarms, online gaming infrastructure, and whatever tomorrow's next buzzword may be.

xCAT enables the administrator to:

1. Discover the hardware servers
2. Execute remote system management
3. Provision operating systems on physical or virtual machines
4. Provision machines in Diskful (stateful) and Diskless (stateless) modes
5. Install and configure user applications
6. Perform parallel system management
7. Integrate xCAT in Cloud

You've reached the xCAT documentation site; the main product page is http://xcat.org.

xCAT is an open source project hosted on GitHub. Go to GitHub to view the source, open issues, ask questions, and participate in the project.

Enjoy!

Chapter 1: Table of Contents

1.1 Overview

xCAT enables you to easily manage a large number of servers for any type of technical computing workload. xCAT is known for exceptional scaling, a wide variety of supported hardware, operating systems, and virtualization platforms, and complete "day0" setup capabilities.

1.1.1 Differentiators

• xCAT Scales
  Beyond all IT budgets, up to 100,000s of nodes with distributed architecture.
• Open Source
  Eclipse Public License. Support contracts are also available; contact IBM.
• Supports Multiple Operating Systems
  RHEL, SLES, Ubuntu, Debian, CentOS, Fedora, Scientific Linux, Oracle Linux, Windows, ESXi, RHEV, and more!
• Supports Multiple Hardware Platforms
  IBM Power, IBM Power LE, x86_64
• Supports Multiple Virtualization Infrastructures
  IBM PowerKVM, KVM, IBM z/VM, ESXi, Xen
• Supports Multiple Installation Options
  Diskful (install to hard disk), Diskless (runs in memory), Cloning
• Built-in Automatic Discovery
  No need to power on one machine at a time for discovery. Nodes that fail can be replaced and put back in action simply by powering the new one on.
• RESTful API
  Provides a REST API interface for third-party software to integrate with.

1.1.2 Features

1. Discover the hardware servers
   • Manually define
   • MTMS-based discovery
   • Switch-based discovery
   • Sequential-based discovery
2. Execute remote system management against the discovered server
   • Remote power control
   • Remote console support
   • Remote inventory/vitals information query
   • Remote event log query
3. Provision Operating Systems on physical (bare-metal) or virtual machines
   • RHEL
   • SLES
   • Ubuntu
   • Debian
   • Fedora
   • CentOS
   • Scientific Linux
   • Oracle Linux
   • PowerKVM
   • ESXi
   • RHEV
   • Windows
4. Provision machines in
   • Diskful (scripted install, clone)
   • Stateless
5. Install and configure user applications
   • During OS install
   • After the OS install
   • HPC products: GPFS, Parallel Environment, LSF, compilers, ...
   • Big Data: Hadoop, Symphony
   • Cloud: OpenStack, Chef
6. Parallel system management (see the sketch after this list)
   • Parallel shell (run shell commands against nodes in parallel)
   • Parallel copy
   • Parallel ping
7. Integrate xCAT in Cloud
   • OpenStack
   • SoftLayer
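To make the parallel management tools concrete, here is a minimal sketch using xCAT's standard parallel commands (xdsh, xdcp, pping). The node group name compute is an assumption; any node range defined in your xCAT database works, and exact flags may vary by release:

    # Run a shell command on all nodes in the "compute" group in parallel
    xdsh compute "uptime"

    # Copy a file from the Management Node to all nodes of the group in parallel
    xdcp compute /etc/ntp.conf /etc/ntp.conf

    # Parallel ping: check which nodes in the group are reachable
    pping compute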
1.1.3 Operating System & Hardware Support Matrix

          Power   Power LE   zVM   Power KVM   x86_64   x86_64 KVM   x86_64 ESXi
RHEL      yes     yes        yes   yes         yes      yes          yes
SLES      yes     yes        yes   yes         yes      yes          yes
Ubuntu    no      yes        no    yes         yes      yes          yes
CentOS    no      no         no    no          yes      yes          yes
Windows   no      no         no    no          yes      yes          yes

1.1.4 Architecture

The following diagram shows the basic structure of xCAT:

[Figure: basic structure of xCAT]

xCAT Management Node (xCAT Mgmt Node): The server where the xCAT software is installed, used as the single point from which to perform system management over the entire cluster. On this node, a database is configured to store the xCAT node definitions. Network services (dhcp, tftp, http, etc.) are enabled to respond to operating system deployment requests.

Service Node: One or more defined "slave" servers operating under the Management Node to assist in system management, reducing the load (CPU, network bandwidth) that would otherwise fall on a single Management Node. This concept is necessary when managing very large clusters.

Compute Node: The target servers that xCAT is managing.

Network Services (dhcp, tftp, http, etc.): The various network services necessary to perform operating system deployment over the network. xCAT brings up and configures these network services automatically, without any intervention from the system administrator.

Service Processor (SP): A module embedded in the hardware server used to perform out-of-band hardware control, e.g. an Integrated Management Module (IMM), Flexible Service Processor (FSP), or Baseboard Management Controller (BMC).

Management network: The network used by the Management Node (or Service Node) to install operating systems and manage the nodes. The Management Node and the in-band Network Interface Card (NIC) of each node are connected to this network. If you have a large cluster utilizing Service Nodes, this network is sometimes segregated into separate VLANs for each Service Node.

Service network: The network used by the Management Node (or Service Node) to control the nodes out-of-band via the Service Processor. If the Service Processor is configured in shared mode (meaning its NIC is used for both the SP and the host), then this network can be combined with the management network.

Application network: The network used by the applications on the Compute Nodes to communicate with each other.

Site (Public) network: The network used by users to access the Management Node or to access the Compute Nodes directly.

RestAPIs: The REST API interface can be used by third-party applications to integrate with xCAT, as sketched below.
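As a minimal sketch of that REST interface: xCAT exposes resources under /xcatws on the Management Node, authenticated here with userName/userPW query parameters. The hostname xcatmn.example.com, the credentials, and the node name cn1 are placeholders, and the exact resource paths and payloads should be verified against the REST API reference for your release:

    # List the defined nodes (-k accepts the Management Node's self-signed certificate)
    curl -X GET -k \
        'https://xcatmn.example.com/xcatws/nodes?userName=root&userPW=cluster&pretty=1'

    # Power on node cn1, following the documented pattern for driving rpower over REST
    curl -X PUT -k \
        -H 'Content-Type: application/json' -d '{"action":"on"}' \
        'https://xcatmn.example.com/xcatws/nodes/cn1/power?userName=root&userPW=cluster'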
1.1.5 xCAT2 Release Information

The following tables document the xCAT release versions and release dates. For more detailed information regarding new functions, supported OSs, bug fixes, and download links, refer to the specific release notes.

xCAT 2.16.x

Table 1: 2.16.x Release Information

Version   Release Date   New OS Supported     Release Notes
2.16.2    2021-05-25     RHEL 8.3             2.16.2 Release Notes
2.16.1    2020-11-06     RHEL 8.2             2.16.1 Release Notes
2.16.0    2020-06-17     RHEL 8.1, SLES 15    2.16.0 Release Notes

xCAT 2.15.x

Table 2: 2.15.x Release Information

Version   Release Date   New OS Supported   Release Notes
2.15.1    2020-03-06     RHEL 7.7           2.15.1 Release Notes
2.15.0    2019-11-11     RHEL 8.0           2.15.0 Release Notes

xCAT 2.14.x

Table 3: 2.14.x Release Information

Version   Release Date   New OS Supported             Release Notes
2.14.6    2019-03-29                                  2.14.6 Release Notes
2.14.5    2018-12-07     RHEL 7.6                     2.14.5 Release Notes
2.14.4    2018-10-19     Ubuntu 18.04.1               2.14.4 Release Notes
2.14.3    2018-08-24     SLES 12.3                    2.14.3 Release Notes
2.14.2    2018-07-13     RHEL 6.10, Ubuntu 18.04      2.14.2 Release Notes
2.14.1    2018-06-01     RHV 4.2, RHEL 7.5 (Power8)   2.14.1 Release Notes
2.14.0    2018-04-20     RHEL 7.5                     2.14.0 Release Notes

xCAT 2.13.x

Table 4: 2.13.x Release Information

Version   Release Date   New OS Supported   Release Notes
2.13.11   2018-03-09                        2.13.11 Release Notes
2.13.10   2018-01-26                        2.13.10 Release Notes
2.13.9    2017-12-18                        2.13.9 Release Notes
2.13.8    2017-11-03                        2.13.8 Release Notes
2.13.7    2017-09-22                        2.13.7 Release Notes
2.13.6    2017-08-10     RHEL 7.4           2.13.6 Release Notes
2.13.5    2017-06-30                        2.13.5 Release Notes
2.13.4    2017-05-09     RHV 4.1            2.13.4 Release Notes
2.13.3    2017-04-14     RHEL 6.9           2.13.3 Release Notes
2.13.2    2017-02-24                        2.13.2 Release Notes
2.13.1    2017-01-13                        2.13.1 Release Notes
2.13.0    2016-12-09     SLES 12.2          2.13.0 Release Notes

xCAT 2.12.x

Table 5: 2.12.x Release Information

Version   Release Date   New OS Supported                            Release Notes
2.12.4    2016-11-11     RHEL 7.3 LE, RHEV 4.0                       2.12.4 Release Notes
2.12.3    2016-09-30                                                 2.12.3 Release Notes
2.12.2    2016-08-19     Ubuntu 16.04.1                              2.12.2 Release Notes
2.12.1    2016-07-08                                                 2.12.1 Release Notes
2.12.0    2016-05-20     RHEL 6.8, Ubuntu 14.4.4 LE, Ubuntu 16.04    2.12.0 Release Notes

xCAT 2.11.x

Table 6: 2.11.x Release Information

xCAT 2.11.1 (released 2016-04-22)
  • New feature: bug fixes
  • 2.11.1 Release Notes

xCAT 2.11 (released 2015-12-11)
  • New OS: RHEL 7.2 LE, UBT 14.4.3 LE, UBT 15.10 LE, PowerKVM 3.1
  • New hardware: S822LC (GCA), S822LC (GTA), S812LC, NeuCloud OP, ZoomNet RP
  • New features:
    - NVIDIA GPU support for OpenPOWER
    - Infiniband support for OpenPOWER
    - SW KIT support for OpenPOWER
    - renergy command for OpenPOWER
    - rflash command for OpenPOWER
    - xCAT Troubleshooting Log
    - xCAT Log Classification
    - RAID Configuration
    - Accelerated genimage process
    - bmcdiscover command (see the sketch after this table)
    - Enhanced xcatdebugmode
    - New xCAT doc in ReadTheDocs
  • 2.11 Release Notes
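The 2.11 release above added the bmcdiscover command for BMC-based hardware discovery; a minimal sketch of its use follows. The IP range is a placeholder, and the flags may differ between releases, so check the bmcdiscover man page for your version:

    # Scan an IP range for BMCs, print results as xCAT stanza definitions (-z)
    # and write the discovered node definitions into the xCAT database (-w)
    bmcdiscover --range 10.4.23.100-200 -z -w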
