Scientific Linux: A Status Report on Its Adoption in the HEP Environment

Alan Silverman (on behalf of the HEPiX Board)
15th June 2005

For many years, most HEP labs have used the free Red Hat release of Linux as the basic operating system for their compute farms and their desktop systems. The major exceptions were DESY (SuSE) and GSI [1] (Debian). In the summer of 2003, Red Hat announced that it would split its release into the free, open-source, community-supported Fedora distribution and the licensed Red Hat Enterprise Linux (RHEL) release. Further, distribution of the latter in free binary form would be discontinued, and the company would focus on the commercial "Enterprise" product. However, the open-source nature of Linux means that Red Hat must still publish the GPL [2] source used to build its distribution, and that source must remain free.

While adoption of the free Fedora release may look attractive to sites wishing to remain with a free distribution, Red Hat stated that Fedora would undergo rapid version changes (roughly every six months) and that each Fedora release would be supported for not much longer than one release cycle (9-12 months) [3]. RHEL releases, on the other hand, would benefit from up to five years of support, the kind of support timescale desired by enterprises, including HEP production sites.

By the autumn of 2003 the US DoE had negotiated a deal for its sites; SLAC and Jefferson Lab signed up, but FNAL and BNL decided against it. CERN started to negotiate a similar deal to the US DoE's, not only for itself but also for its associated labs. Red Hat appeared sympathetic, but despite contacts with the headquarters staff looking after our market, these talks did not result in an interesting offer.

Having decided against the RHEL licence, FNAL started from the freely available (because open source, see above) RHEL source and built what it called at that time LTS. FNAL made this available to all HEP sites and even included modules to permit local tailoring. In parallel with the Red Hat negotiations, CERN adopted a similar procedure to Fermilab's, starting from the RHEL source and building its own release, which it called CEL.

Just before the Spring 2004 HEPiX meeting in Edinburgh (May 2004), Red Hat proposed to CERN an agreement covering all HEP sites, similar to the one offered to the US DoE, and indeed presented it to the HEPiX meeting in open session. But by then FNAL and CERN were committed to their respective RHEL source-based solutions, and these too were presented at HEPiX, alongside each other. Given the similarity of the two solutions, the fact that the two teams were already interacting with each other, and the prompting of many of the sites present, Fermi and CERN agreed to collaborate on a uniform "HEP Linux" distribution (which could nevertheless be tailored to some extent for each site). A strong selling argument would be binary compatibility with Red Hat's RHEL distribution. Several of the other labs represented declared themselves very interested in adopting this. HEP Linux was renamed Scientific Linux (SL) shortly afterwards; CERN's versions are SLC3 and Fermi's versions are SLF3. CERN switched its local certification to SLC3.

[1] For the purposes of this report, we consider the HEPiX community, although strictly speaking GSI is not generally considered an HEP lab.
[2] GNU General Public License.
[3] In fact, in the past few days Red Hat has cut Fedora loose and it is now supported only by the Linux community.
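As an illustration of the rebuild model described above (recompiling Red Hat's freely published source RPMs and changing only what trademarks or local policy require), the following minimal Python sketch shows the kind of bulk rpmbuild step such an effort rests on. It is a hypothetical illustration, not the actual FNAL or CERN build machinery; it assumes rpmbuild is installed and that the vendor source RPMs have already been downloaded into a local directory (the path and script name are made up for the example).

#!/usr/bin/env python3
"""Illustrative sketch only: bulk-rebuild a directory of vendor source RPMs.

This is not the FNAL/CERN build system; it merely shows the
"recompile the published SRPMs" step on which a rebuild distribution rests.
Assumes the rpmbuild tool (from the rpm-build package) is available.
"""
import glob
import subprocess
import sys


def rebuild_srpms(srpm_dir: str) -> None:
    """Run 'rpmbuild --rebuild' on every .src.rpm found in srpm_dir."""
    failures = []
    for srpm in sorted(glob.glob(f"{srpm_dir}/*.src.rpm")):
        print(f"Rebuilding {srpm} ...")
        # --rebuild unpacks the source RPM, compiles it and writes the
        # resulting binary RPMs into the local rpmbuild output tree.
        result = subprocess.run(["rpmbuild", "--rebuild", srpm])
        if result.returncode != 0:
            failures.append(srpm)
    if failures:
        print("Failed to rebuild:")
        for srpm in failures:
            print("  " + srpm)
        sys.exit(1)


if __name__ == "__main__":
    # Hypothetical usage: python3 rebuild_srpms.py ./SRPMS
    rebuild_srpms(sys.argv[1] if len(sys.argv) > 1 else "./SRPMS")

In practice, packages carrying Red Hat trademarks or needing site-specific changes would be patched and rebuilt separately rather than passed through a simple loop like this.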
Over the following months, CERN completed SLC3 certification and started migrating all central systems to it. In parallel, it nevertheless signed a deal with Red Hat to evaluate the licensed RHEL offering (actually sold as subscriptions) on a limited number of nodes (200) and also to evaluate Red Hat support (the so-called TAM support, which is also used by SLAC).

At the autumn HEPiX meeting at BNL, many labs reported taking up SL (or the SLF or SLC builds) and all reported being very happy with it. Further, they all reported good binary compatibility between SL and RHEL; for example, BaBar uses RHEL at SLAC but SLF at its Italian sites. DESY announced that it would phase out SuSE in favour of SL over the coming year. Regarding the licensed offering and Red Hat support, SLAC reported being more or less satisfied with its Red Hat support (good value for money compared with the cost of the personnel that would otherwise be needed without TAM support), and CERN said it was too early to judge.

At the recent Spring 2005 HEPiX meeting in Karlsruhe there were yet more reports of successful take-up of SL (including SLF and SLC), and it is now clearly the standard HEP Linux, with only two known exceptions (SLAC and GSI), although there are a few sites where other flavours of Linux can be found for specific purposes or evaluations.

Looking to the future, FNAL have announced a release of SL4, based on Red Hat's RHEL version 4, although their production systems are still based on SL3 and they expect to publish appropriate fixes for SL3 for some time yet. Although FNAL have stated formally that they do not support SL for other sites, they have established a support infrastructure (web site, mailing list, etc.) which facilitates community support within the HEP world and which they naturally lead (although they would welcome and encourage other sites to participate more). CERN is still considering its policy on moving to SLC4 or SLC5, because of the timing of when a fully certified release of either could be deployed, and the subsequent support lifecycle, relative to the startup of the LHC (scheduled for the middle of 2007). Regarding Red Hat support, CERN currently reports mixed feelings (some issues take too long to get resolved inside the company) but on balance will probably continue with it, for the moment at least.

The bottom line is that Scientific Linux rules almost all of the HEP world. The major exception is SLAC, which looks like staying with the licensed RHEL solution, but compatibility between the two does not look to be an issue.

Why was Scientific Linux a success? At the Spring 2005 HEPiX meeting, Jan Iven of CERN put forward a number of possible reasons:

• Last year many sites were in a position to move forward, and all were affected by the changes in Red Hat's licence policy
• Sites preferred to remain, if possible, with the Red Hat flavour
• Slow progress in negotiating an acceptable contract with Red Hat (apart from some labs that decided to take up the offer to DoE sites; only SLAC?)
• The potential for SL (and consequently SLC) to be compatible with RHEL was attractive, along with the built-in flexibility for local tailoring
• No forced commercial contracts were needed
• The example of major sites adopting it, with the underlying implication that they would fix problems (although neither FNAL nor CERN was in a position to guarantee support)

In my opinion, for this happy state of affairs we should be grateful to:

• the FNAL Linux Support team, for making a distribution which is freely and easily usable by, and adaptable to, other sites, and for supporting it, even if unofficially
• CERN Linux support, for basing their distribution on the same foundation
• HEPiX, for offering a platform for publicising the ease of use and the success of SL/SLF/SLC.

The challenge now for the mid-term future is to keep the HEP community focused on a single version of SL, with differing deadlines and constraints pulling the labs forward or holding them back. The emerging 64-bit architectures and their associated compilers are also factors that could promote divergence over time.