
Lonestar 5: Customizing the Cray XC40 Software Environment

C. Proctor, D. Gignac, R. McLay, S. Liu, D. James, T. Minyard, D. Stanzione
Texas Advanced Computing Center
University of Texas at Austin
Austin, Texas, USA
{cproctor, dgignac, mclay, siliu, djames, minyard, ...}@tacc.utexas.edu

Abstract—Lonestar 5, a 30,000 core, 1.2 petaFLOP Cray XC40, entered production at the Texas Advanced Computing Center (TACC) on January 12, 2016. Customized to meet the needs of TACC's diverse computational research community, Lonestar 5 provides each user a choice between two alternative, independent configurations that are robust, mature, and proven: an environment based on that delivered by Cray, and a second, highly customized environment that mirrors Stampede, Lonestar 4, and other TACC clusters.

This paper describes our experiences preparing Lonestar 5 for production and transitioning our users from existing resources. It focuses on unique features of the system, especially customizations related to administration (e.g. hierarchical software stack; secure virtual login nodes) and the user environment (e.g. consistent, full Linux environment across batch, interactive compute, and login sessions). We motivate our choices by highlighting some of the particular needs of our research community.

Keywords-Texas Advanced Computing Center; Lonestar 5; Cray; XC40; Software Environment

I. INTRODUCTION

January 2016 marked the advent of a fifth generation of the Lonestar series of supercomputers deployed at the University of Texas at Austin-based Texas Advanced Computing Center (TACC). Over 3500 active users began the transition from Lonestar 4, a 22,000 core Dell CentOS Linux cluster, to Lonestar 5, TACC's new 30,000 core, 1.2 petaFLOP Cray XC40 supercomputer. Serving as a unique and powerful computational resource for the University of Texas research community, as well as for public institutions and other partners across the state, Lonestar 5 supports a diverse range of research interests across dozens of fields of science. This spectrum of users drives the demand for intuitive, adaptive, and efficient environments that work for the researcher.

Our paper offers a look behind the scenes at the workflow needs that helped motivate the choices behind the user environment custom-built to satisfy our eclectic user base. Parts of the design, construction, and administration of the Lonestar 5 computing environment are illustrated to emphasize the functionality and accessibility that have become indispensable on other TACC cluster systems. We also discuss our experience during both pre-production and the first months of production operation.

This system offers users a choice between two robust and vetted computing environments: 1.) a variation of the Cray environment and 2.) a highly customized TACC environment that tailors other vendor-independent environments familiar to the open-science community. The customized TACC environment leverages proven Cray infrastructure in ways that are often invisible to the user. This is done in a manner that relieves users of the burden of worrying about the implementation, enabling existing TACC-supported researchers to migrate their workflows smoothly to Lonestar 5.

At the heart of the TACC environment lies a set of customized scripts that govern shell startup behavior. User environments are automatically and efficiently propagated from login sessions to batch and interactive sessions. The result is consistency: the environment that the user sees is the same whether on a login node, in an interactive session, or running via a batch submission. Thanks to a tailored network and firewall rules that allow the Slurm workload manager to communicate directly with all login nodes, users are able to gain access directly to 1252 24-core Haswell compute nodes via ssh connections maintained by prolog/epilog scripts in conjunction with Slurm's Pluggable Authentication Module (PAM) [1]. This allows for a minimum of disruption when transitioning from a development cycle to production-level computation.
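As a sketch of how this kind of job-scoped ssh access can be wired up (the details of the actual Lonestar 5 configuration may differ), Slurm ships a PAM module that can be stacked into a compute node's sshd configuration so that only users with a running job on that node may connect; the node name below is hypothetical:

    # Illustrative /etc/pam.d/sshd entry on a compute node:
    # pam_slurm rejects ssh logins from any user who does not
    # currently have a Slurm job running on this node.
    account    required    pam_slurm.so

    # From a login node, a user locates an allocated node and connects:
    $ squeue -u $USER -o "%i %N"    # job id and assigned node list
    $ ssh nid00052                  # permitted only while the job runs

Under an arrangement like this, the prolog and epilog scripts perform per-job setup and teardown around the ssh sessions, so direct access appears when a job starts and disappears automatically when it ends.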
The TACC user environment has been honed over several generations of compute systems to provide a scalable and flexible platform that addresses workflows ranging from massively parallel system-level computation, to serial scripting applications conducting ensemble studies, to visualization of complex and varied data sets. Built upon this environment framework, Lonestar 5 implements a hierarchical software tree managed by TACC's own Lmod module system. This Lua-based implementation of environment modules makes it easy to define and manage application and library dependencies so they are consistent across both the login and compute nodes, and allows users to define their own custom software collections to meet their specific needs.

To support protected data, including but not limited to applications and data sets falling under International Traffic in Arms Regulations (ITAR), secure virtual login nodes contained within an image of Cray Development and Login (CDL) employ a multitude of logging and security tools to audit the contents of users' protected files while allowing access to the same open science resources enjoyed by our broader research communities.

Multiple, portable build environments reside in their own change-rooted (chroot) jail. Each is a copy of an up-to-date image of a compute node; support and operations staff use these environments to compile and test staff-supported open science and protected data type applications before full production-level deployment. This helps to ensure the safe and dependable creation of over one hundred staff-supported applications on Lonestar 5, minimizing the risk inherent in the operation and maintenance of globally shared production file systems.

A custom MPI wrapper module connected to Cray's MPICH library implementation is available in the TACC environment. This tool provides functionality and familiarity while maintaining the scalability and performance inherent to Cray MPICH. Combining Cray's User-Level Generic Network Interface (uGNI) with the efforts of the MPICH Application Binary Interface (ABI) Compatibility Initiative, we have been able to test multiple MPI libraries across the Aries network [2]. Coupled with Lmod and the RPM build environments, this has allowed us to sample in short order a variety of off-the-shelf parallel implementations that could potentially be integrated into future staff-supported software releases.

Lonestar 5 renews TACC's relationship with Cray, one of the leading companies in the high performance computing (HPC) sector. The hardware, tools, applications, and best practices of Cray and TACC together allow our staff to dynamically respond to, assess, and address the needs of user communities that are growing in number, depth, and breadth at record pace. This paper identifies the best of these key features and implementation details that make us proud to welcome Lonestar 5 into the TACC supercomputing family.

II. LONESTAR 5 ENVIRONMENT

Propagating a consistent, full Linux environment from the moment a user logs in through each and every job submitted stands as a keystone spanning all major TACC HPC systems. Many pieces work in concert to provide a powerful and intuitive experience that has become essential for our research communities.

Users have access to four file systems: /home, /work, /scratch, and /corral. While /home and /scratch are single-system NFS and Lustre file systems respectively, /work (Lustre) and /corral (NFS) are shared across all major TACC HPC resources. Users are provided a collection of aliases and environment variables that allow for fluid navigation across these file systems.
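For illustration (the particular variable and alias names here are examples of the convention rather than a definitive list), each user's shell defines a variable pointing at a personal directory on each file system, with short aliases layered on top:

    $ echo $HOME $WORK $SCRATCH    # per-user roots on /home, /work, /scratch
    $ cdw                          # illustrative alias for: cd $WORK
    $ cds                          # illustrative alias for: cd $SCRATCH

Because the variables resolve on every node, scripts written against them can move unchanged between login, interactive, and batch contexts.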
As on other TACC resources, users gain access to Lonestar 5 through a set of login nodes. For the XC40, the login nodes are classified as External Services Login (esLogin) nodes; they will be referred to as "login nodes" throughout the remainder of this paper. These shared resource machines serve as a location for file transfer, compilation, and job submission. The environment present on these login nodes is also present on the compute nodes. To achieve this uniformity, in addition to the shell startup behavior modifications, network customizations (discussed in Section IV) allowed for a modified build of Slurm in native mode (discussed in Section VII) to communicate and negotiate user ssh traffic directly to and from compute nodes without the use of intermediary batch system management (MOM) service nodes or wrapper applications.

The Lmod module system is utilized to manage the software stack, both for Cray-provided applications with their TCL-based modulefiles and for TACC staff-supported packages. Leveraging the startup behavior described in Section III, Lmod (Section V) has been heavily integrated into customized application support (Section VI) best practices.

Up to this point, the components mentioned in this section are present, in some form, regardless of which environment users choose to utilize. Upon login, users default into the "TACC environment". The changes made on the XC40 to support this environment are the main focus of this paper. Where appropriate, asides will be provided to help clarify specific details about any changes or important points with regard to the Cray environment.

Users may transition to the "Cray environment" by issuing the command cray_env and, as prompted, logging out of their current Lonestar 5 terminal sessions to log in once again. At this stage, the user is presented with an environment that resembles the unmodified, off-the-shelf, Cray-provided environment. Users accustomed