
Integrating Grid Services into the Cray XT4 Environment

Shreyas Cholia and Hwa-Chun Wendy Lin, National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory

ABSTRACT: The 38640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.

KEYWORDS: Cray, XT4, grid, Globus, distributed computing, OSG, science gateways, MPI, PBS

1. Introduction

High performance computing (HPC) is becoming increasingly parallel. With clock speeds flattening out, and power consumption playing a major role in CPU design, multi-core and many-core technologies are seen as the most efficient way to increase overall performance and scalability. While grid computing originally evolved from a serial model of computing, it has become increasingly important for grids to be able to take advantage of highly parallel resources in order to maximize resource utilization, while maintaining the benefits of a distributed, on-demand service. Because of its unique architecture, the Cray XT4 system presents a special set of challenges when it comes to integrating grid services and tools with the underlying environment. NERSC has successfully integrated on-demand grid services based on the Open Science Grid (OSG) software stack on its 38640-core (38128 compute cores) Cray XT4 system.

In this white paper, we discuss the challenges presented by this environment and our solution for creating fully functional OSG compute and storage elements on this system. We pay special attention to security, job management, storage, accounting and reporting services for the grid, while using the principle of least privilege to set up the software stack. The end result is a parallel OSG computing platform that can be transparently accessed through generic grid software. This allows users to access the underlying HPC resources without needing detailed knowledge of the Cray XT4 architecture, thus increasing overall usability through transparent, service-oriented, cross-platform interfaces.

The NERSC Cray XT4 system, named Franklin, is a massively parallel compute system available for scientific research. The system is capable of providing 356 TFlop/sec of computational power through its 9532 quad-core 2.3 GHz AMD Opteron processors. It also provides access to a 436 TB Lustre parallel filesystem. Franklin is located at the NERSC supercomputing center at Lawrence Berkeley National Laboratory, and is available to scientific researchers under the umbrella of the Office of Science in the U.S. Department of Energy.
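To make this kind of transparent access concrete, the sketch below shows how an OSG user might hand a parallel job to Franklin using only the standard Globus GT2 client, assuming a valid GSI proxy is already in place. The gatekeeper contact string, executable path, queue name and job size are hypothetical placeholders, not Franklin's actual configuration.

    # Hand a 64-way MPI job to the remote GRAM gatekeeper; the site-side PBS
    # jobmanager translates the request into a local batch job and an
    # XT4-specific application launch, so the user never sees those details.
    globusrun -o -r franklingrid.nersc.gov/jobmanager-pbs \
        "&(executable=/project/myapp/bin/simulate)(jobType=mpi)(count=64)(queue=debug)(maxWallTime=30)"

The same request could be issued unchanged against any other GRAM-fronted resource; nothing in it is specific to the Cray XT4.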
2. What is Grid Computing?

Grid computing provides the ability to share and aggregate heterogeneous, distributed computational capabilities and deliver them as a service. According to Ian Foster, a grid is a system that “coordinates resources that are not subject to centralized control, using standard, open, general-purpose protocols and interfaces, to deliver nontrivial qualities of service”. This idea can be explained by the following principle: a uniform set of software interfaces to access non-uniform and physically distributed compute and storage resources. In practice, grid computing does not make any sense unless the underlying resources (compute systems, data storage systems) are integrated into a larger whole, i.e., a working grid that coordinates resources and users.

There are several flavors of grids but, for the most part, scientific computing grids seem to have converged on a common interoperable infrastructure, as seen in the OSG, TeraGrid, Earth Systems Grid and EGEE, to name a few. Most of these grids use some flavor of the Globus middleware stack, or an equivalent, to provide a common layer of services that expose the underlying resources.

Grid computing is well equipped to deal with serial and “embarrassingly parallel” tasks: computational jobs that can be broken up into parallel tasks that require little to no communication between them. This model has been exploited by certain scientific applications, particularly in the high-energy physics community. There is little interaction between nodes running a subtask, and each task can run to completion without consideration for other nodes in the system. However, there has been an increasing need for access to grid-based parallel computers – systems that can be provisioned on-demand based on specific needs, while still providing the benefits of a highly parallel resource with a fast interconnect. This allows users to farm out a tightly coupled set of computational interactions to an appropriately tightly coupled system that matches the job requirements.

Additionally, user data is often distributed across multiple centers and may need to be moved across the grid to an appropriate storage resource that is local to the computational tasks. There needs to be a set of standard high-performance interfaces to access and transfer data from multiple locations. This points to a data grid, where users can move data across resources transparently.

3. Open Science Grid

The Open Science Grid (OSG) is a distributed computing infrastructure for large-scale scientific research, built and operated by a consortium of universities, national laboratories, scientific collaborations and software developers. Researchers from many fields, including astrophysics, bioinformatics, weather and climate modeling, computer science, medical imaging, nanotechnology and physics, use the OSG infrastructure to advance their research.

Sites can make their resources available to the OSG by installing a pre-defined service stack that is made available as part of the Virtual Data Toolkit (VDT). This includes the Globus software stack for the core computational and data grid functionality, and a set of supporting software for coordinating the resources in the grid. The Globus Toolkit includes services that support the following operations:

1. GSI user and host authentication and authorization
2. Globus job submission and management through GRAM (GT2 and GT4)
3. File storage and transfer through Globus GridFTP
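The first and third of these operations map directly onto the toolkit's standard command-line clients. A minimal sketch follows; the host names and file paths are made up for illustration.

    # GSI authentication: derive a short-lived proxy credential from the
    # user's grid certificate
    grid-proxy-init

    # GridFTP transfer: move an input file from a remote storage element to
    # the site's parallel filesystem over the gsiftp protocol
    globus-url-copy gsiftp://storage.example.org/data/input.dat \
        gsiftp://franklingrid.nersc.gov/scratch/jdoe/input.dat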
Additionally, a site must be able to advertise resource descriptions and availability, and report usage accounting information back to the central OSG infrastructure. In order to do this, the site must run a set of services:

1. CEMon for resource descriptions
2. RSV probes for resource availability
3. Gratia probes for accounting information

As the OSG broadens in scope, parallel computing becomes increasingly important as a means to achieve scalable performance. Newer science communities, including ones in bioinformatics and climate science, have a stake in the OSG, but also have parallel HPC requirements for their jobs. The NERSC Franklin system plays a key role in fulfilling the parallel computing needs of the OSG.

Opening up the NERSC Cray XT4 system to the OSG allowed us to deliver the power and scale of a massively parallel HPC system to a whole new set of scientists. This has created an opportunity to enable new science by allowing scientists from around the world to run their jobs in this manner through the OSG.

However, there are some peculiarities in the XT4 architecture that make this quite a challenge. The key idea behind grid computing is a standard, uniform interface that can be used to access a wide variety of underlying platforms. This means that any system-specific details must be hidden from the user. The user only needs to define the job, workflow or data transfer operation through a common interface, and to be able to query the resources or jobs. The grid middleware must then abstract all system-specific details, and should be able to interface with the underlying environment to manage these requests.

In the rest of this paper we describe how we set up and configured the OSG and Cray software to navigate the complexities of this environment. The following features of the XT4 architecture are particularly relevant:

1. The system is partitioned into service nodes and compute nodes. The service nodes handle interactive user logins, batch system management, and I/O. The compute nodes run user jobs.
2. The compute nodes do not have the same environment as the interactive or service nodes. While the interactive nodes have a full-featured derivative of SuSE Linux, the compute nodes run a stripped-down operating system called Compute Node Linux (CNL). This allows for increased scalability and performance on the compute nodes, but also means that any executables that are staged on the service nodes must be precompiled as static binaries.
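The static-linking constraint just described shapes how anything destined for the compute nodes is built and launched. As a rough sketch, and assuming the Cray compiler wrapper behaviour of that era, an application would be built on a login/service node like this:

    # The Cray compiler wrapper produces a statically linked MPI executable
    # suitable for the CNL compute nodes
    cc -o hello_mpi hello_mpi.c

The binary is then launched onto the compute nodes through the batch system and the aprun launcher. The queue name, core count and mppwidth resource keyword below are assumptions about the local PBS setup, not Franklin's actual configuration:

    #PBS -q debug
    #PBS -l mppwidth=64
    #PBS -l walltime=00:30:00
    cd $PBS_O_WORKDIR
    # aprun places the job on 64 compute cores; because the binary is static,
    # no shared libraries need to be present on the compute nodes
    aprun -n 64 ./hello_mpi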