DVS, GPFS and External Lustre at NERSC - How It's Working on Hopper

Tina Butler, Rei Chi Lee and Gregory F. Butler, National Energy Research Scientific Computing Center

ABSTRACT: Providing flexible, reliable and high performance access to user data is an ongoing problem for HPC centers. This paper discusses how NERSC has partitioned its storage resources between local and global filesystems, and the configuration, functionality, operation and performance of the several different parallel filesystems in use on NERSC's Cray XE6, Hopper.

KEYWORDS: Filesystems, GPFS, Lustre, DVS

NERSC

The National Energy Research Scientific Computing Center (NERSC) is the primary high performance computing (HPC) facility for scientific research sponsored by the Office of Science in the US Department of Energy. NERSC is operated by Lawrence Berkeley National Laboratory (LBNL) and provides computing resources for more than 4000 researchers worldwide annually. NERSC has a highly diverse workload and supports applications at a wide range of scales and computational intensity.

NERSC systems

To support that range of users and applications, NERSC fields a new large system about every three years and keeps each large system for 6-7 years. Smaller mid-range or special purpose systems are installed periodically. Current NERSC systems include:

Hopper (NERSC-6), a Cray XE6 with 6384 24-core nodes, 1.28 PF peak
Franklin (NERSC-5), a Cray XT4 with 9532 4-core nodes, 356 TF peak
Carver, an IBM iDataPlex cluster with 400 8-core nodes
Magellan, an IBM iDataPlex cluster; cloud computing testbed
PDSF, a throughput commodity cluster
Euclid, a Sunfire analytics system
Dirac, a GPU testbed
HPSS, an archival HSM storage system
NGF, the center-wide shared filesystem

NERSC and Global Filesystems

At the end of the 1990s, NERSC was procuring and integrating a large IBM SP3 cluster, Seaborg. As part of that procurement, NERSC chose to provide all user storage on Seaborg using IBM's General Parallel File System (GPFS). GPFS was quite new at that time, and there were some concerns about using a filesystem originally designed for multimedia streaming for user home directories. However, after some growing pains and a number of problem reports and fixes, GPFS proved to be a reliable and high performance filesystem.

Around the same time, it became clear that user data was an increasingly significant issue from both management and productivity perspectives. Users run codes and do pre- and post-processing on multiple machines, and therefore need access to their data in multiple places. Copying files around and keeping them synchronized is a waste of researchers' valuable time and a waste of significant amounts of storage.

In 2001, NERSC started the Global Unified Parallel File System (GUPFS) project. The goal of the project was to provide a scalable, high performance, high bandwidth shared filesystem for all NERSC production computing and support systems. The primary purpose of the GUPFS project was to make it easier to conduct advanced scientific research using NERSC systems. This was to be accomplished through the use of a shared filesystem providing a unified file namespace, operating on consolidated shared storage that could be directly accessed by all NERSC production systems. The first phase of the project was to evaluate emerging technologies in shared/clustered filesystem software, high performance storage area network (SAN) fabrics and high performance storage devices to determine which combination of hardware and software best fulfilled the requirements for a center-wide shared filesystem. The second phase of the project was a staged deployment of the chosen technologies in a production environment. A significant number of candidates for the various technologies were assessed on the GUPFS testbed system.

In October 2005 the first center-wide shared filesystem instance was deployed at NERSC, based on DDN and Engenio storage, Fibre Channel (FC) fabric and GPFS. GPFS's resilience, remote clustering features and data management policies made it the choice for NERSC's center-wide filesystem. This first instance of the NERSC Global Filesystem (NGF), /project, was designed to support collaborative projects that used multiple systems in their workflow. Initially /project provided 70 TB of high performance, permanent storage served over Fibre Channel and Ethernet; it has been extremely popular and has grown to 873 TB, with expansion to 1.5 PB coming later this year.

After the demonstrated success of /project, NERSC deployed NGF instances for user home directories (/global/u1 and /global/u2) and system common areas for shared software (/global/common). These smaller filesystems are served over 10 Gb Ethernet and are tuned for smaller block sizes and moderate performance. The latest NGF filesystem put into production is a global scratch area with size and performance similar to /project. All NERSC production systems use NGF filesystems; Carver, Magellan, Euclid and Dirac use NGF exclusively and have no local user-writable storage.

NGF and Cray systems

At the time that NGF was entering production, NERSC's largest system was Franklin, a Cray XT4. Cray's filesystem of choice for XT systems has been Lustre. The standard Lustre installation on a Cray system includes directly attached storage arrays and metadata and object storage servers on service nodes. Access to the Lustre filesystem(s) from compute nodes running a lightweight OS is provided by a lightweight Lustre client. Franklin was delivered with local Lustre filesystems for user home directories and scratch space.

Making NGF filesystems accessible on Cray systems required a new software layer. Directly mounting GPFS filesystems on Franklin's 9,532 compute nodes was not technically or financially feasible. Fortunately, Cray had acquired the rights to the Data Virtualization Service (DVS) software originally developed by Extreme Scale (later Cassatt). DVS essentially provides I/O forwarding services for an underlying filesystem client. While XT login nodes were able to directly mount the NGF filesystems, using DVS on XT service nodes allowed the GPFS-based filesystems to be accessed by XT compute nodes. This capability was a major productivity gain for NERSC users.
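The DVS protocol and client are Cray-proprietary, so the sketch below only illustrates the general I/O-forwarding idea in Python: a thin client standing in for a compute node ships read requests to a server process on a node that actually mounts the filesystem, which performs the I/O locally and returns the data. The class names and the in-process "transport" are illustrative assumptions, not part of DVS.

    # Conceptual sketch of I/O forwarding, in the spirit of DVS (not the
    # actual DVS protocol or API). The "server" runs where the parallel
    # filesystem is really mounted; the "client" stands in for a compute
    # node that has no filesystem client of its own.
    import os

    class ForwardingServer:
        """Service-node side: owns the real filesystem mount."""
        def __init__(self, mount_point):
            self.mount_point = mount_point

        def handle_read(self, rel_path, offset, length):
            # Do the I/O locally and hand the bytes back to the caller.
            with open(os.path.join(self.mount_point, rel_path), "rb") as f:
                f.seek(offset)
                return f.read(length)

    class ForwardingClient:
        """Compute-node side: forwards requests instead of mounting."""
        def __init__(self, server):
            # In a real system this would be a message over the high-speed
            # network, not a direct Python call.
            self.server = server

        def read(self, rel_path, offset, length):
            return self.server.handle_read(rel_path, offset, length)

    if __name__ == "__main__":
        # /tmp stands in for a GPFS mount point on a DVS server node.
        with open("/tmp/dvs_demo.txt", "wb") as f:
            f.write(b"forwarded hello")
        client = ForwardingClient(ForwardingServer("/tmp"))
        print(client.read("dvs_demo.txt", 0, 9))   # b'forwarded'

On Hopper, as described below, this forwarding role is played by DVS servers on XIO service nodes, which mount the GPFS-based NGF filesystems and project them to the compute nodes.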
Hopper's externalized architecture

Hopper, NERSC's newest system, is a Cray XE6. Hopper is the first large system that NERSC has procured since NGF was deployed as a production resource. The existence of NGF was a major factor in the architecting of the Hopper system. First, Hopper has two external scratch filesystems; the filesystems are Lustre-based, but were designed to be easily detached and incorporated into NGF at a later time if NERSC so chose. In the external filesystems, the metadata and object storage servers are commodity Linux servers; the connection to the interior of the XE6 is via Lustre routers (LNET) on XIO service nodes.

NERSC also chose to externalize the login nodes for Hopper. At the time that Hopper was being procured, Franklin was having issues with overloaded internal login nodes and interconnect instability. Externalizing the login functionality allowed the incorporation of commodity servers with more memory, processors and swap space than internal service nodes. The external login nodes and external filesystems were seen as a way to provide an environment where users can access their files, do pre- and post-processing, develop codes, and submit jobs even if the large compute resource is unavailable.

The external environment was designed to have as few dependencies on the internal XE6 resources as possible. The external login nodes also have all NGF filesystems mounted. DVS is used on Hopper to project the NGF filesystems to compute nodes. Hopper's external nodes are shown in Table 1.

Another benefit of the externalized server design became apparent when NERSC and Cray agreed to a phased delivery for the Hopper system. The first phase of Hopper was a modestly sized XT5, but it included all the externalized login and filesystem servers and storage. The intent was to integrate the external storage and login nodes into the NERSC infrastructure in Phase 1 and resolve any issues prior to the Phase 2 delivery. NERSC ended up being able to keep the Phase 1 system available for users while the Phase 2 XE6 was installed and integrated, by splitting the login nodes and external scratch filesystems between the two systems. When the time came to move the second external Lustre filesystem to the XE6, the lifetime of the XT5 system was extended by transitioning user files to the NGF-based /global/scratch.

Table 1. Hopper external nodes

Quantity   Node type
12         External login nodes
4          External data mover nodes
52         Storage server nodes (Lustre OSS)
4          Metadata server nodes (Lustre MDS)
26         LSI Engenio 7900 storage systems
208        16-slot drive enclosures
3120       1 TB 7.2K RPM SATA drives
2          LSI Engenio 3992 for metadata
24         450 GB 15K RPM FC drives for metadata
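As a rough, back-of-the-envelope reading of Table 1 (my arithmetic on the listed counts, not figures from the paper), the drive counts imply the raw ratios below; usable capacity after RAID and filesystem overhead is of course lower.

    # Back-of-the-envelope arithmetic on Table 1 (raw numbers only; these
    # are derived ratios, not measured or usable capacities).
    oss_nodes   = 52      # Lustre OSS
    arrays      = 26      # LSI Engenio 7900 storage systems
    enclosures  = 208     # 16-slot drive enclosures
    sata_drives = 3120    # 1 TB 7.2K RPM SATA drives

    print("raw capacity:", sata_drives * 1, "TB")              # 3120 TB
    print("enclosures per array:", enclosures // arrays)       # 8
    print("drives per enclosure:", sata_drives // enclosures)  # 15
    print("drives per OSS node:", sata_drives // oss_nodes)    # 60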
Hopper's configuration

Hopper's internal node types and counts are shown in Table 2; the overall system configuration is shown in Figure 1. The compute partition of the system has 6384 dual-socket nodes based on 2.1 GHz 12-core Magny-Cours processors, providing 153,216 cores. Hopper can produce 3,677,184 CPU hours per day.

To support the compute partition, Hopper has several types of service nodes. Services that require an external interface, such as Lustre routers (IB), network nodes (10 GbE), or DVS (FC or IB), are sited on the latest version of Cray's service node, the XIO. Services that do not require direct outside access, such as Torque or PBS MOM nodes, can be put on repurposed compute nodes. This allows more flexible configuration of these services and provides the greater resources of the compute node to the serial portion of the running application. On Franklin, there have been frequent problems with oversubscription of resources on MOM nodes; this has been mitigated on Hopper through the use of repurposed compute nodes. The other type of service node that can reside easily on a repurposed compute node is a shared-root DVS server.
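The core count and daily CPU-hour figures quoted above follow directly from the node count and cores per node; the quick check below simply reproduces that arithmetic.

    # Verify the compute-partition figures quoted in the text.
    nodes          = 6384
    cores_per_node = 2 * 12          # dual-socket, 12-core Magny-Cours

    total_cores       = nodes * cores_per_node   # 153,216 cores
    cpu_hours_per_day = total_cores * 24         # 3,677,184 CPU hours

    assert total_cores == 153216
    assert cpu_hours_per_day == 3677184
    print(total_cores, cpu_hours_per_day)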
