Project Status Report
High End Computing Capability
Strategic Capabilities Assets Program
November 10, 2015

Dr. Rupak Biswas – Project Manager
NASA Ames Research Center, Moffett Field, CA
Rupak.Biswas@.gov
(650) 604-4411
www.nasa.gov

Modular Supercomputing Facility Site Infrastructure Design Approved

• The Modular Supercomputing Facility (MSF) site infrastructure design package was approved by the NASA Ames Permit Review Board.
• HECC facilities engineers coordinated with Ames’ Code J engineering and module supplier SGI/CommScope to develop site requirements and designs.
• The site was sized to accommodate two Data Center Unit (DCU)-20 modules (up to 32 compute racks) and has a 2.5-megawatt power capacity to accommodate any future upgrades.
• The approved design package was provided to mechanical/electrical contractors for build-to-print bid quotations.
• Contractor selection and site construction are forecast to begin in early December and be completed by March 2016.
• The initial implementation will be a proof-of-concept prototype consisting of one DCU-20 module, four compute racks, and one service rack.

Mission Impact: The MSF prototype will demonstrate the feasibility of energy-efficient modular computing, which could ultimately more than double the supercomputing capability HECC provides to NASA.

Figure: This artist’s rendering of the Modular Supercomputing Facility (MSF) is shown in its actual location at NASA Ames, adjacent to the NASA Advanced Supercomputing (NAS) facility. The site is sized for two modules to be placed side by side. Utilities for the MSF will be carried to the site through underground conduits. Marco Librero, NASA/Ames

POCs: William Thigpen, [email protected], (650) 604-1061, NASA Advanced Supercomputing (NAS) Division; Chris Buchanan, [email protected], (650) 604-4308, NAS Division, Computer Sciences Corp.

HECC Deploys Re-Exporter To Share Kepler Data More Efficiently

• The HECC Systems team deployed a network file system (NFS) re-exporter to share Kepler mission data more efficiently.
• The re-exporter (called keprex) enables Kepler users to easily access the data on HECC from their remote workstations without duplicating the data.
• The Systems team first pioneered this approach for the NASA Earth Exchange (NEX) project to give users simple and secure access to the data stored within the HECC enclave via the NEX sandbox system.
• The first phase of this project enables users on remote workstations whose unique user identity information matches the keprex server information to access the Kepler data. In the second phase, keprex will enable users with different user identity information to be mapped after validation (see the illustrative sketch below).

Mission Impact: HECC’s in-house re-exporter technique enables projects to easily and securely access data stored within the HECC enclave without duplicating data.

Figure: HECC’s in-house Kepler re-exporter system (called keprex) accesses Kepler mission data via the InfiniBand network. Once user information is authenticated, keprex exports the data via 10 gigabit Ethernet.

POCs: Bob Ciotti, [email protected], (650) 604-4408, NASA Advanced Supercomputing (NAS) Division; Davin Chan, [email protected], (650) 604-3613, NAS Division, Computer Sciences Corp.
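The keprex implementation is not detailed in this report; the following is a minimal, hypothetical Python sketch of the access logic described above, in which phase one grants access only when the remote workstation’s identity information matches the keprex server’s record, and phase two maps a validated but different identity to a local one. The IdentityRecord type and the mapping table are illustrative assumptions, not part of keprex.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class IdentityRecord:
    """Minimal stand-in for the user identity information keprex compares."""
    username: str
    uid: int
    gid: int

def phase1_allow(remote: IdentityRecord, server: IdentityRecord) -> bool:
    """Phase 1: allow access only when the remote identity exactly matches
    the record held on the keprex server."""
    return remote == server

# Phase 2 (hypothetical table): a validated remote identity that differs from
# the server record is mapped to a local identity before the export is allowed.
IDENTITY_MAP = {
    IdentityRecord("jdoe-remote", 5001, 500): IdentityRecord("jdoe", 1234, 100),
}

def phase2_map(remote: IdentityRecord, validated: bool) -> Optional[IdentityRecord]:
    """Phase 2: return the local identity to use, or None to deny access."""
    if not validated:
        return None
    # Identities that already match local records need no mapping.
    return IDENTITY_MAP.get(remote, remote)
```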

HECC Experts Modernize Job Scripts to Improve Efficiency and System Utilization

• HECC Application Performance and Productivity (APP) experts worked with John Henderson, Atmospheric and Environmental Research (AER), to improve six-year-old job scripts for running the Stochastic Time-Inverted Lagrangian Transport (STILT) model, which is used for air quality studies.
• APP’s new method uses a combination of GNU Parallel and PBS job arrays, which together form the most efficient and powerful way to run a battery of serial jobs (see the illustrative sketch below).
– In addition to greatly simplifying the job scripts, this approach keeps all other sub-jobs running when one node crashes, frees up nodes more quickly to avoid idle nodes, and starts running sub-jobs on single nodes as they become available.
– Both the GNU Parallel and PBS job array methods are documented in the HECC Knowledge Base.
• Henderson is “very impressed with how smoothly the runs have progressed,” and will pass the new scripts to colleagues running STILT jobs at AER so that they can take advantage of this new methodology.

Mission Impact: HECC code expertise in modernizing outdated job scripts greatly simplifies workflows and improves job turnaround time and system utilization on Pleiades.

Figure: STILT tracing of 500 particles released backwards in time from the receptor location over 10 days. The image shows the last 4 hours of particle vector motion as each particle converges on the receptor location (yellow stick/ball near Fairbanks, Alaska). Particles are colored by height above ground, and turn pink when they are in the lower half of the Earth’s planetary boundary layer. The cumulative footprint field, shown as the larger-scale gridded boxes on the surface, increases as the particles accumulate pollutants. Tim Sandstrom, NASA/Ames; John Henderson, AER

POC: Johnny Chang, [email protected], (650) 604-4356, NASA Advanced Supercomputing Division, Computer Sciences Corp.
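The actual HECC job scripts are not reproduced in this report; below is a minimal, hypothetical Python sketch of the general pattern described above. A PBS job array launches one sub-job per node, and each sub-job works through its own slice of a task list concurrently (here Python’s concurrent.futures stands in for GNU Parallel, which performs this role in the real workflow). The task file name, chunking scheme, and worker count are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Sketch of one PBS array sub-job (e.g., submitted with `qsub -J 0-15 ...`).
Each sub-job occupies a single node and runs its own slice of the task list,
so a node crash only affects that slice, and nodes are released as soon as
their slice finishes."""

import os
import subprocess
from concurrent.futures import ProcessPoolExecutor

TASK_FILE = "tasks.txt"    # one serial command per line (assumed layout)
NUM_SUBJOBS = 16           # must match the -J range used at submission time
WORKERS_PER_NODE = 24      # serial tasks to run concurrently on this node

def run(cmd: str) -> int:
    """Run a single serial task and return its exit status."""
    return subprocess.call(cmd, shell=True)

def main() -> None:
    # PBS Pro exposes this sub-job's index in the PBS_ARRAY_INDEX variable.
    index = int(os.environ.get("PBS_ARRAY_INDEX", "0"))

    with open(TASK_FILE) as f:
        tasks = [line.strip() for line in f if line.strip()]

    # Round-robin slice: sub-job i takes tasks i, i+N, i+2N, ...
    my_tasks = tasks[index::NUM_SUBJOBS]

    with ProcessPoolExecutor(max_workers=WORKERS_PER_NODE) as pool:
        for cmd, status in zip(my_tasks, pool.map(run, my_tasks)):
            print(f"{cmd!r} exited with status {status}")

if __name__ == "__main__":
    main()
```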

HECC-Langley Collaboration Doubles Performance of Aerospace Structures Code

• Continuing a long-term collaboration, HECC Application Performance and Productivity (APP) experts and an applications group at Langley Research Center (LaRC) improved the performance of a code by a factor of 2.1. The teams jointly optimized the Scalable Implementation of Finite Elements by NASA (ScIFEN) code by:
– Identifying a more efficient iterative solver and matrix preconditioner.
– Identifying compiler flags that give the compiler more freedom to inline functions; that is, to replace a function call with a copy of the function itself.
– Installing compiler directives to inform the compiler, freeing it to fully optimize “hot loops”—those loops where the code spends most of its time.
– Recoding several loop nests to avoid unnecessary and expensive computation (see the illustrative sketch below).
• ScIFEN is the third code improved through this collaboration. In each case, the teams participate in multiple WebEx sessions to review previous analysis and optimization work and brainstorm new approaches. As the LaRC staff become more comfortable with the tools and techniques, they will be better able to develop codes suitable for modern HPC systems.

Mission Impact: Doubling the performance of the ScIFEN code means researchers can simulate twice the number of fatigue cycles on an aerospace vehicle in the same amount of time. This greatly reduces reliance on lower-fidelity methods that have higher uncertainties.

Figure: The Scalable Implementation of Finite Elements by NASA (ScIFEN) code, developed with support from a NASA Aeronautics Research Institute Seedling Fund award, produces computationally demanding simulations of material behavior—such as deformation and fatigue-cracking at the micro-scale—in aerospace vehicles. This simulation image shows cracking in the microstructure of an aluminum alloy.

POCs: Dan Kokron, [email protected], Robert Hood, [email protected], (650) 604-0740, NASA Advanced Supercomputing (NAS) Division, Computer Sciences Corp.
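ScIFEN’s actual loops are not shown in the report; the following is a generic, hypothetical Python sketch of the kind of loop-nest recoding mentioned above, in which an expensive computation that is invariant in the inner loop is hoisted out so it is evaluated once per outer iteration rather than once per element.

```python
import math

def accumulate_slow(rows):
    """Baseline: the row norm depends only on the outer index but is
    recomputed inside the inner loop for every element (assumes nonzero rows)."""
    total = 0.0
    for i in range(len(rows)):
        for j in range(len(rows[i])):
            norm = math.sqrt(sum(v * v for v in rows[i]))  # invariant in j
            total += rows[i][j] / norm
    return total

def accumulate_fast(rows):
    """Recoded: the invariant norm is hoisted out of the inner loop,
    eliminating the redundant work while producing the same result."""
    total = 0.0
    for row in rows:
        norm = math.sqrt(sum(v * v for v in row))  # computed once per row
        total += sum(v / norm for v in row)
    return total
```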

ESS Team Completes Deployment of Mac OS X 10.10 to Staff Workstations

• HECC’s Engineering Servers and Services (ESS) team completed the deployment of Mac OS X 10.10 (Yosemite) to 200 workstations and laptops used by staff at the NAS facility.
• The ESS team began the rollout in March, after the Security team approved the Yosemite image. Development for the Yosemite rollout included:
– Building the NAS image using Casper.
– Configuring security and FileVault.
– Upgrading Casper, our Mac enterprise management tool, to the latest version.
– Upgrading scientific software.
– Defining the optimal upgrade process.
– Completing a full data dump, system imaging, application installation, encryption, and data restore for each Mac upgraded.
• Using knowledge gained from the Yosemite upgrade, ESS has already begun the download and development of the new OS X 10.11 (El Capitan) build.

Mission Impact: Deployment of the Mac OS X 10.10 (Yosemite) image enables HECC to support the latest Mac workstations and laptops for local scientific users and support staff.

Figure: The upgrade to Mac OS X 10.10 (Yosemite) on HECC workstations and laptops is complete, and development of the OS X 10.11 (El Capitan) image is now underway.

POC: Chad Tran, [email protected], (650) 604-2780, NASA Supercomputing Division, Computer Sciences Corp.

Casper Server Now Accessible Over NAS VPN to Improve Mac Patching

• To ensure that critical security patches are applied to Macs as efficiently as possible, the HECC Networks and Engineering Servers and Services (ESS) teams opened up the Casper enterprise management server to the NAS virtual private network (VPN).
• Previously, the Casper server was only reachable by systems on the local NAS Ethernet. To allow patching of Macs on the NAS VPN, HECC staff took the following actions:
– Obtained security review and approval of the proposed service configuration.
– Opened up the necessary ports to make the Casper server accessible on the VPN.
– Modified Casper to recognize the MAC addresses of systems on the VPN, rather than recognizing only IP addresses.
• Security patches and monthly update patches can now be easily applied to systems used by HECC/NAS staff who are on travel, on leave, at remote locations, or telecommuting.

Mission Impact: By improving the process of applying security patches to staff systems not connected to the local network, HECC system administrators increase their ability to manage patches and reduce the risk of successful attacks on user systems.

Figure: The Casper application is now accessible on the NAS virtual private network to quickly push out the increasing number of Macintosh security patches.

POCs: Chris Shaw, [email protected], (650) 604-4354, Harjot Sidhu, [email protected], (650) 604-4935, NASA Supercomputing Division, Computer Sciences Corp.

ESS Team Rolls Out Beta Test of PIV Authentication on the Mac Using Centrify

• To meet NASA’s new requirement for multi-factor authentication on user systems, HECC’s Engineering Servers and Services (ESS) team rolled out a Mac beta test of Personal Identity Verification (PIV) smartcard authentication using the Centrify software.
• ESS worked through numerous issues to achieve the successful beta deployment, including:
– Obtained a NASA Data Center (NDC) organizational unit, with Centrify Zone technology defined for our systems, to allow ESS to manage the systems with the Centrify Server Suite.
– Developed and managed an account to authenticate to FileVault and bring up the login window for PIV authentication.
– Developed machine-based enrollment through configuration files for PIV authentication that will not lock out legacy technologies (such as wireless) that cannot use PIV authentication.
• Full rollout of Mac PIV smartcard authentication to staff is expected to start in early November.

Mission Impact: Transitioning HECC Mac systems to use PIV smartcards for computer login meets government-wide access requirements and will provide NASA with a proven model to roll out to all NASA Mac users.

Figure: Once the smartcard authentication method is in place on Macs at the NASA Advanced Supercomputing facility, staff will be prompted for a smartcard PIN instead of an NDC password to complete authentication to their systems.

POCs: Ted Bohrer, [email protected], (650) 604-4335, Ed Garcia, [email protected], (650) 604-1338, NASA Supercomputing Division, ADNET Systems

Developing Computational Aeroacoustics Tools for Airframe Noise Prediction *

• Computational aeroacoustics (CAA) plays a critical role in understanding and characterizing noise-generation sources for aircraft component designs. Modeling and simulation experts at NASA Ames are developing state-of-the-art CAA tools and predictive capabilities with their Launch Ascent and Vehicle Aerodynamics (LAVA) code.
• Using LAVA to run parallel computations across thousands of Pleiades nodes, the researchers:
– Demonstrated and validated high-fidelity, time-accurate, higher-order computational methods for complex noise-prediction problems.
– Utilized far-field noise propagation techniques to identify major airframe noise frequencies and sources.
– Assessed two analytical benchmark problems to advance techniques for predicting noise generated by landing gear and by the leading-edge slat of a high-lift wing—major contributors to aircraft noise during landing.
– Improved the resolution of unsteady flow features in simulations of the Four Jet Impingement Device, which is used to experimentally reproduce the broadband noise signatures of aircraft engines.
• Results compare well with experimental and computational results for key benchmark problems, and the innovative techniques provide an excellent method for future simulations.

Mission Impact: Reducing aircraft noise is a goal of NASA’s Aeronautics Research Mission Directorate. With the help of computational aeroacoustics tools and predictive capabilities, next-generation aircraft can meet more stringent noise-generation standards.

Figure: Image from a computational aeroacoustics simulation of landing gear for a benchmark airframe noise problem. The flow field is shown with isocontours of the q-criterion (a measure of vorticity) colored by Mach number. The simulation was computed on the Pleiades supercomputer using the Launch Ascent and Vehicle Aerodynamics (LAVA) code. Michael Barad, NASA/Ames

* HECC provided supercomputing resources and services in support of this work

POC: Cetin Kiris, [email protected], (650) 604-4485, NASA Advanced Supercomputing Division
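For reference (the standard definition, not stated in the report): the q-criterion used in the figure is the second invariant of the velocity-gradient tensor, Q = (1/2)(‖Ω‖² − ‖S‖²), where Ω is the rotation-rate (antisymmetric) part and S is the strain-rate (symmetric) part of the velocity gradient; regions with Q > 0 are rotation-dominated and are commonly used to visualize vortex cores.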

HECC Facility Hosts Several Visitors and Tours in October 2015

• HECC hosted 7 tour groups in October. Guests learned about the agency-wide missions being supported by HECC assets, and some groups also viewed the expanded D-Wave quantum computer system. Visitors this month included:
– Veronica La Regina, a former International Space University professor and current senior researcher for the Italian Space Agency, who was a guest of ARC management.
– 18 ARC fall interns, as part of an Ames center tour.
– 50 attendees of the International Forum for Aviation Research (IFAR) Summit 2016, an annual meeting held this year at ARC; the group included representatives from 25 countries.
– Members of the National Space Society, who received tours of various ARC facilities, including HECC.
– 45 executives from major Spanish companies, who visited Silicon Valley as part of an Innovation Tour organized by Oracle; the organizer requested a visit to ARC as part of the agenda. Attendees included chief information officers, chief technology officers, and chief innovation officers.
– 40 French business executives from Rebellion Labs, who toured ARC to learn how NASA conducts public/private partnerships and transfers technology from the public to the private sector.

Figure: Bryan Biegel, NAS deputy division chief, presented an overview of HECC project capabilities and services to Rebellion Lab guests, and gave them a tour of the Pleiades supercomputer room.

POC: Gina Morello, [email protected], (650) 604-4462, NASA Advanced Supercomputing Division

Papers

• “HEROIC: 3D General Relativistic Radiative Postprocessor with Comptonization for Black Hole Accretion Disks,” R. Narayan, Y. Zhu, D. Psaltis, A. Sadowski, arXiv:1510.04208 [astro-ph.HE], October 14, 2015. *
http://arxiv.org/abs/1510.04208
• “Compressibility and Density Fluctuations in Molecular-Cloud Turbulence,” L. Pan, P. Padoan, T. Haugbolle, A. Nordlund, arXiv:1510.04742 [astro-ph.GA], October 15, 2015. *
http://arxiv.org/abs/1510.04742
• “Formation of Globular Clusters in Atomic-Cooling Halos via Rapid Gas Condensation and Fragmentation During the Epoch of Reionization,” T. Kimm, R. Cen, J. Rosdahl, S. Yi, arXiv:1510.05671 [astro-ph.GA], October 19, 2015. *
http://arxiv.org/abs/1510.05671
• “Structure in Galaxy Distribution. III. Fourier Transforming the Universe,” J. Scargle, M. Way, P. Gazis, arXiv:1510.06129 [astro-ph.CO], October 21, 2015. *
http://arxiv.org/abs/1510.06129

* HECC provided supercomputing resources and services in support of this work

Presentations

• “Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach,” J. Stupl, B. Nelson, presented at the Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), Maui, HI, September 20–23, 2015. *
• “Hybrid Electric Distributed Propulsion Technologies for Large Commercial Aircraft: A NASA Perspective,” N. Madavan, presented at the 2015 IEEE Energy Conversion Congress & Exposition, Montreal, Canada, September 21–24, 2015. *
• “Evaluating and Improving HECC User Project Productivity,” W. Thigpen, presented at the HECC Board of Advisors Meeting, Washington, D.C., October 22, 2015.

* HECC provided supercomputing resources and services in support of this work

News and Events

• Explosions and Plasma Jets Associated with Sunspot Formation Revealed, Space Daily, October 5, 2015—A team of scientists analyzed observations of sunspots from the Solar Dynamics Observatory (SDO) and the Interface Region Imaging Spectrograph (IRIS) missions by performing numerical simulations on the Pleiades supercomputer.
http://www.spacedaily.com/reports/Mechanism_of_explosions_and_plasma_jets_associated_with_sunspot_formation_revealed_999.html
• Video: Prologue O/S—Improving the Odds of Job Success, insideHPC, October 12, 2015—In this video from the 2015 PBS Works User Group meeting, Dale Talcott from the NASA Advanced Supercomputing (NAS) Division discusses how NAS has customized PBS to identify problem nodes before running jobs.
http://insidehpc.com/2015/10/video-prologue-os-improving-the-odds-of-job-success/
• SC15 Releases Short Video Explaining Why High Performance Computing is Important to NASA, The Official SC15 Blog, October 15, 2015—In this video, NAS Division aerospace engineer Shishir Pandya explains how HPC helps advance airplane and rocket technologies to save fuel and make travel more affordable for the public.
• How HPC is Helping Define the Future of Aviation, Scientific Computing, October 20, 2015—As a follow-up to the SC15 #HPCMatters video, Shishir Pandya blogs about how NASA uses supercomputers to help design fuel-efficient aircraft.
http://www.scientificcomputing.com/blogs/2015/10/how-hpc-helping-define-future-aviation

News and Events

• Central Valley Idle Farmland Doubling During Drought, Central Valley Business Times, October 26, 2015—The current drought has idled more than 1.03 million acres of land in the Central Valley, roughly 15 percent of the 7 million acres of irrigated farmland in the Valley, according to a study by NASA in collaboration with the USDA, USGS, and the California Department of Water Resources. The compute-intensive analysis was conducted through the NASA Earth Exchange using HECC resources at the NAS facility.
http://www.centralvalleybusinesstimes.com/stories/001/?ID=29314
– Idle Farmland In California’s Central Valley Doubles During Drought, Growing Produce, October 27, 2015.
http://www.growingproduce.com/vegetables/idle-farmland-in-californias-central-valley-doubles-during-drought/
• SGI Reports Fiscal First Quarter 2016 Financial Results, SGI press release, October 28, 2015—SGI released its first-quarter financial results for fiscal year 2016, highlighting recent achievements, including Pleiades’ debut at number five on the June 2015 HPCG benchmark list.
http://investors.sgi.com/releasedetail.cfm?ReleaseID=939024
– SGI Reports Fiscal First Quarter 2016 Financial Results, HPCwire, October 29, 2015.
http://www.hpcwire.com/off-the-wire/sgi-reports-fiscal-first-quarter-2016-financial-results/

HECC Utilization

[Chart: Percentage utilization of Pleiades, Endeavour, and Merope (Production) for October 2015, broken out by category: Used, Boot, Degraded, Down, Dedicated, No Jobs, Not Schedulable, Non-Chargeable, Queue Not Schedulable, Held, Insufficient CPUs, Unused Devel Queue, Limits Exceeded, Dedtime Drain, Job Drain, and Share Limit.]

HECC Utilization Normalized to 30-Day Month

[Chart: Standard Billing Units (0–18,000,000) used per month, normalized to a 30-day month, broken out by ARMD, HEOMD, SMD, NESC, NLCS, and NAS, with the allocation to organizations shown for comparison.]
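A note on the normalization (an interpretation; the report does not define it): "normalized to a 30-day month" presumably means each month's reported Standard Billing Units are scaled by 30 divided by the number of days in that month, so that months of different lengths are directly comparable.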

HECC Utilization Normalized to 30-Day Month

[Charts: Standard Billing Units in millions used per month by ARMD, SMD, and HEOMD+NESC, each plotted against its allocation with agency reserve. Numbered annotations on the charts mark system changes:]
1. 6 Ivy Bridge racks added; 20 Nehalem and 12 Westmere racks retired from Pleiades.
2. 8 Ivy Bridge racks added mid-February and 8 Ivy Bridge racks added late February to Pleiades.
3. 4 Ivy Bridge racks added mid-March to Pleiades.
4. 6 Westmere racks added to Merope; Merope Harpertown racks retired.
5. 16 Westmere racks retired, 3 Ivy Bridge racks added, and 15 Haswell racks added to Pleiades; 10 Nehalem racks and 2 Westmere racks added to Merope.
6. 16 Westmere racks retired from Pleiades.
7. 14 Haswell racks added to Pleiades.
8. 7 Nehalem racks removed from Merope.
9. 7 Westmere racks added to Merope.

Tape Archive Status

[Chart: Tape archive status for October 2015 in petabytes (0–140): unique file data, unique tape data, total tape data, tape capacity, and tape library capacity, broken out by ARMD, HEOMD, SMD, NESC, NLCS, NAS, non-mission-specific, HECC, used, and capacity.]

Tape Archive Status

[Chart: Monthly trend, in petabytes (0–140), of tape library capacity, tape capacity, total tape data, and unique tape data.]

Pleiades: SBUs Reported, Normalized to 30-Day Month

[Chart: Pleiades Standard Billing Units (0–18,000,000) reported per month, normalized to a 30-day month, broken out by ARMD, HEOMD, SMD, NESC, NLCS, and NAS, with the allocation to organizations.]

Pleiades: Devel Queue Utilization

[Chart: Pleiades devel queue utilization in Standard Billing Units (0–1,750,000), broken out by ARMD, HEOMD, SMD, NESC, NLCS, and NAS, with the devel queue allocation.]

Pleiades: Monthly Utilization by Job Length

[Chart: Pleiades utilization for October 2015 in Standard Billing Units (0–3,000,000), binned by job run time: 0–1, >1–4, >4–8, >8–24, >24–48, >48–72, >72–96, >96–120, and >120 hours.]

Pleiades: Monthly Utilization by Size and Mission

[Chart: Pleiades utilization for October 2015 in Standard Billing Units (0–4,500,000), binned by job size from 1–32 up to 16,385–32,768 cores and broken out by ARMD, HEOMD, SMD, NESC, NLCS, and NAS.]

Pleiades: Monthly Utilization by Size and Length

[Chart: Pleiades utilization for October 2015 in Standard Billing Units (0–4,500,000), binned by job size from 1–32 up to 16,385–32,768 cores and by job run time from 0–1 hours to >120 hours.]

Pleiades: Average Time to Clear All Jobs

[Chart: Average time to clear all Pleiades jobs, in hours (0–264), for ARMD, HEOMD/NESC, and SMD, November 2014 through October 2015.]

Pleiades: Average Expansion Factor

[Chart: Pleiades average expansion factor (1.00–9.00) for ARMD, HEOMD, and SMD, November 2014 through October 2015.]
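For context (a general definition; the report does not define the metric): the expansion factor of a job is commonly computed as (queue wait time + run time) / run time, so a value of 1.00 means jobs ran with essentially no queue wait, and larger values indicate longer waits relative to run time.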

Endeavour: SBUs Reported, Normalized to 30-Day Month

[Chart: Endeavour Standard Billing Units (0–100,000) reported per month, normalized to a 30-day month, broken out by ARMD, HEOMD, SMD, NESC, NLCS, and NAS, with the allocation to organizations.]

Endeavour: Monthly Utilization by Job Length

[Chart: Endeavour utilization for October 2015 in Standard Billing Units (0–18,000), binned by job run time from 0–1 hours to >120 hours.]

Endeavour: Monthly Utilization by Size and Mission

[Chart: Endeavour utilization for October 2015 in Standard Billing Units (0–30,000), binned by job size from 1–32 up to 513–1,024 cores and broken out by ARMD, HEOMD, SMD, NESC, and NAS.]

Endeavour: Monthly Utilization by Size and Length

[Chart: Endeavour utilization for October 2015 in Standard Billing Units (0–30,000), binned by job size from 1–32 up to 513–1,024 cores and by job run time from 0–1 hours to >120 hours.]

Endeavour: Average Time to Clear All Jobs

[Chart: Average time to clear all Endeavour jobs, in hours (0–60), for ARMD, HEOMD/NESC, and SMD, November 2014 through October 2015.]

Endeavour: Average Expansion Factor

[Chart: Endeavour average expansion factor (1.00–4.00) for ARMD, HEOMD, and SMD, November 2014 through October 2015.]

Merope: SBUs Reported, Normalized to 30-Day Month

[Chart: Merope Standard Billing Units (0–700,000) reported per month, normalized to a 30-day month, broken out by ARMD, HEOMD, SMD, NESC, NLCS, and NAS, with the allocation to organizations.]

Merope: Monthly Utilization by Job Length

[Chart: Merope utilization for October 2015 in Standard Billing Units (0–500,000), binned by job run time from 0–1 hours to >120 hours.]

Merope: Monthly Utilization by Size and Mission

[Chart: Merope utilization for October 2015 in Standard Billing Units (0–400,000), binned by job size from 1–32 up to 4,097–8,192 cores and broken out by ARMD, HEOMD, SMD, NESC, and NAS.]

Merope: Monthly Utilization by Size and Length

[Chart: Merope utilization for October 2015 in Standard Billing Units (0–400,000), binned by job size from 1–32 up to 4,097–8,192 cores and by job run time from 0–1 hours to >120 hours.]

Merope: Average Time to Clear All Jobs

[Chart: Average time to clear all Merope jobs, in hours (0–36), for ARMD, HEOMD/NESC, and SMD, November 2014 through October 2015.]

Merope: Average Expansion Factor

[Chart: Merope average expansion factor (1.00–4.00) for ARMD, HEOMD, and SMD, November 2014 through October 2015.]
