Project Status Report
High-End Computing Capability
Strategic Capabilities Assets Program
November 10, 2016

Dr. Rupak Biswas – Project Manager
NASA Ames Research Center, Moffett Field, CA
Rupak.Biswas@nasa.gov, (650) 604-4411
www.nasa.gov

HECC Facilities Team Completes Bringing Infrastructure Online and Testing of MSF

• Working with NASA Ames Code J and SGI/CommScope staff, the HECC Facilities team connected the Modular Supercomputing Facility (MSF) to site electrical utilities, a 2.5-megawatt transformer, water supply, drainage, and emergency services (such as fire and physical security).
• The MSF's first system, Electra, is now operational and consumes 440 kilowatts of power under a LINPACK benchmark load. The initial Power Usage Effectiveness (PUE) during the LINPACK run (slide 4) was between 1.02 and 1.03.
• The HECC team received training on, and has taken over responsibility for, operation of the Electra module. The MSF control systems are fully functional, and environmental testing is in progress.
• Electra remains on schedule to go into production in December 2016.

Mission Impact: By working closely with NASA Ames Center Operations and SGI/CommScope staff, the HECC Modular Supercomputing Facility (MSF) project team ensured the successful installation of the first MSF module.

Left: The Modular Supercomputing Facility's Programmable Logic Controller (PLC) user interface. Right: The MSF construction site, located across from the NASA Advanced Supercomputing facility at NASA's Ames Research Center.

POCs: William Thigpen, [email protected], (650) 604-1061, NASA Advanced Supercomputing (NAS) Division; Chris Tanner, [email protected], (650) 604-6754, NAS Division, CSRA LLC

Successful LINPACK and HPCG Benchmark Runs Completed on Pleiades

• During dedicated time, HECC and SGI engineers ran the LINPACK and High Performance Conjugate Gradient (HPCG) benchmarks on Pleiades.
• Pleiades achieved 5.95 petaflops (PF) on the LINPACK benchmark, with 83.7% efficiency, using the majority of the system. This result would have moved the system up to 9th place on the June 2016 TOP500 list. Pleiades has a theoretical peak performance of 7.25 PF (see the efficiency check below).
• Pleiades achieved 175 teraflops on the HPCG benchmark. This result would have moved the system up to 7th place on the June 2016 HPCG list.
• The benchmarks provide a good stress test of the system and help to quickly identify marginal hardware. Over 60 components were identified as sub-optimal and replaced during this test.

Mission Impact: Running the LINPACK and HPCG benchmarks across a large portion of Pleiades provides HECC with a good method to identify and address system issues, thereby improving overall reliability for users.

The LINPACK and HPCG benchmarks are widely used to evaluate the performance of different high-performance computers, and provide two complementary viewpoints on performance for different workloads.

POCs: Bob Ciotti, [email protected], (650) 604-4408, NASA Advanced Supercomputing (NAS) Division; Davin Chan, [email protected], (650) 604-3613, NAS Division, CSRA LLC
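As a rough cross-check of these figures: LINPACK efficiency is the measured Rmax divided by the theoretical Rpeak of the configuration actually benchmarked. The short Python sketch below is a reader's aid only, not HECC's reporting code; it uses the numbers from this slide to back out the implied peak of the partial-system run.

```python
# Back-of-the-envelope check of the reported LINPACK numbers (illustrative only).

def linpack_efficiency(rmax_pf: float, rpeak_pf: float) -> float:
    """LINPACK efficiency (%) = measured Rmax / theoretical Rpeak of the nodes used."""
    return 100.0 * rmax_pf / rpeak_pf

rmax = 5.95                # measured petaflops from this run
full_system_peak = 7.25    # Pleiades full-system theoretical peak, in PF

# Against the full-system peak, the efficiency would be about 82%:
print(f"{linpack_efficiency(rmax, full_system_peak):.1f}%")   # -> 82.1%

# The reported 83.7% therefore implies the nodes in the run had a combined peak of roughly:
implied_rpeak = 100.0 * rmax / 83.7
print(f"{implied_rpeak:.2f} PF")                              # -> ~7.11 PF
# ...consistent with "the majority of the system" rather than all of it.
```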

Successful LINPACK and HPCG Benchmark Runs Completed on Electra

• SGI completed running the LINPACK and HPCG benchmarks on the Electra supercomputer as part of a stress test.
• Electra achieved 1.096 petaflops (PF) on the LINPACK benchmark, with 88.5% efficiency (95.7% efficiency based on the Advanced Vector Extensions, or AVX, frequency). This result would have placed the system in 80th place on the June 2016 TOP500 list. Electra has a theoretical peak performance of 1.2 PF.
• Electra achieved 25.188 teraflops on the HPCG benchmark. This result would have placed the system in 38th place on the June 2016 HPCG list.
• The Power Usage Effectiveness (PUE) measurement, which is the total facility power divided by the IT equipment power, fluctuated between 1.02 and 1.03 during the LINPACK benchmark. A PUE of 1.0 would indicate a system that used no power for cooling (see the sketch below).

Mission Impact: HECC's new Electra system demonstrates that a high-performance computer can be power-efficient while still providing the resources to meet NASA's growing computational requirements.

The Electra system, consisting of 1,152 Broadwell nodes, is able to run efficiently while running demanding benchmarks like LINPACK and HPCG.

POCs: Bob Ciotti, [email protected], (650) 604-4408, NASA Advanced Supercomputing (NAS) Division; Davin Chan, [email protected], (650) 604-3613, NAS Division, CSRA LLC
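To make the PUE numbers concrete, the sketch below (a reader's illustration, not a facility tool) converts the reported 1.02-1.03 range into the implied total facility power for Electra's 440-kilowatt LINPACK load.

```python
# Illustration of the PUE arithmetic described above (helper written for this report summary).

def facility_power(it_power_kw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so total = PUE * IT power."""
    return pue * it_power_kw

it_load_kw = 440.0                    # Electra's IT load during LINPACK
for pue in (1.02, 1.03):
    total = facility_power(it_load_kw, pue)
    overhead = total - it_load_kw     # power spent on cooling and distribution
    print(f"PUE {pue:.2f}: total ~{total:.0f} kW, overhead ~{overhead:.0f} kW")
# -> roughly 449-453 kW total, i.e. only about 9-13 kW beyond the IT load itself.
```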

HECC Networks Team Leads Agency Design on DNS Service

• The agency's Domain Name System (DNS) service was centralized to run out of Marshall Space Flight Center by the NASA Integrated Communications Services (NICS) contract under the Discrete Data Input/Internet Protocol Address Management (DDI/IPAM) project. HECC provided key requirements at the start of the project that were later adopted by the agency as a more reliable and standard model.
– The HECC Networks team provided requirements for high availability across DNS servers in a virtual cluster pair: if one DNS server goes down, another takes over its role, resulting in no downtime (a client-side illustration of this idea follows this slide).
– HECC's NASLAN network was the first to have this implemented, and the agency now has five other sites set up with a similar architecture.
– The HECC requirement for mirrored redundant array of independent disks (RAID) hard drives, to protect against drive failure, is now standard practice across all DNS servers to increase availability.
• NASLAN continues to be the only network site in the agency in compliance with IPv6 requirements to have its DNS servers supporting native IPv6. We will continue working with NICS as the IPv6 part of this effort moves forward.

Mission Impact: HECC network architecture designs are a model for the agency, and lead to reduced downtime and higher availability of DNS service to all of NASA.

This diagram shows the high-availability Domain Name System (DNS) architecture that HECC network engineers helped the NASA Integrated Communications Services team design.

POC: Nichole Boscia, [email protected], (650) 604-0891, NASA Advanced Supercomputing Division, CSRA LLC
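The cluster-pair failover itself is implemented on the server side by NICS; the snippet below is only a hedged, client-side illustration of the same availability idea, written with the third-party dnspython package and placeholder resolver addresses, showing how a lookup can fall back to a secondary server when the primary is unreachable.

```python
# Hypothetical client-side illustration of DNS failover (not the NICS design).
# Requires the third-party "dnspython" package; server addresses are placeholders.
import dns.resolver

NAMESERVERS = ["192.0.2.10", "192.0.2.11"]   # example primary/secondary pair

def resolve_with_failover(hostname: str, record_type: str = "AAAA") -> list[str]:
    """Try each nameserver in turn; return the first successful answer set."""
    last_error = None
    for server in NAMESERVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 2.0              # seconds before giving up on this server
        try:
            answer = resolver.resolve(hostname, record_type)
            return [rr.to_text() for rr in answer]
        except Exception as exc:             # timeout, SERVFAIL, etc.
            last_error = exc
    raise RuntimeError(f"All nameservers failed for {hostname}") from last_error

# Example use: look up an IPv6 (AAAA) record, matching NASLAN's native-IPv6 support.
# print(resolve_with_failover("www.nasa.gov"))
```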

Assessing Liquid Rocket Engine Feed Line Flow for the Space Launch System *

• Researchers at NASA Marshall Space Flight Center ran simulations on Pleiades to help designers better understand how fuel will flow within the Space Launch System (SLS) liquid propulsion system.
– The section of the SLS propulsion system that was modeled included a liquid hydrogen (LH) duct bend, a ball-strut tie rod joint, a gimbal joint, and the RS-25 engine's low-pressure fuel pump (LPFP).
– Three distinct cases were modeled: the LH feed line with no angulation and no inducer; the LH feed line with no angulation, coupled to the RS-25 LPFP; and the LH feed line with a 4.5-degree gimbal angle, coupled to the RS-25 LPFP.
– The simulations assessed the flow environment in the LH feed line upstream of the LPFP, at normal RS-25 operating conditions during SLS ascent.
– Flow uniformity, a key factor in LPFP performance and operation, was calculated for the three cases and compared (one common way to quantify it is sketched below). The various sources of flow variation were identified and their relative contributions to the flow field were determined.
• Results showed that the current configuration appears to have an acceptable flow environment.

Mission Impact: HECC supercomputing resources enabled modeling of the full-scale geometry with fine details, providing designers of the SLS liquid propulsion system with information that can help ensure its components can withstand the complex flow environments generated by the propulsion system.

A planar cut through the simulation model of the Space Launch System feed line, showing the average axial velocity of the liquid propellant near the pump inlet for the straight duct with the RS-25 low-pressure fuel pump.

POC: Andrew Mulder, [email protected], (256) 544-6750, NASA Marshall Space Flight Center

* HECC provided supercomputing resources and services in support of this work
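The slide does not state which uniformity metric the Marshall team used; purely as a hedged illustration, the sketch below computes one commonly used area-weighted velocity-uniformity index on a duct cross-section.

```python
# Illustrative area-weighted velocity-uniformity index (assumed metric, not
# necessarily the one used in the SLS feed-line study).
import numpy as np

def uniformity_index(u: np.ndarray, area: np.ndarray) -> float:
    """Return gamma in [0, 1]: 1.0 means a perfectly uniform axial velocity profile.

    u    -- axial velocity sampled on cells of a cross-sectional plane
    area -- corresponding cell areas
    """
    u_mean = np.sum(u * area) / np.sum(area)    # area-weighted mean velocity
    gamma = 1.0 - np.sum(np.abs(u - u_mean) * area) / (2.0 * abs(u_mean) * np.sum(area))
    return float(gamma)

# Tiny synthetic example: a slightly skewed velocity profile on four equal-area cells.
u = np.array([9.5, 10.0, 10.5, 10.0])
area = np.ones_like(u)
print(f"uniformity index = {uniformity_index(u, area):.3f}")   # -> 0.988, close to 1.0
```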

Enabling NASA Earth Exchange (NEX) Petabyte Data Production Systems *

• To support a growing number of projects that are developing new, large-scale science data products, the NEX team developed a hybrid HPC-cloud data production system during the last year.
• The NEX system represents both the production rules and the execution of those rules as a directed graph, from which users can analyze and reproduce scientific results (a toy sketch of this idea follows this slide).
• The system makes it possible for small science teams to execute projects that would not have been possible previously. Examples include:
– Production pipeline for Landsat and NASA Global Imagery and Browse Service: 37 steps, 7 petabytes of data, and 670,000 processor-hours.
– NEX Downscaled Climate Projections (NEX-DCP30) and NEX Global Daily Downscaled Projections (NEX-GDDP) pipeline: 40 terabytes (TB) of data and 320,000 processor-hours.
– Estimating biomass by counting trees across the United States using machine learning and computer vision: over 40 TB of data and 100,000 processor-hours.
• Over the last year, NEX projects used more than 9 million processor-hours on Pleiades and processed more than 3 petabytes of data.

Mission Impact: The Pleiades supercomputer, combined with HECC's massive data storage and high-speed networks, enables the NASA Earth Exchange (NEX) team to engage large scientific communities and provide large-scale modeling and data analysis capabilities not previously available to most scientists.

NEX production pipeline example: Preliminary national forest disturbance map from the North American Forest Dynamics (NAFD) team, developed using a combination of 25 years of Landsat data (1985–2010) and 200,000 compute hours. The product is a wall-to-wall forest disturbance map for the entire United States at 30-meter spatial resolution.

POCs: Petr Votava, [email protected], (650) 604-4675, Andrew Michaelis, [email protected], (650) 604-5413, NASA Ames Research Center

* HECC provided supercomputing resources and services in support of this work
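The NEX production system itself is not shown here; purely as a hedged illustration of the "rules as a directed graph" idea, the toy sketch below stores hypothetical pipeline steps and their dependencies in a small DAG and executes them in topological order, which is also what makes a run inspectable and reproducible.

```python
# Toy directed-graph pipeline (illustrative only; not the NEX implementation).
from graphlib import TopologicalSorter   # Python 3.9+ standard library

# Each production "rule" names the steps it depends on (hypothetical step names).
rules = {
    "ingest_landsat":  set(),
    "calibrate":       {"ingest_landsat"},
    "cloud_mask":      {"calibrate"},
    "mosaic":          {"cloud_mask"},
    "disturbance_map": {"mosaic"},
}

def run_pipeline(rules: dict[str, set[str]]) -> list[str]:
    """Execute steps in dependency order and return the order used (the provenance)."""
    order = list(TopologicalSorter(rules).static_order())
    for step in order:
        print(f"running {step} (depends on: {sorted(rules[step]) or 'nothing'})")
    return order

# The returned order doubles as a record that lets a result be re-derived later.
provenance = run_pipeline(rules)
```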

HECC Facility Hosts Several Visitors and Tours in October 2016

• HECC hosted 9 tour groups in October; guests learned about the agency-wide missions being supported by HECC assets, and some groups also viewed the D-Wave 2X quantum computer system. Visitors this month included:
– Peter Hughes, Center Chief Technologist for NASA Goddard Space Flight Center.
– A group from the United Nations interested in technological partnership opportunities with NASA Ames Research Center in planetary sustainability/Earth science and space exploration.
– Technical staff from the Office of Naval Research.
– A group from the California Council on Science and Technology.
– A large group of students from Mt. Diablo High School in Oakland, part of an at-risk-youth program.
– Members of the 2016 Haas School of Business, Berkeley Innovation Fair.
– Students from the Ames Fall Internship program.

NAS Division Chief Piyush Mehrotra presents computational fluid dynamics results on the NAS facility hyperwall to members of the 2016 Haas School of Business, Berkeley Innovation Fair.

POC: Gina Morello, [email protected], (650) 604-4462, NASA Advanced Supercomputing Division

Papers

• “Extension of the MURaM Radiative MHD Code for Coronal Simulations,” M. Rempel, arXiv:1609.09818 [astro-ph.SR], September 30, 2016. *
https://arxiv.org/abs/1609.09818
• “Cosmological Hydrodynamic Simulations of Preferential Accretion in the SMBH of Milky Way Size Galaxies,” N. Sanchez, et al., arXiv:1610.01155 [astro-ph.GA], October 4, 2016. *
https://arxiv.org/abs/1610.01155
• “The Influence of Daily Meteorology on Boreal Fire Emissions and Regional Trace Gas Variability,” E. Wiggins, et al., Journal of Geophysical Research: Biogeosciences, October 6, 2016. *
http://onlinelibrary.wiley.com/doi/10.1002/2016JG003434/full
• “Properties of Dark Matter Halos as a Function of Local Environment Density,” C. Lee, et al., arXiv:1610.02108 [astro-ph.GA], October 7, 2016. *
https://arxiv.org/abs/1610.02108
• “Colors, Star Formation Rates, and Environments of Star Forming and Quiescent Galaxies at the Cosmic Noon,” R. Feldmann, et al., arXiv:1610.02411 [astro-ph.GA], October 7, 2016. *
https://arxiv.org/abs/1610.02411

* HECC provided supercomputing resources and services in support of this work

Papers (cont.)

• “Optimization of Flexible Wings and Distributed Flaps at Off-Design Conditions,” D. Rodriguez, M. Aftosmis, M. Nemec, G. Anderson, Journal of Aircraft (AIAA), published online October 7, 2016. *
http://arc.aiaa.org/doi/abs/10.2514/1.C033535
• “The Role of Baryons in Creating Statistically Significant Planes of Satellites Around Milky Way-Mass Galaxies,” S. Ahmed, A. Brooks, C. Christensen, arXiv:1610.03077 [astro-ph.GA], October 10, 2016. *
https://arxiv.org/abs/1610.03077
• “Why Do High-Redshift Galaxies Show Diverse Gas-Phase Metallicity Gradients?,” X. Ma, et al., arXiv:1610.03498 [astro-ph.GA], October 11, 2016. *
https://arxiv.org/abs/1610.03498
• “Effect of Thermal Nonequilibrium on Ignition in Scramjet Combustors,” R. Fievet, et al., Proceedings of the Combustion Institute, published online October 13, 2016. *
http://www.sciencedirect.com/science/article/pii/S1540748916304552
• “Incompressible Modes Excited by Supersonic Shear in Boundary Layers: Acoustic CFS Instability,” M. Belyaev, arXiv:1610.04289 [astro-ph.HE], October 13, 2016. *
https://arxiv.org/abs/1610.04289
• “SatCNN: Satellite Image Dataset Classification Using Agile Convolutional Neural Networks,” Y. Zhong, et al., Remote Sensing Letters, Vol. 8, Issue 2, October 25, 2016.
http://www.tandfonline.com/doi/full/10.1080/2150704X.2016.1235299

* HECC provided supercomputing resources and services in support of this work

Papers (cont.)

• “Shocks and Angular Momentum Flips: A Different Path to Feeding the Nuclear Regions of Merging Galaxies,” P. Capelo, M. Dotti, arXiv:1610.08507 [astro-ph.GA], October 26, 2016. *
https://arxiv.org/abs/1610.08507
• “Conversion of Internal Gravity Waves into Magnetic Waves,” D. Lecoanet, et al., arXiv:1610.08506 [astro-ph.SR], October 26, 2016. *
https://arxiv.org/abs/1610.08506
• “Quantifying Supernovae-Driven Multiphase Outflows,” M. Li, et al., arXiv:1610.08971 [astro-ph.GA], October 27, 2016. *
https://arxiv.org/abs/1610.08971
• “A NASA Perspective on Quantum Computing: Opportunities and Challenges,” R. Biswas, et al., Parallel Computing Journal, accepted for publication, October 27, 2016.
http://www.sciencedirect.com/science/article/pii/S0167819116301326

* HECC provided supercomputing resources and services in support of this work

News and Events

• Looking to the Stars, The La Grande Observer, October 21, 2016—Adam Moreno, a forest ecologist, has been awarded a fellowship from NASA to do research at its Ames Research Center with assistance from the Pleiades supercomputer and data from the space agency’s satellite system. He will use these tools to develop a model for forecasting when a forest is reaching the verge of catastrophic collapse. “I want to develop an early warning system,” Moreno said. http://www.lagrandeobserver.com/news/local/4777546-151/looking-to-the-stars

HECC Utilization

[Bar chart: October 2016 utilization of the Pleiades, Endeavour, and Merope production systems, expressed as a percentage and broken down by category: Used, Boot, Degraded, Down, Dedicated, No Jobs, Not Schedulable, Non-Chargeable, Queue Not Schedulable, Held, Insufficient CPUs, Unused Devel Queue, Limits Exceeded, Dedtime Drain, Job Drain, and Share Limit.]

HECC Utilization Normalized to 30-Day Month

[Bar chart: monthly utilization in Standard Billing Units (SBUs), 0 to 22,000,000, by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NLCS, NAS) against the allocation to organizations; a normalization sketch follows.]
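The charts above and below report usage "normalized to a 30-day month"; assuming the usual meaning of that phrase (scaling each calendar month's total by 30 divided by the number of days in the month, so months of different lengths are comparable), a minimal sketch is:

```python
# Minimal sketch of 30-day-month normalization (assumed formula; the SBU total
# used here is illustrative, not taken from the charts).
import calendar

def normalize_to_30_day_month(sbus: float, year: int, month: int) -> float:
    """Scale a month's Standard Billing Units as if every month had 30 days."""
    days = calendar.monthrange(year, month)[1]
    return sbus * 30.0 / days

# October has 31 days, so an October total is scaled down slightly:
print(f"{normalize_to_30_day_month(20_000_000, 2016, 10):,.0f}")  # -> 19,354,839
```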

National Aeronautics and Space Administration November 10, 2016 High-End Computing Capability Project 14 HECC Utilization Normalized to 30-Day Month

12 12 SMD ARMD 10 10 6 89 8 5 8 1 34 9 7 6 2 6 4 8 1 3 56 4 4 2 7

2 2

0 0 Standard Billing Units in Millions Millions in Units Billing Standard Standard Billing Units in Millions Millions in Units Billing Standard

SMD SMD Allocation With Agency Reserve ARMD ARMD Allocation With Agency Reserve

12 HEOMD, NESC 10 1 16 Westmere racks rered from Pleiades 2 8 14 Haswell racks added to Pleiades 3 7 Nehalem ½ racks rered from Merope 9 4 6 3 7 Westmere ½ racks added to Pleiades 4 5 8 5 16 Westmere racks rered from Pleiades 2 6 4 1 7 6 10 Broadwell racks added to Pleiades 7 4 Broadwell racks added to Pleiades 2 8 14 (all) Westmere racks rered from Pleiades 9 0 14 Broadwell racks added to Pleiades (10.5 Dedicated to ARMD) Standard Billing Units in Millions Millions in Units Billing Standard

HEOMD NESC HEOMD+NESC Allocation

Tape Archive Status

[Bar chart: October 2016 tape archive status in petabytes (0–600), showing Unique File Data, Unique Tape Data, Total Tape Data, Tape Capacity, and Tape Library Capacity, with usage broken down by ARMD, HEOMD, SMD, NESC, NLCS, NAS, Non Mission Specific, and HECC against capacity and used space.]

Tape Archive Status

[Trend chart: tape archive growth in petabytes (0–600), tracking Unique Tape Data, Total Tape Data, Tape Capacity, and Tape Library Capacity over time.]

Pleiades: SBUs Reported, Normalized to 30-Day Month

[Bar chart: monthly Pleiades usage in Standard Billing Units (0 to 22,000,000) by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NLCS, NAS) against the allocation to organizations.]

Pleiades: Devel Queue Utilization

[Bar chart: monthly devel queue usage in Standard Billing Units (0 to 2,000,000) by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NLCS, NAS) against the devel queue allocation.]

Pleiades: Monthly Utilization by Job Length

[Bar chart: October 2016 Pleiades usage in Standard Billing Units (0 to 4,500,000) binned by job run time: 0-1, >1-4, >4-8, >8-24, >24-48, >48-72, >72-96, >96-120, and >120 hours.]

Pleiades: Monthly Utilization by Size and Mission

[Bar chart: October 2016 Pleiades usage in Standard Billing Units (0 to 4,000,000) binned by job size (1-32 up to 65537-262144 cores), broken down by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NLCS, NAS).]

Pleiades: Monthly Utilization by Size and Length

[Bar chart: October 2016 Pleiades usage in Standard Billing Units (0 to 4,000,000) binned by job size (1-32 up to 65537-262144 cores), broken down by job run time (0-1 through >120 hours).]

Pleiades: Average Time to Clear All Jobs

[Line chart: average time, in hours (0–312), to clear all queued jobs from November 2015 through October 2016, for ARMD, HEOMD/NESC, and SMD.]

Pleiades: Average Expansion Factor

[Line chart: average expansion factor (1.00–10.00) from November 2015 through October 2016, for ARMD, HEOMD, and SMD; see the note on the expansion factor below.]
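For readers unfamiliar with the metric: the deck does not define the expansion factor, but under its usual definition (assumed here) it is a job's total turnaround time divided by its run time, so a value near 1.0 means jobs start almost immediately.

```python
# Assumed definition of the expansion factor (not stated in the deck):
# turnaround (queue wait + run time) divided by run time.

def expansion_factor(wait_hours: float, run_hours: float) -> float:
    return (wait_hours + run_hours) / run_hours

# Example: a job that waited 6 hours and ran for 12 hours has an expansion factor of 1.5.
print(expansion_factor(6.0, 12.0))   # -> 1.5
```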

Endeavour: SBUs Reported, Normalized to 30-Day Month

[Bar chart: monthly Endeavour usage in Standard Billing Units (0 to 100,000) by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NLCS, NAS) against the allocation to organizations.]

Endeavour: Monthly Utilization by Job Length

[Bar chart: October 2016 Endeavour usage in Standard Billing Units (0 to 10,000) binned by job run time: 0-1 through >120 hours.]

Endeavour: Monthly Utilization by Size and Mission

[Bar chart: October 2016 Endeavour usage in Standard Billing Units (0 to 20,000) binned by job size (1-32 up to 513-1024 cores), broken down by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NAS).]

Endeavour: Monthly Utilization by Size and Length

[Bar chart: October 2016 Endeavour usage in Standard Billing Units (0 to 20,000) binned by job size (1-32 up to 513-1024 cores), broken down by job run time (0-1 through >120 hours).]

Endeavour: Average Time to Clear All Jobs

[Line chart: average time, in hours (0–84), to clear all queued jobs from November 2015 through October 2016, for ARMD, HEOMD/NESC, and SMD.]

Endeavour: Average Expansion Factor

[Line chart: average expansion factor (1.00–4.00) from November 2015 through October 2016, for ARMD, HEOMD, and SMD.]

Merope: SBUs Reported, Normalized to 30-Day Month

[Bar chart: monthly Merope usage in Standard Billing Units (0 to 700,000) by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NLCS, NAS) against the allocation to organizations.]

Merope: Monthly Utilization by Job Length

[Bar chart: October 2016 Merope usage in Standard Billing Units (0 to 500,000) binned by job run time: 0-1 through >120 hours.]

Merope: Monthly Utilization by Size and Mission

[Bar chart: October 2016 Merope usage in Standard Billing Units (0 to 400,000) binned by job size (1-32 up to 8193-16384 cores), broken down by mission directorate and organization (ARMD, HEOMD, SMD, NESC, NLCS, NAS).]

Merope: Monthly Utilization by Size and Length

[Bar chart: October 2016 Merope usage in Standard Billing Units (0 to 400,000) binned by job size (1-32 up to 8193-16384 cores), broken down by job run time (0-1 through >120 hours).]

Merope: Average Time to Clear All Jobs

[Line chart: average time, in hours (0–32), to clear all queued jobs from November 2015 through October 2016, for ARMD, HEOMD/NESC, and SMD.]

Merope: Average Expansion Factor

[Line chart: average expansion factor (1.00–7.00) from November 2015 through October 2016, for ARMD, HEOMD, and SMD.]
