
2014 R&D 100 Award Nomination

HP Apollo Liquid-Cooled Supercomputing Platform
Achieving new levels of efficiency in high-performance computing

Hewlett-Packard Company

National Renewable Energy Laboratory (NREL) Golden, Colorado

Business Sensitive: Do Not Cite, Copy, or Distribute Embargoed from publication or disclosure until June 9, 2014

NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC.

2014 R&D 100 AWARDS ENTRY FORM

1. GENERAL ENTRY INFORMATION

1A. Product brand name and name of submitting organization(s)
HP Apollo Platform for High-Performance Computing

Submitted jointly by the Hewlett-Packard Company (HP) and the National Renewable Energy Laboratory (NREL).

1B. Short description of the product
The HP Apollo Platform integrates dense high-performance computing (HPC) with warm-water liquid cooling. It packs substantial computational capability into a small space, eliminates the need for expensive chillers, and uses a dry-disconnect design, dramatically reducing the risk of using electronics in the proximity of water.

1C. Photo

Figure 1. This photo illustrates the small footprint of the first installation of the Apollo product, a system named “Peregrine,” in NREL’s Energy Systems Integration Facility (ESIF). Shown in the photo, from right to left, are Energy Secretary Ernest Moniz, NREL Director Dan Arvizu, and NREL Principal Investigator Steve Hammond, director of NREL’s Computational Science Center. Photo by Dennis Schroeder, NREL 27496

1D. Price in U.S. dollars

1E. Evidence, such as a press release or invoice, showing that the product was first available for sale or licensing between Jan. 1, 2013, and Dec. 31, 2013
Energy Secretary Moniz Dedicates Clean Energy Research Center, New Supercomputer: http://energy.gov/articles/energy-secretary-moniz-dedicates-clean-energy-research-center-new-supercomputer

http://www8.hp.com/us/en/hp-news/press-release.html?id=1530136#.Uzxn1qhdU1I

2. PRODUCT DESCRIPTION

2A. What does the product or technology do? Describe the principal applications of this product.
As high-performance computing (HPC) systems scale up by orders of magnitude, constraints on energy consumption and heat dissipation will impose limitations on HPC systems and the facilities in which they are housed.

The HP Apollo Platform addresses these constraints with warm-water liquid cooling, delivering greater performance density, cutting energy consumption in half, and creating synergies between the computing system and the building that houses it, such as reusing waste heat for heating.

2B. How will this product benefit the market that it serves?
Energy consumption is becoming a dominant constraint on high-performance computing; scaled up with conventional air-cooled technology, future systems would demand power on the scale of a dedicated power plant. Meanwhile, the HPC market is expanding rapidly. The warm water-cooled HP Apollo Platform can be employed in a wide range of HPC applications to greatly reduce energy consumption in this rapidly growing market.

3. TECHNOLOGY DESCRIPTION

3A. How does the product operate? Describe the materials, composition, construction, or mechanism of action.
A datacenter houses the computing systems an organization uses for the storage, management, and dissemination of data and information. It typically consists of racks of servers together with the power and cooling infrastructure that supports them; historically, very few datacenters employ warm water for cooling.

Figure 2. The use of water cooling declined in the 1990s as the IT industry replaced complex water-cooled systems with lower-cost and simpler air-cooled designs, but systems are now reaching power densities that prohibit the continued use of air cooling. Illustration from Hewlett-Packard Development Company, L.P.

As the demands for supercomputing continue to grow, the power consumption, space requirements, and heat dissipation of HPC facilities are pushing the industry toward ever denser systems. The rapid rise in energy consumption by the worldwide IT industry has made it one of the largest power consumers of all the countries on Earth.

Multiple approaches to liquid cooling are now entering the market, from immersion to phase-change systems. The HP Apollo approach builds on components and practices already standard in the IT industry, making it amenable to mass-manufacturing while being easy to deploy, operate, and maintain. Its design pursues three goals:

1. Enable warm-water cooling. The HP Apollo platform performs the heat exchange where the heat is being generated and keeps critical components safely within their temperature limits, even with warm-water cooling.

2. Maximize heat reuse. Because datacenters are now built to scales where they consume megawatts of power, leading industry actors are now looking at ways to reuse what would otherwise be called waste heat (a rough estimate of the available heat follows this list).

3. Achieve optimal temperatures in computing components. Liquid cooling gives the operator a far better ability to control the temperature of the computing components (processors, accelerators, and memory) and to maintain uniform cooling and associated computing performance across systems. This tight thermal control also allows the silicon chips to be maintained at the optimal temperature for their best performance.
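To give a sense of the magnitude behind the heat-reuse goal, the following minimal sketch estimates the thermal energy a warm water-cooled system of roughly Peregrine's initial scale makes available each year. The IT load and capture fraction are illustrative assumptions, not HP Apollo specifications.

```python
# Rough estimate of reusable waste heat from a warm water-cooled HPC system.
# All values are illustrative assumptions, not HP Apollo specifications.

IT_POWER_KW = 1000.0        # assumed 1 MW of IT load (roughly NREL's initial deployment scale)
CAPTURE_FRACTION = 0.90     # assumed fraction of IT heat captured directly in the water loop
HOURS_PER_YEAR = 8760

heat_to_water_kw = IT_POWER_KW * CAPTURE_FRACTION
annual_heat_kwh = heat_to_water_kw * HOURS_PER_YEAR
annual_heat_mmbtu = annual_heat_kwh * 3412 / 1e6   # 1 kWh = 3412 Btu

print(f"Heat delivered to the water loop: {heat_to_water_kw:.0f} kW")
print(f"Annual reusable heat: {annual_heat_kwh / 1e6:.1f} GWh (~{annual_heat_mmbtu:,.0f} MMBtu)")
```

Nearly all of the electricity an HPC system draws leaves it as heat, which is why capturing that heat in water at a usable temperature, rather than exhausting it as warm air, changes the economics of the datacenter.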

The innovations behind this design are protected by issued patents and at least three more patent applications.

Liquid Cooling Technology

Within each compute tray there is no flowing water that could leak or introduce contaminants; instead, heat is collected from the processors and the memory using a set of heat pipes.

Figure 3. A heat pipe, shown here, is a tube or pipe that holds a working fluid and its vapor in equilibrium, with the fluid held by a wick. Heat applied to the evaporator end causes the working fluid to vaporize, and that vapor condenses on the cool end of the pipe, transferring its heat to that end. The condensation causes a pressure differential that helps drive the flow in the heat pipe. The condensed vapor then returns to the hot end of the pipe via the wick. Illustration from Hewlett-Packard Development Company, L.P.

Because the working fluid moves heat by evaporation and condensation rather than by conduction alone, a heat pipe carries heat from the components to the edge of the tray almost instantly.

Once the compute tray is placed within the rack, the thermal block on the side of the tray is clamped against a thermal bus bar in the rack with a minimum of a thousand pounds of force, making sure the mechanical connection is as tight as it can be to guarantee good thermal transfer.

This solid-to-solid connection is referred to as a dry disconnect. The thermal interface used at the solid contact region does not transfer heat quite as efficiently as bringing liquid directly to the components, but the dry disconnect's safety and reliability far outweigh the minor drawback.
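The trade-off can be pictured with a simple series thermal-resistance model of the path from chip to water. The sketch below is illustrative only; every resistance and power value is an assumption, not a measured HP Apollo figure.

```python
# Series thermal-resistance model of the dry-disconnect cooling path:
# processor -> heat pipes -> thermal block -> dry interface -> bus bar -> water.
# All values are hypothetical, chosen to show that a modest extra resistance
# at the dry interface still leaves the chip well within its limits.

R_PATH_K_PER_W = {
    "heat pipes + spreader": 0.10,
    "thermal block":         0.05,
    "dry interface":         0.10,   # clamped solid-to-solid joint
    "bus bar + pin fins":    0.05,
}

CHIP_POWER_W = 130.0     # assumed processor power
WATER_TEMP_C = 24.0      # warm-water inlet, roughly 75 F

total_r = sum(R_PATH_K_PER_W.values())          # K/W
chip_temp = WATER_TEMP_C + CHIP_POWER_W * total_r

print(f"Total path resistance: {total_r:.2f} K/W")
print(f"Estimated chip temperature: ~{chip_temp:.0f} C with {WATER_TEMP_C:.0f} C inlet water")
```

With these assumed numbers the chip sits near 63 °C, comfortably below typical silicon limits, which is why a slightly higher interface resistance is an acceptable price for a serviceable, leak-free connection.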

Figure 4. These top and angled views of an HP Apollo Server tray (upper and lower left photos) show four rectangular computational units, each equivalent to about 20 laptop computers. The heat pipes are the copper tubes that extend from roughly the middle of each unit to the side of the tray, where they connect to two thermal blocks (shown edgewise in the top view and on the right in the lower left view) that allow for heat transfer to the water cooling system. The lower right photo shows the computational units with the covers removed, exposing the area where the heat pipes connect to the heat source. Photos from Hewlett-Packard Development Company, L.P.

On the rack side, the cooling water in each thermal bus bar is distributed in parallel to four sections, aligning with the four processors or accelerators in each tray.

Figure 5. The left illustration shows a water cooling module, or “water wall,” with 10 dark grey thermal bus bars running horizontally and separated by bright metallic strips. The thermal blocks on the compute trays clamp directly to these thermal bus bars. On the opposite side of the thermal bus bars, water flows past a pin fin array that optimizes the heat transfer. The pin fin array is shown on the upper right. The lower right illustration shows a single thermal bus bar with the thermal contact facing up and the warm-water inlet and hot-water outlets exposed. Photos from Hewlett-Packard Development Company, L.P.

The flow through each section is regulated by tiny control valves designed for maximizing the temperature difference between the inlet and the outlet water, which is essential for reusing the heat. The valves are fully passive, with an activation mechanism that is temperature dependent, making them smaller, more reliable, and much simpler than actively controlled valves.

Figure 6. The tiny flow control valves used in the HP Apollo supercomputing platform are fully passive, with an activation mechanism that is temperature dependent. The cutout rendering on the right shows the valves’ placement within the thermal bus bar. Photo from Hewlett-Packard Development Company, L.P.

Each valve opens only as far as the local temperature demands, so the water flow in every section tracks the computational workload. This ensures that the pumping energy for the cooling system is optimized for the heat actually being generated.
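The physics the valves exploit is the basic heat balance Q = ṁ·c_p·ΔT: for a fixed heat load, a larger temperature rise across the rack means less water needs to be pumped. A minimal sketch with an assumed rack heat load (not an HP Apollo specification):

```python
# Heat balance: Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
# Shows why maximizing the inlet/outlet temperature difference reduces the
# required flow, and with it the pumping energy. Values are illustrative.

CP_WATER = 4186.0        # J/(kg*K), specific heat of water
RACK_HEAT_W = 60_000.0   # assumed 60 kW of heat removed per rack

for delta_t in (5.0, 10.0, 20.0):                # K rise from inlet to outlet
    m_dot = RACK_HEAT_W / (CP_WATER * delta_t)   # kg/s
    print(f"dT = {delta_t:>4.0f} K -> flow = {m_dot:.2f} kg/s (~{m_dot * 60:.0f} L/min)")
```

Because pressure drop grows with flow, the pumping power falls faster than linearly as ΔT rises, which is why the platform works to keep the outlet water as hot as possible.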

These water streams are aggregated together with purpose-built rack piping, featuring a common supply and return for the entire rack. An air-to-water heat exchanger in the middle of the rack takes care of the residual heat not transferred directly to the water.

Figure 7. The HP Apollo servers employ an air-to-water heat exchanger to remove heat from components that aren’t directly cooled with water. As shown in this top view, the cooled air flows through the center of the rack, then returns down each side before being cooled again. Illustration from Hewlett-Packard Development Company, L.P.

Power Delivery

The HP Apollo platform brings 480 VAC directly to each rack and uses high-voltage DC power distribution inside the rack.

This eliminates several of the lossy power conversion stages typically found in a datacenter.

The design also replaces the facility-level uninterruptible power supply with a battery backup unit within the rack. This makes the system simpler and less expensive to install.
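Part of the efficiency gain is simply Ohm's law: delivering the same rack power at a higher voltage means less current and lower resistive distribution losses, on top of the conversion stages that are removed. The sketch below is a simplified illustration with assumed numbers; it is not an HP loss model.

```python
# Simplified resistive-loss comparison for two distribution voltages.
# I = P / V ; line loss = I^2 * R. Rack power and cable resistance are
# illustrative assumptions, not HP Apollo or facility specifications.

RACK_POWER_W = 60_000.0       # assumed rack power draw
CABLE_RESISTANCE_OHM = 0.02   # assumed end-to-end conductor resistance

for volts in (208.0, 480.0):
    current = RACK_POWER_W / volts
    loss_w = current ** 2 * CABLE_RESISTANCE_OHM
    print(f"{volts:.0f} VAC feed: {current:.0f} A, ~{loss_w / 1000:.2f} kW lost in the cabling")
```

The larger savings claimed in Figure 8 come from eliminating the facility UPS and intermediate AC/DC conversions, which this simple resistive model does not attempt to capture.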

Figure 8. Compared to the electrical distribution in a typical datacenter, the HP Apollo supercomputing platform operates at much higher efficiency, cutting energy losses in half by providing 480 VAC to each rack. A battery backup unit (BBU) provides high-voltage DC backup power within the rack, eliminating the need for an uninterruptible power supply (UPS). Illustration from Hewlett-Packard Development Company, L.P.

System Integration

The HP Apollo platform packs processors or accelerators per rack at an unmatched density that is about four times that of comparable air-cooled systems. Beyond cooling, the HP Apollo platform is truly integrated as a system within each rack. By integrating the high-speed interconnect switching within the rack, the platform greatly speeds up deployments. The same integration is executed on the Ethernet side, with all compute trays connected to rack-level Ethernet switches.

The entire rack is monitored and managed using a redundant management module located next to the Ethernet switches. Through a standard interface, this information can also be shared with related systems, such as the datacenter building control system.

The management system also reports energy usage, optimizing operating expenses. Smart sensors automatically track thermal conditions throughout the rack.

In addition, by coupling such supercomputer information to the HPC center scheduler, where user jobs are allocated on the computer resources, an entirely new realm of energy-aware operation opens up for both supercomputers and datacenters.

Modular Cooling System

To address any customer concerns with installing a warm-water cooling system, the HP Apollo platform includes a purpose-built coolant distribution unit (CDU) that isolates the rack-level cooling loop from the facility water.

Figure 9. The coolant distribution unit (CDU) for the HP Apollo supercomputing system. Photo from Hewlett-Packard Development Company, L.P.

Multiple pumps operate in parallel; if one fails or the load grows, the others will step up and spread the load increase.

Major cooling components can likewise be swapped quickly, without draining the system, minimizing downtime. This is fundamentally the difference between the mean time between failures (MTBF) and the mean time to repair (MTTR): while the MTBF of competing approaches and the Apollo approach may be similar, the HP Apollo approach is a clear winner when considering MTTR.
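The MTBF/MTTR distinction translates directly into availability, since availability = MTBF / (MTBF + MTTR). A minimal sketch with hypothetical hour values (not measured field data) shows how strongly repair time dominates once failure rates are comparable:

```python
# Availability = MTBF / (MTBF + MTTR).
# With the same assumed failure rate, a shorter repair time is what keeps
# annual downtime low. All hour values are hypothetical.

MTBF_HOURS = 50_000.0   # assumed mean time between failures for both designs
MINUTES_PER_YEAR = 8760 * 60

for label, mttr_hours in (("Quick tray swap (dry disconnect)", 0.5),
                          ("Drain-and-replumb repair", 24.0)):
    availability = MTBF_HOURS / (MTBF_HOURS + mttr_hours)
    downtime_min = (1.0 - availability) * MINUTES_PER_YEAR
    print(f"{label}: availability {availability:.5f}, ~{downtime_min:.0f} min downtime/year")
```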

And although the dry-disconnect technology and the rack plumbing infrastructure make the likelihood of a water leak much smaller than in competing technologies, the Apollo platform goes a step further by maintaining the secondary loop at sub-atmospheric pressure: if a breach does occur, air is drawn into the loop rather than water leaking out onto the electronics.

On the plumbing side, to build the initial system at NREL, the team went the traditional route of engineering custom piping, performing in-situ assembly by subcontracted plumbers, pressure testing, and reworking as needed. All in all, the process proved slow and labor intensive, so HP developed a modular alternative.

Figure 10. Drawing on the NREL experience, the HP Apollo supercomputing platform now offers a modular plumbing solution with flexible hoses and quick disconnects. Drawing from Hewlett-Packard Development Company, L.P.

With this modular approach, HP was able to deploy a system three times larger in about a fourth of the time. Installing the modular system is comparable to changing a showerhead, while a custom-piped installation requires professional plumbing throughout. This removes one of the practical barriers to adopting warm water-cooled technology.

Peregrine

The first installation of the HP Apollo platform is NREL's Peregrine supercomputer, housed in the ESIF. When less heat needs to be rejected, the cooling system (Figure 11) recirculates more water through the bypass loop to guarantee an optimized delta-T between supply and return. A video of the Peregrine installation is available at https://www.youtube.com/watch?v=9Ih3R84Corg

Figure 11. On the level below the Peregrine supercomputer is its liquid cooling system, which interfaces with the ESIF systems for building heating in the winter and heat rejection in the summer (through evaporative cooling towers). Photo by Dennis Schroeder, NREL 24686

The ESIF, which houses Peregrine, was named the 2014 Laboratory of the Year by R&D Magazine.

Peregrine also served as a pilot deployment, providing the opportunity for NREL and HP to work together to perfect the system. This led to a number of practical refinements that are now part of the commercial product.

3B. Describe the key innovation(s) of the product and the scientific theories that support them.

4. PRODUCT COMPARISON

4A. List your product’s competitors by manufacturer, brand name, and model number.
The principal warm water-cooled competitors are the Cray CS300-LC, the IBM Direct Water Cooled DX360, and the Bullx Direct Liquid Cooled system (detailed in Comparison Matrix B below). A further competing approach is immersion cooling, in which compute components are directly submerged in a thermally, but not electrically, conductive fluid.

4B. MATRIX

Comparison Matrix A: General comparison of warm water- and air-cooled systems.

The following matrix compares the warm-water cooling approach of the HP Apollo supercomputing platform with air cooling.

Criterion | HP Apollo warm water-cooled system | Air-cooled systems | The HP Apollo platform advantage
Computing density | High | Medium | Allows for a higher computing density than air-cooled systems
Small footprint | Yes | No | The HP Apollo platform offers the smallest footprint
Direct, efficient cooling of compute units | Yes | No | More efficient cooling of compute units than air cooling
Allows for reuse of waste heat | Yes | No | Unlike air-cooled systems, the HP Apollo system provides hot water for reuse
Power utilization effectiveness | 1.06 or better | Typically 1.8 or more | Many datacenters consume twice as much power as their IT components; the HP system directs almost all its energy into IT components
Scalability | Highly scalable | Scalable, but limited by large footprint and power demand | The HP Apollo system is easier to scale to large sizes than air-cooled technologies
Requires expensive chillers | No | Yes | The HP Apollo system avoids the need for expensive chillers
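The power utilization effectiveness (PUE) row above can be made concrete with a quick calculation: PUE is total facility power divided by IT power, so the overhead is (PUE − 1) times the IT load. The sketch below assumes a 1 MW IT load for illustration; the PUE values come from the matrix.

```python
# PUE = total facility power / IT equipment power, so
# overhead (cooling, power conversion, lighting) = (PUE - 1) * IT load.
# The 1 MW IT load is an assumed example, not a product specification.

IT_LOAD_KW = 1000.0
HOURS_PER_YEAR = 8760

for label, pue in (("HP Apollo warm water-cooled", 1.06),
                   ("Typical air-cooled datacenter", 1.80)):
    overhead_kw = (pue - 1.0) * IT_LOAD_KW
    overhead_mwh_yr = overhead_kw * HOURS_PER_YEAR / 1000.0
    print(f"{label}: {overhead_kw:.0f} kW of overhead (~{overhead_mwh_yr:,.0f} MWh per year)")
```

At an electricity price on the order of $0.10 per kWh, the roughly 6,500 MWh-per-year difference is consistent in magnitude with the cooling-related savings NREL reports for its one-megawatt deployment in the appended case study.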

Comparison Matrix B: Comparison of the HP Apollo supercomputing platform with other warm-water cooled supercomputers

While competing technologies employ warm-water cooling and reuse the warm water for energy needs, the HP Apollo supercomputing platform offers a number of advantages over these competitors due to its innovative design. These advantages will help the HP Apollo technology to “trickle down” to other IT platforms. The following matrix compares the warm-water cooling approach of the HP Apollo supercomputing platform with its warm-water cooled competitors.

Criterion | HP Apollo (NREL Peregrine) | Cray CS300-LC (MSU) | IBM Direct Water Cooled DX360 (SuperMUC) | Bullx Direct Liquid Cooled (CEA)
100% liquid cooled? | YES | NO | NO | NO
Cooling level matched to computational load? | YES | NO | NO | NO
Employs dry compute trays? | YES | NO | NO | NO
Power distribution system | 380-480 VAC supplied directly to racks achieves high efficiency and simplifies datacenter power distribution | 100-250 VAC | 100-250 VAC | 100-277 VAC
Integrated, quickly deployable system | YES | NO | NO | NO
Server density | 195 servers/m2 | 72 servers/m2 | 125 servers/m2 | 83 servers/m2
Reuse of waste heat | Full energy reused to heat buildings | Reuse implemented at small scale | No reuse implemented | No reuse implemented
Power utilization effectiveness | 1.06 or better | 1.2 | 1.25 | 1.1
Sources | HP fact sheet; HP video | Cray webpage; Cray blog | IBM video; IBM presentation | Bull press release

4C. Describe how your product improves upon competitive products or technologies.

Superior to air cooling technologies

Air-cooled systems suffer from several limitations that the HP Apollo platform overcomes:

• Cannot maintain precise thermal control of the computer chips

• Does not allow for reuse of waste heat

• Power consumption threatens to limit the size of future supercomputers

• Low computational density results in a large footprint that could also limit the size of future supercomputers

Superior to other warm water-cooled systems

• 100% water cooled: Competing systems still require chillers to cool some system or rack components. The HP Apollo platform not only cools the processors with warm water but purposefully increases the water temperature for energy reuse, eliminating the need for both supplemental air cooling and water chillers.

• Server density: At 195 servers per square meter, the HP Apollo platform requires a fraction of the floor space needed for comparable IBM and Cray systems.

• Flow control: Passive valves adjust the amount of water running through the system based on use—and therefore cooling needs—to minimize the energy spent on pumping.

• Dry disconnect: Compute trays exchange heat with the cooling water only through the thermal bus-bar system, enabling datacenter staff to easily take out a compute tray and replace it, or replace components, without worrying about water leaks.

• Power distribution system: 480 VAC is delivered directly to the rack, and a simple DC step-down module within each tray replaces most of the conventional power conversion equipment, eliminating most of the labor to deploy it.

• Integrated computational system: The HP Apollo platform ships as a fully-integrated, pre-tested system that has the cooling infrastructure optimized for the compute hardware, reducing deployment time and cost.

• Modular, integrated cooling system: The HP Apollo platform employs modular coolant distribution units with built-in redundancy and scaling capabilities. Together with the factory-assembled, modular plumbing system, this makes the cooling infrastructure fast to install and easy to expand.

4D. Describe limitations of your product. What criticisms would your competitors offer?
The HP dry-disconnect technology has a slightly larger thermal resistance than competing approaches that bring liquid directly to the components, and competitors would point to that added resistance. HP accepted the thermal resistance penalty in order to create a product cooled with warm water that could be mass marketed and supported (with maintenance and repair) throughout the world.

IBM, by contrast, brings water to the top of the processors through micro-channels that are so small that they do not tolerate any impurities in the water. In fact, their technology calls for deionized water, making the system more difficult and expensive to manufacture, deploy, and maintain.

At the opposite end of the spectrum, another vendor is currently trying to bring its workstation liquid-cooling technology into the datacenter.

Compared with technologies like IBM's and Cray's, HP has made a conscious decision to trade a small amount of thermal performance for a design that is simple to manufacture and service, rather than repeating the industry's years of failed attempts at more exotic approaches.

5. SUMMARY
The HP Apollo supercomputing platform approaches HPC from an entirely new perspective. By cooling with warm water, the HP Apollo platform makes the best use of the heat generated, while also operating as one of the most energy-efficient supercomputing platforms available. At NREL, the captured heat warms the ESIF building, and future expansion of that system may heat other buildings on the NREL campus. As this technology is adopted throughout the IT industry, it will offer new ways to save energy and reuse heat—warming neighboring buildings, for instance.

6. CONTACT INFORMATION

6A. Principal investigator(s) from each of the submitting organizations
Nicolas Dube, Distinguished Technologist, Hewlett-Packard Company
[email protected]

Steve Hammond, Director, Computational Science Center, NREL
[email protected]

6B. Media and public relations person who will interact with R&D’s editors regarding entry material
Heather Lammers, Media Relations Manager
National Renewable Energy Laboratory (NREL)
[email protected]

[email protected]

6C. Person who will handle Banquet arrangements for winners
National Renewable Energy Laboratory (NREL)
[email protected]

APPENDIX

Letter of Support:

Case study
National Renewable Energy Lab slashes data center power costs with HP servers
Sustainability-focused organization sets a new milestone in warm-water liquid cooling

Industry

Objective
Build a highly energy-efficient HPC data center

Approach Use innovative warm-water cooling developed by HP.

IT matters • NREL expects to save $800,000 in server cooling costs and $200,000 in building heating costs.

Business matters • The project demonstrates the viability of a new approach to cooling that could lead to power savings across a broad spectrum of industries.

“We are looking at saving $1 million per year in operations costs for a data center that cost less to build than a typical data center.”

– Steve Hammond, Director of Computational Sciences, NREL

The National Renewable Energy Laboratory (NREL) focuses on creative answers to today’s energy challenges. From fundamental science and energy analysis to validating new products for the commercial market, NREL researchers are dedicated to transforming the way the world uses energy.

That’s the case with the organization’s new high-performance computing system. While providing an astounding amount of compute power to drive renewable energy research, the NREL system pairs that computing with a warm-water liquid cooling system developed by HP, slashing the energy needed to run it.


On a mission of sustainability

The National Renewable Energy Laboratory (NREL) is the U.S. Department of Energy’s primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated by the Alliance for Sustainable Energy, LLC. The lab focuses on a future built around clean energy. NREL develops energy efficiency and renewable energy technologies and practices, delivers advances in science and engineering, and transfers knowledge and innovations to address the nation’s energy and environmental goals.

In the course of its work, the lab makes heavy use of high-performance computing (HPC) systems that enable research that wouldn’t be possible with direct experimentation alone. Research conducted on the lab’s HPC systems helps drive down costs for important technologies, including solar photovoltaics, wind energy, energy storage, electric vehicles, and the large-scale integration of renewables with the Smart Grid.

In the course of its work, NREL strives to lead by example. The lab’s 327-acre campus in Golden, Colorado, is home to four LEED-rated buildings. And in its day-to-day operations, the lab tries to continually raise the bar for the efficient use of energy. That was the idea behind the lab’s new HPC data center, which was designed to be one of the world’s most energy-efficient data centers.

Efficiency across the data center

The new HPC data center, based in NREL’s Energy Systems Integration Facility (ESIF), delivers an annualized average power usage effectiveness (PUE) rating of 1.06 or better. By comparison, the average data center operates at a PUE of 1.8 or more, according to the Environmental Protection Agency’s Energy Star Program. This groundbreaking PUE rating is driven by innovations that span the data center and work together in an integrated manner:

• At an architectural level, NREL’s data center features a chiller-less cooling system and its compact design results in short runs for both electrical and plumbing components, saving energy and expense.

• At a computing systems level, the data center’s new supercomputer, named Peregrine, uses warm water to cool its computing systems in a highly efficient manner.

• At a facilities level, the data center is designed to capture the “waste heat” from computing systems so it can be used to heat the building and potentially adjacent facilities on the NREL campus.

A world-class supercomputer

At the heart of NREL’s new HPC data center is the Peregrine supercomputer, the result of a collaboration between the lab, HP, and Intel, who were chosen in a competitive procurement to supply the system’s servers. These include scalable HP ProLiant™ SL230s and SL250s Generation 8 (Gen8) servers based on eight-core Intel® Xeon® E5 processors, as well as servers based on next-generation Intel Xeon processors and Intel Many Integrated Core architecture-based Intel® Xeon Phi™ coprocessors. Four of the ten racks in the system contain next-generation servers with Intel Xeon processors and Intel coprocessors. Using only a fraction of the racks required by the average data center, the Peregrine system increased NREL’s modeling and simulation capabilities over six-fold.

In HPC terminology, Peregrine is a petascale system. In layman’s terms, that means the system can perform one million billion calculations per second. That unfathomable level of performance makes Peregrine the world’s largest supercomputer exclusively dedicated to advancing renewable energy research, according to NREL.

Raw performance, however, is only one side of the Peregrine story. The other side is the data center’s efficiency, driven by the HP-engineered warm-water liquid cooling system.

World-class cooling

In conventional data centers, a mechanical chiller delivers cold water into the data center and its air-conditioning units, which then blow cold air to keep computer components from overheating. “From a data center perspective, that’s not very smart,” says Steve Hammond, NREL’s director of computational sciences. “It’s like putting your beverage on your kitchen table and then going to turn up the air conditioner to get your drink cold.”


Hammond and his colleagues in NREL’s Energy Systems Integration Facility (ESIF) recognized a better way to cool a data center. Instead of starting with compressor-based chillers that refrigerate water, start with warm water and use evaporative cooling towers. This is more like using an evaporative cooler (swamp cooler) rather than an air conditioner to cool a home. Water is simply a better medium for thermodynamics, or heat exchange.

“For us, warm-water cooling was the key to making it all work,” Hammond says. “As a cooling medium, water beats air. A juice glass full of water has the cooling capacity of a room full of air. And the pump energy needed to move that juice glass of water, to eject the heat from the system, is less than the fan energy needed to move that room full of air—much less.”

In the Peregrine system, the water that cools the servers goes into the system at a relatively warm 75 degrees Fahrenheit. “That temperate water is enough to cool the servers without compromising the IT equipment, without any chillers,” Hammond says.

The benefits of warm-water cooling don’t stop at avoiding the need for chillers. They continue with the output of the liquid cooling system. The water is returned substantially hotter—hot enough that it creates a ready-made source of heat. “By tying that water into the building’s heat system, we can heat the building on the coldest day of the winter here in Colorado using just the waste heat captured from the HPC system,” Hammond says. “It’s a holistic view that integrates the data center into the broader building.”

The chiller-less design and the ability to reuse heat work together to help NREL greatly reduce its overall energy costs—an expected $1 million in annual energy savings and cost avoidance. “Compared to a typical data center, we expect to save $800,000 of operating expenses per year with our initial one-megawatt deployment,” Hammond says. “By capturing and using waste heat, we estimate we will save another $200,000 that would otherwise be used to heat the building. So, we are looking at saving $1 million per year in operations costs for a data center that cost less to build than a typical data center.”

At the same time, warm-water liquid cooling enabled NREL to build an extremely energy-dense data center. The new data center is capable of supporting computing equipment that draws enough power to meet the needs of more than 7,500 houses. “We will have a megawatt of power in this data center to start. That’s a lot of power. The heat dissipated from that is very substantial. That density is enabled with liquid cooling.” The racks sit close together, with only narrow walkways around them.
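The “juice glass” comparison above rests on the volumetric heat capacity of water versus air. The short sketch below works out the ratio using standard textbook property values; the temperature rise is an assumed example, not an NREL operating figure.

```python
# Why pumping water beats blowing air: heat carried per unit volume of coolant,
# Q = rho * c_p * V * dT. Property values are standard; dT is an assumed example.

RHO_WATER, CP_WATER = 1000.0, 4186.0   # kg/m^3, J/(kg*K)
RHO_AIR,   CP_AIR   = 1.2,    1005.0   # kg/m^3, J/(kg*K)

delta_t = 10.0   # K, same temperature rise assumed for both coolant streams

q_per_litre_water = RHO_WATER * CP_WATER * 1e-3 * delta_t   # J per litre
q_per_litre_air   = RHO_AIR   * CP_AIR   * 1e-3 * delta_t   # J per litre

ratio = q_per_litre_water / q_per_litre_air
print(f"Water: {q_per_litre_water:.0f} J per litre of coolant")
print(f"Air:   {q_per_litre_air:.1f} J per litre of coolant")
print(f"A litre of water carries ~{ratio:.0f}x the heat of a litre of air")
```

Moving a few thousand times less coolant volume for the same heat load is what makes the pump energy so much smaller than the equivalent fan energy.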

A selection based on ‘best value’

So how was it that NREL decided to work with HP and Intel to build the Peregrine system? That was the outcome of an open process that evaluated the proposals and capabilities of multiple vendors against the lab’s performance requirements, its compute capability requirements, and its aggressive energy-efficiency requirements.

“We had multiple companies vying for our business,” Hammond says. “Our qualitative criteria included the computational capability of the proposed system. We selected, under best value procurement, the system that best matched our requirements. And the system from HP best matched those requirements.”

HP won the contract for the Peregrine system not for a few reasons but many reasons, according to Hammond. “It was the way the system was engineered, the compute capability, the way it was packaged, and the partnership. There were a lot of things that were very attractive. It was clearly the best solution.”

In creating the Peregrine system, engineers at NREL and HP worked in a close, collaborative partnership, working toward the same goals.

“The collaboration with HP has been wonderful,” Hammond says. “We learned from HP and HP learned from us. We’re all trying to advance our capabilities. And any part we played in helping make a better product for the rest of the world is a win for us as a national laboratory.

“One of the roles of a national lab is to help the industry pioneer, to take a chance, to take some shared risks so that we all get a better product at the end of the day.”

Pushing the industry forward

Looking ahead, Hammond is excited about the prospects for the technologies used in the ESIF data center and the Peregrine supercomputer.

“We’re pushing the industry,” he says. “It’s where the industry is headed. We’re just out in front on cooling. Other systems use liquid cooling, but they’ve got compressors. We’re pushing the envelope. We’re putting our money where our values are.”

Hammond hopes the ESIF data center project will inspire data center operators across the country and beyond to see ways to use energy more wisely.

“The project is successful in a number of ways. We’re providing an environment where industry can try some new things and learn from our experiences. It’s a higher calling than just bringing in a system and putting it into production. We’re trying to catalyze the industry to do things they might not do on their own, and do it in an environment where they are safe to take a chance. We all win, and to me it’s a best use of taxpayer dollars.”

Ultimately, NREL’s new HPC data center and Peregrine supercomputer could herald a revolutionary shift in the way systems are cooled. It certainly demonstrates the potential of warm-water liquid cooling to drive down power consumption and operating costs. In summing up the data center project, Hammond recalls some words attributed to the inventor Thomas Edison, for whom genius was one percent inspiration and ninety-nine percent perspiration:

“It’s not rocket science; it’s solid engineering. You don’t need a miracle to invent something that doesn’t exist. You just work at it and you’ll get there.”
– Steve Hammond, Director of Computational Sciences, NREL

Customer at a glance:

Application

Hardware
• Scalable HP ProLiant SL230s and SL250s Generation 8 (Gen8) servers based on eight-core Intel® Xeon® processors
• Next-generation HP servers that use 22nm Intel Xeon processors and Intel Many Integrated Core architecture-based Intel® Xeon Phi™ coprocessors

HP services
• Custom onsite services

Learn more at nrel.gov/esi/esif.html



7. AFFIRMATION
I affirm that all information submitted as a part of, or supplemental to, this entry is a fair and accurate representation of this product.

[email protected]
