2014 R&D 100 Award Nomination

HP Apollo Liquid-Cooled Supercomputing Platform
Achieving new levels of efficiency in high-performance computing

Hewlett-Packard Company
National Renewable Energy Laboratory (NREL), Golden, Colorado

Business Sensitive: Do Not Cite, Copy, or Distribute. Embargoed from publication or disclosure until June 9, 2014.

NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC.

Front page background photo from iStock 17325024; inset photo by Hewlett-Packard Development Company, L.P.

2014 R&D 100 AWARDS ENTRY FORM

1. GENERAL ENTRY INFORMATION

1A. Product brand name and name of submitting organization(s)

HP Apollo Platform for High-Performance Computing. Submitted jointly by the Hewlett-Packard Company (HP) and the National Renewable Energy Laboratory (NREL).

1B. Short description of the product

The HP Apollo Platform integrates a dense high-performance computing (HPC) architecture with warm-water cooling. It packs substantial computational capability into a small space while dramatically reducing the risk of using electronics in the proximity of water.

1C. Photo

Figure 1. This photo illustrates the small footprint of the first installation of the Apollo product, a system named "Peregrine," in NREL's Energy Systems Integration Facility (ESIF). Shown in the photo, from right to left, are Energy Secretary Ernest Moniz, NREL Director Dan Arvizu, and NREL Principal Investigator Steve Hammond, director of NREL's Computational Science Center. Photo by Dennis Schroeder, NREL 27496

1D. Price in U.S. dollars

1E. Evidence, such as a press release or invoice, showing that the product was first available for sale or licensing between Jan. 1, 2013, and Dec. 31, 2013

Energy Secretary Moniz Dedicates Clean Energy Research Center, New Supercomputer: http://energy.gov/articles/energy-secretary-moniz-dedicates-clean-energy-research-center-new-supercomputer

HP press release: http://www8.hp.com/us/en/hp-news/press-release.html?id=1530136#.Uzxn1qhdU1I

2. PRODUCT DESCRIPTION

2A. What does the product or technology do? Describe the principal applications of this product.

As HPC systems scale up by orders of magnitude, constraints on energy consumption and heat dissipation will impose limitations on HPC systems and the facilities in which they are housed. The HP Apollo Platform addresses these constraints by achieving greater performance density, cutting energy consumption in half, and creating synergies between the computing system and its host facility, such as the reuse of waste heat.

2B. How will this product benefit the market that it serves?

The HPC market is expanding rapidly. The warm-water-cooled HP Apollo Platform can be employed in a wide range of HPC applications to greatly reduce energy consumption in this growing market.

3. TECHNOLOGY DESCRIPTION

3A. How does the product operate? Describe the materials, composition, construction, or mechanism of action.

A datacenter is a facility that houses computer systems for the storage, management, and dissemination of data and information. It typically consists of servers and storage along with the electrical and cooling infrastructure that supports them. Unlike conventional air-cooled designs, the HP Apollo Platform and the facilities built around it employ warm water for cooling.

Figure 2. The use of water cooling declined in the 1990s as the IT industry replaced complex water-cooled systems with lower-cost and simpler air-cooled supercomputers, but systems are now reaching power densities that prohibit the continued use of air cooling. Illustration from Hewlett-Packard Development Company, L.P.
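The claim of cutting energy consumption in half (Section 2A) can be made concrete with a power usage effectiveness (PUE) comparison, where PUE is total facility energy divided by IT energy. The sketch below is illustrative only: the PUE values and IT load are assumptions chosen for this example, not figures from the nomination.

```python
# Illustrative datacenter energy overhead via PUE (power usage
# effectiveness = total facility energy / IT energy). The PUE values
# and IT load below are assumptions for illustration only.

IT_LOAD_KW = 1000.0  # assumed 1 MW of IT equipment

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Cooling and power-delivery overhead implied by a given PUE."""
    return it_load_kw * (pue - 1.0)

air_cooled = overhead_kw(IT_LOAD_KW, pue=1.8)  # assumed chiller-based datacenter
warm_water = overhead_kw(IT_LOAD_KW, pue=1.1)  # assumed warm-water cooling, no chillers

print(f"overhead: {air_cooled:.0f} kW vs {warm_water:.0f} kW")  # 800 kW vs 100 kW
```

Under these assumed values, most of the non-IT overhead disappears, which is the mechanism behind the efficiency gains described in Section 3.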
As the demands for supercomputing continue to grow, so do the power consumption, space requirements, and heat dissipation of these increasingly dense systems. The rapid rise in energy consumption by the worldwide IT industry has made it, taken as a whole, one of the largest power consumers of all the countries on Earth.

Multiple liquid-cooling approaches are now entering the market, from immersion cooling to phase-change cooling. The HP Apollo Platform instead builds on materials and processes already standard in the IT industry, making it amenable to mass manufacturing while being easy to deploy, operate, and service. Its design achieves three key objectives:

1. Enable warm-water cooling. The design places the heat exchange where the heat is being generated and keeps critical components safely within their operating temperatures without resorting to chilled-water cooling.

2. Maximize heat reuse. Because datacenters are now built to scales where they consume megawatts of power, leading industry actors are now looking at ways to reuse what would otherwise be discarded as waste heat.

3. Achieve optimal temperatures in computing components. Liquid cooling provides a far better ability to control the temperature of the computing components (processors, memory, and accelerators) than air cooling does, allowing the computer operator to maintain uniform cooling and the associated computing performance across large systems. This tight thermal control also allows the silicon chips to be maintained at the optimal temperature for their best performance.

The innovations described here are covered by patents and at least three more patent applications.

Liquid Cooling Technology

Rather than routing facility water directly to each server through wet quick-connect fittings, which can leak or introduce contaminants, the Apollo design keeps water out of the compute trays entirely. Within each tray, heat is collected from the processors and the memory using a set of heat pipes. A heat pipe is a tube or pipe that holds a working fluid and its vapor in equilibrium (see Figure 3).

Figure 3. A heat pipe, shown here, is a tube or pipe that holds a working fluid and its vapor in equilibrium, with the fluid held by a wick. Heat applied to the evaporator end causes the working fluid to vaporize, and that vapor condenses on the cool end of the pipe, transferring its heat to that end. The condensation causes a pressure differential that helps drive the flow in the heat pipe. The condensed vapor then returns to the hot end of the pipe via the wick. Illustration from Hewlett-Packard Development Company, L.P.

Heat pipes have no moving parts and transfer heat almost instantly. Once the compute tray is placed within the rack, the thermal block on the side of the tray is clamped to the rack's water-cooled thermal bus bar with a minimum of a thousand pounds of force, making sure the mechanical connection is as tight as it can be to guarantee a good thermal transfer.

Because no liquid is exchanged when a tray is inserted or removed, this connection is referred to as a dry disconnect. The thermal interface used at the solid contact region does not conduct heat quite as well as a direct liquid connection would, but the safety and reliability of the dry disconnect far outweigh the minor drawback.

Figure 4. These top and angled views of an HP Apollo Server tray (upper and lower left photos) show four rectangular computational units, each equivalent to about 20 laptop computers. The heat pipes are the copper tubes that extend from roughly the middle of each unit to the side of the tray, where they connect to two thermal blocks (shown edgewise in the top view and on the right in the lower left view) that allow for heat transfer to the water cooling system. The lower right photo shows the computational units with the covers removed, exposing the area where the heat pipes connect to the heat source. Photos from Hewlett-Packard Development Company, L.P.

Within each thermal bus bar, the cooling water is distributed in parallel to four sections, aligning with the four processors or accelerators in each tray (see Figure 5).
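As a rough illustration of the heat path just described, the sketch below models it as thermal resistances in series, from processor through heat pipes, thermal block, dry-disconnect contact, and thermal bus bar into the water. Every resistance value, the processor power, and the water temperature are assumptions chosen for illustration; they are not HP specifications.

```python
# The tray-to-water heat path modeled as series thermal resistances.
# All values below are illustrative assumptions, not HP specifications.

R_HEAT_PIPE = 0.10  # K/W, processor lid through heat pipes (assumed)
R_BLOCK     = 0.04  # K/W, conduction through the thermal block (assumed)
R_CONTACT   = 0.06  # K/W, clamped dry-disconnect interface (assumed)
R_BUS_BAR   = 0.05  # K/W, bus bar wall and pin-fin array to water (assumed)

def processor_temp_c(power_w: float, water_temp_c: float) -> float:
    """Steady-state processor temperature for a given heat load."""
    r_total = R_HEAT_PIPE + R_BLOCK + R_CONTACT + R_BUS_BAR  # 0.25 K/W
    return water_temp_c + power_w * r_total

# Even with warm 24 C inlet water, an assumed 120 W processor settles at
# 24 + 120 * 0.25 = 54 C, comfortably inside its operating range:
print(f"{processor_temp_c(power_w=120, water_temp_c=24):.0f} C")
```

The penalty of the dry disconnect shows up here as R_CONTACT: under these assumptions it raises the chip temperature by about 7 degrees relative to a direct liquid connection, the "minor drawback" the text describes.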
Figure 5. The left illustration shows a water cooling module, or "water wall," with 10 dark grey thermal bus bars running horizontally and separated by bright metallic strips. The thermal blocks on the compute trays clamp directly to these thermal bus bars. On the opposite side of the thermal bus bars, water flows past a pin fin array that optimizes the heat transfer. The pin fin array is shown on the upper right. The lower right illustration shows a single thermal bus bar with the thermal contact facing up and the warm-water inlet and hot-water outlets exposed. Photos from Hewlett-Packard Development Company, L.P.

The water flow through each thermal bus bar is regulated by tiny flow control valves, maximizing the temperature difference between the inlet and the outlet water, which is essential for efficient heat reuse. The valves are fully passive, with an activation mechanism that is temperature dependent, making them smaller, more reliable, and much simpler than actively controlled valves.

Figure 6. The tiny flow control valves used in the HP Apollo supercomputing platform are fully passive, with an activation mechanism that is temperature dependent. The cutout rendering on the right shows the valves' placement within the thermal bus bar. Photo from Hewlett-Packard Development Company, L.P.

Because the valves respond to temperature, the water flow automatically tracks the computing workload. This ensures that the pumping energy for the cooling system is optimized for the actual heat load at any given moment.

These water streams are aggregated with purpose-built rack piping, featuring an air-to-water heat exchanger in the middle of the rack to take care of the residual heat not transferred directly to the water (see Figure 7).

Figure 7. The HP Apollo servers employ an air-to-water heat exchanger to remove heat from components that aren't directly cooled with water. As shown in this top view, the cooled air flows through the center of the rack, then returns down each side before being cooled again. Illustration from Hewlett-Packard Development Company, L.P.

Power Delivery

The platform delivers 480 VAC directly to each rack and converts it once to high-voltage DC power distribution inside the rack, eliminating several of the conversion steps found in a typical datacenter. The design also replaces the uninterruptible power supply with a battery backup unit within the rack, reducing the electrical infrastructure the customer must install.

Figure 8. Compared to the electrical distribution in a typical datacenter, the HP Apollo supercomputing platform operates at much higher efficiency, cutting energy losses in half by providing 480 VAC to each rack. A battery backup unit (BBU) provides high-voltage DC backup power within the rack, eliminating the need for an uninterruptible power supply (UPS). Illustration from Hewlett-Packard Development Company, L.P.
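The sketch below illustrates why collapsing the conversion chain reduces losses: the efficiencies of conversion stages in series multiply. The per-stage efficiencies are illustrative assumptions, not measured HP or NREL figures.

```python
# Cascaded power-conversion efficiency: stages in series multiply.
# All stage efficiencies below are illustrative assumptions.

def chain_efficiency(*stages: float) -> float:
    """Overall efficiency of power-conversion stages in series."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Conventional path: double-conversion UPS -> PDU transformer to
# 208 VAC -> server power supply (assumed efficiencies).
conventional = chain_efficiency(0.94, 0.98, 0.95)  # ~0.875

# Apollo-style path: 480 VAC to the rack -> one rectification to
# high-voltage DC, with the BBU riding the DC bus (assumed efficiencies).
rack_level = chain_efficiency(0.96, 0.97)          # ~0.931

print(f"losses: {100 * (1 - conventional):.1f}% vs "
      f"{100 * (1 - rack_level):.1f}%")  # ~12.5% vs ~6.9%
```

With these assumed stage efficiencies, distribution losses fall by roughly half, consistent with the claim in Figure 8.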