University of California Irvine Gets 45% More Research Done with Dell and Intel HPC Cluster

"The Intel Xeon 5600 series processors have enabled us to work with a 33 percent increase in speed and a 45 percent increase in the amount of data that we can acquire and maintain. The partnership between Dell and Intel has allowed us to speed up our processes in the way that we deploy our software codes."
Ronald D. Hubbard, Executive Director, GreenPlanet HPCC, School of Physical Sciences, University of California Irvine

Customer Profile
Company: University of California Irvine
Industry: Education
Country: United States
Students: 34,000
Faculty: 2,685
Web: www.UCIrvine.edu

Business Need
The University of California Irvine (UC Irvine) School of Physical Sciences needed its own high-performance computing (HPC) cluster to give its 27 research professors and their teams control over their own destiny. Previously, researchers handed their data out to external HPC centers for analysis and did not have 24x7 access to it.

Solution
The university worked with Dell and Intel to help design and build an in-house HPC solution using Dell™ PowerEdge™ servers with Intel® Xeon® processors, Dell PowerVault™ MD1000 direct attached storage arrays and QLogic TrueScale InfiniBand interconnect.

Benefits
• 45% more research done than with previous solutions
• 33% increase in performance compared with previous solutions
• 24x7 access to data for researchers
• 10 federal grants won out of 11 grant applications
• 28% energy savings
• 23% space savings

When former Vice President Al Gore won the Nobel Peace Prize in 2007 for his work on climate change, he shared the prize with 34 researchers on the Intergovernmental Panel on Climate Change, 11 of whom do their research at the UC Irvine School of Physical Sciences. Three other researchers have received the Nobel Prize during their tenure at UC Irvine: Frank Sherwood Rowland (Chemistry, 1995), Frederick Reines (Physics, 1995) and Irwin Rose (Chemistry, 2004). One of the 10 universities in the University of California system, UC Irvine prides itself on its distinguished history of scientific research.

Yet until a few years ago, UC Irvine did not have sufficient homegrown compute power to support its award-winning Nobel groups or its young and innovative researchers in such areas as atmospheric chemistry, bioinformatics, biopharmaceutical engineering and other disciplines that focus on chemistry and synthetic chemistry. Its researchers had to outsource their data to high-performance computing (HPC) labs such as Fermilab near Chicago; the National Center for Atmospheric Research in Boulder, Colorado; the Stanford Linear Accelerator Center in Menlo Park, California; and the San Diego Supercomputer Center.

"Our researchers were not in charge of their own destiny. They needed to be able to access their own data on an hourly basis, and they needed compute power that would speed up the pace of research," says Ronald D. Hubbard, executive director, GreenPlanet HPCC, School of Physical Sciences, UC Irvine. Hubbard was also recently named a 2010 laureate by the ComputerWorld Honors program for his work on a case study outlining collaboration between UC Irvine's dark particle physicists and Intel Corp.

Seeking partners in progress

Each of the large research groups at UC Irvine got together with Hubbard, and they decided that UC Irvine needed an HPC solution. "We put together a plan and we decided that the only way we could make it work was by having partners in progress, not just vendors," says Hubbard. "Dell and Intel rose to the challenge through their CEOs, Michael Dell and Craig Barrett, who committed themselves to high-performance computing at UC Irvine. They put us in a position where we could get their advanced tools and their advanced products, not necessarily off-the-shelf products, by opening their engineering and design-build processes to us and ultimately allowing us to do work that we couldn't have done without their help."

UC Irvine's HPC solution, GreenPlanet, started with 88 Dell PowerEdge 1950 servers and 14 PowerEdge 2950 servers with Intel Xeon processors. QLogic TrueScale InfiniBand interconnect, sourced through Dell, provided high-throughput, low-latency connectivity among the nodes. Dell PowerVault MD1000 direct attached storage arrays delivered rack-dense, scalable storage.

"When I started my research, I was very happy to be doing cutting-edge research by simulating approximately 800 atoms. And now with GreenPlanet we're routinely doing a quarter of a million atoms, and for two or three orders of magnitude longer in time."
Douglas Tobias, Professor of Chemistry, School of Physical Sciences, University of California Irvine

Technology at Work

Services
Dell™ Hardware Customization Services
Intel® Cluster Ready Program

Hardware
Dell PowerEdge™ C6100 servers with Intel® Xeon® 5600 series processors
Dell PowerEdge C1100 servers with Intel Xeon 5500 and 5600 series processors
Dell PowerEdge R710, R610 and R410 servers with Intel Xeon 5500 and 5600 series processors
Dell PowerEdge 2950 and 1950 servers with Intel Xeon processors
Dell PowerVault™ MD1000 direct attached storage arrays
QLogic TrueScale InfiniBand interconnect

Tapping federal grants

With the infrastructure in place, federal money followed.

"Due to our activity in the science community with the National Science Foundation, the National Institutes of Health, the Department of Energy, the Department of Defense and NASA, we've applied for and received grants to expand GreenPlanet over these past 17 months from 102 nodes to 327 servers," says Hubbard. "We applied for 11 grants and got 10."

UC Irvine's federal grants span a wide spectrum of scientific activities. A $20 million grant for UC Irvine's Chemistry at the Space Time Limit (CaSTL) project will enable researchers to look at photographs of chemical reactions within cells for the very first time using high-resolution video images. Other projects on the macro end of the spectrum will gather data about the deepest novas in the universe.

"In between, we have really strong staples of atmospheric chemistry, coupled with synthetic organic chemistry, as well as pharmaceutical chemistry and a great deal of climate and weather modeling that takes place from the 30,000-mile level via satellite right down to the ground level on a minute-by-minute basis," says Hubbard.

33% increase in speed

The 327 servers in the expanded cluster include Dell PowerEdge R710, R610 and R410 servers with Intel Xeon 5500 and 5600 series processors. "Currently, we are procuring Dell PowerEdge C6100 and C1100 servers purpose-made by Dell with Intel Xeon 5600 series processors, Intel's newest and most advanced processors," says Hubbard. With four two-socket server nodes in a 2U rack chassis, the PowerEdge C6100 is a hyper-scale-inspired building block for HPC environments and cloud builders.

"Through the efforts of Dell and Intel, we've been able to accelerate our activities, plus we're seeing great energy savings and space savings in being able to do a greater amount of work in the same footprint," says Hubbard. "In fact, we're able to do a lot more work than we ever thought we could because of the advancements that Dell has made in their Cloud 6100 series servers. The Intel Xeon 5600 series processors have enabled us to work with a 33 percent increase in speed and a 45 percent increase in the amount of data that we can acquire and maintain. The partnership between Dell and Intel has definitely allowed us the opportunity to speed up our processes in the way that we deploy our software codes."

The Dell and Intel solution also saves 28 percent in energy consumption and 23 percent in space.

"Anytime a university like ours is looking for new partnerships, we're looking for a lot of ingredients. And what Dell has been able to bring through their partners are the roadmaps and the highways to do more dense and scalable research computing. Without that, GreenPlanet would not be a reality, and without that, GreenPlanet wouldn't be productive. And it's through the Dell process and the process of their partners that we've been able to be so successful in a very short period of time. GreenPlanet was designed to be an enabler of new discoveries. In order to make that design work, it takes a lot of people, a lot of process and a lot of production. Dell and Intel, more specifically, have been partners that have allowed all that to happen."

"We study systems with hundreds of thousands of atoms and simulate their behavior. So, it's very important for us to have access to a large number of processors that have very fast interconnect. And the great thing about GreenPlanet is it satisfies those constraints for us."
Rommie Amaro, Assistant Professor of Pharmaceutical Science and Computer Science, School of Physical Sciences, University of California Irvine

Simplifying acquisition and deployment

In addition to opening up its design and build processes to enable UC Irvine to obtain the advanced tools it needed to do more science in less time, Dell worked with Intel to bring UC Irvine into Intel's Cluster Ready Program. Intel Cluster Ready certification includes comprehensive testing and integration work, enabling Dell HPC clusters with Intel Cluster Ready certification to simplify acquisition and deployment.

UC Irvine also took advantage of the Dell Hardware Customization service, which integrates hardware, images, applications, peripherals and documents with systems as they're being built.

"With the integration that Dell does and getting the product preassembled and put into our cluster, we have become more efficient and proficient in the way we monitor and manage on a daily basis," says Hubbard. "GreenPlanet HPC is powered by Linux. We run multiple applications. We run very diverse software codes across 27 different professors and their research groups. And inside all these research groups are divergent codes that take weeks or [...] This opens up the landscape of the type of questions that you can ask."
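
The case study itself contains no code, but Amaro's point above about fast interconnect is easy to make concrete. The following is a minimal, hypothetical sketch, not part of the original Dell and Intel material: a small MPI program in C whose MPI_Allreduce collective is the kind of all-to-all communication step that a low-latency fabric such as QLogic TrueScale InfiniBand accelerates in tightly coupled simulation codes.

/* Minimal MPI sketch (hypothetical illustration; not from the case study).
 * Each rank holds a partial result, and MPI_Allreduce combines the partials
 * across the interconnect so every rank sees the global sum -- the kind of
 * collective step that dominates communication in tightly coupled codes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* process count across all nodes */

    /* Stand-in for a per-rank partial result (e.g., partial energies). */
    double local = (double)(rank + 1);
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Global sum over %d ranks: %.1f\n", size, global);

    MPI_Finalize();
    return 0;
}

Assuming an MPI implementation such as Open MPI or MPICH is installed, this would be built with mpicc and launched across nodes with mpirun -np <ranks>. Over InfiniBand the same MPI calls simply run on a faster fabric, which is why researchers like Amaro single out interconnect latency rather than application changes.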