IBM Power System E950 Technical Overview and Introduction

Total pages: 16
File type: PDF, size: 1020 KB

Front cover: IBM Power System E950 Technical Overview and Introduction
Authors: James Cruickshank, Yongsheng Li (Victor), Armin Röll
IBM Redpaper — International Technical Support Organization
First Edition, August 2018, REDP-5509-00

Note: Before using this information and the product it supports, read the information in "Notices."

This edition applies to the IBM Power System E950 (9040-MR9) system.

Important: At the time of publication, this book was based on a pre-GA version of the product. For the most up-to-date information regarding this product, consult the product documentation or subsequent updates of this book.

© Copyright International Business Machines Corporation 2018. All rights reserved. Note to U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices; Trademarks
Preface: Authors; Now you can become a published author, too!; Comments welcome; Stay connected to IBM Redbooks

Chapter 1. General description
  1.1 System overview
  1.2 Operating environment
  1.3 Physical package
  1.4 System features
    1.4.1 Minimum configuration
    1.4.2 Power supply features
    1.4.3 Processor module features
    1.4.4 Memory features
    1.4.5 PCIe slots
    1.4.6 USB
    1.4.7 Disk and media features
  1.5 I/O drawers
    1.5.1 PCIe Gen3 I/O Expansion Drawer
    1.5.2 I/O drawers and usable PCI slots
    1.5.3 EXP24S SFF Gen2-bay Drawer
    1.5.4 EXP24SX and EXP12SX SAS Storage Enclosures
  1.6 System racks
    1.6.1 New racks
    1.6.2 IBM Enterprise 42U Slim Rack 7965-S42
    1.6.3 IBM 7014 Model T42 rack
    1.6.4 1.8 Meter Rack (#0551)
    1.6.5 2.0 Meter Rack (#0553)
    1.6.6 Rack (#ER05)
    1.6.7 The AC power distribution unit and rack content
    1.6.8 Rack-mounting rules
    1.6.9 Useful rack additions
    1.6.10 Original equipment manufacturer racks
  1.7 Hardware Management Console
    1.7.1 New features of the Hardware Management Console
    1.7.2 Hardware Management Console overview
    1.7.3 Hardware Management Console code level
    1.7.4 Two architectures of Hardware Management Console
    1.7.5 Connectivity to POWER9 processor-based systems
    1.7.6 High availability Hardware Management Console configuration

Chapter 2. Architecture and technical overview
  2.1 The IBM POWER9 processor
    2.1.1 POWER9 processor overview
    2.1.2 POWER9 processor core
    2.1.3 Simultaneous multithreading
    2.1.4 POWER9 compatibility modes
    2.1.5 Processor feature codes
    2.1.6 Memory access
    2.1.7 On-chip L3 cache innovation and intelligent caching
    2.1.8 Hardware transactional memory
    2.1.9 POWER9 accelerator processor interfaces
    2.1.10 Power and performance management
    2.1.11 Comparison of the POWER9, POWER8, and POWER7+ processors
  2.2 Memory subsystem
    2.2.1 DIMM memory riser card
    2.2.2 Memory placement rules
    2.2.3 Memory activations
    2.2.4 Memory throughput
    2.2.5 Active Memory Mirroring
    2.2.6 Memory error correction and recovery
    2.2.7 Special Uncorrectable Error handling
  2.3 Capacity on Demand
    2.3.1 Capacity on Demand: New features
    2.3.2 Capacity Upgrade on Demand
    2.3.3 Processor activations
    2.3.4 Elastic Capacity on Demand (Temporary)
    2.3.5 Utility Capacity on Demand
    2.3.6 Trial Capacity on Demand
    2.3.7 Software licensing and CoD
    2.3.8 Solution Edition for Healthcare
  2.4 System buses
    2.4.1 PCIe Gen4 ...
Recommended publications
  • Investigations on Hardware Compression of IBM Power9 Processors
    Investigations on hardware compression of IBM Power9 processors — Jérome Kieffer, Pierre Paleo, Antoine Roux, Benoît Rousselle (HDF5 on Power9, 18/09/2019)

    Outline:
    ● The bandwidth issue at synchrotron sources
    ● Presentation of the evaluated systems: Intel Xeon vs IBM Power9, with benchmarks on bandwidth
    ● The need for compression of scientific data: compression as part of HDF5; the hardware compression engine NX-gzip within Power9; gzip performance benchmark; Bitshuffle-LZ4 benchmark; filter optimizations; benchmark of parallel filtered gzip
    ● Conclusions on the hardware and on the compression pipeline in HDF5

    The bandwidth issue at synchrotron sources: the data analysis computer is shown with its main interconnections and their associated bandwidths. Data reduction (azimuthal integration) and data compression are needed as the beamline link is upgraded to 100 Gbit/s; the figures are from a former generation of servers (Kieffer et al., Volume 25, Part 2, March 2018, pages 612–617, doi:10.1107/S1600577518000607).

    Further slides cover the topologies of Intel Xeon servers in 2019 (source: intel.com) and the architecture of the AC922 server from IBM featuring Power9 (credit: Thibaud Besson, IBM France).

    Bandwidth measurement: Xeon vs Power9
    Computer: Dell R840 | IBM AC922
    Processor: 4× Intel Xeon (12 cores), 2.6 GHz | 2× IBM Power9 (16 cores), 2.7 GHz
    Cache (L3): 19 MB | 8× 10 MB
    Memory channels: 4× 6 DDR4 | 2× 8 DDR4
    Memory capacity: up to 3 TB | up to 2 TB
    Theoretical memory speed: 512 GB/s | 340 GB/s
    Measured memory speed: 160 GB/s | 270 GB/s
    Interconnects: PCIe v3 | PCIe v4, NVLink2 & CAPI2
    GP-GPU co-processor: 2× Tesla V100 (PCIe v3) | 2× Tesla V100 (NVLink2)
    CPU ↔ GPU interconnect speed: 12 GB/s | 48 GB/s

    Strengths and weaknesses of the OpenPower architecture: while amd64 is today's de facto standard in HPC, it has a few competitors: arm64, ppc64le and, to a lesser extent, riscv and mips64.
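    The slides above compare gzip (the codec that the POWER9 NX-gzip engine can accelerate) with Bitshuffle-LZ4 as HDF5 compression filters. As a rough illustration of how such filtered datasets are written, here is a minimal Python sketch assuming h5py and hdf5plugin are installed; the file name, array shape, chunking, and compression level are illustrative choices, and whether the deflate work is actually offloaded to the NX engine depends on the zlib build on the system, not on anything in this snippet.

```python
import numpy as np
import h5py
import hdf5plugin  # registers the Bitshuffle/LZ4 filters with HDF5

# Synthetic 16-bit "detector frames" (illustrative size only)
data = np.random.randint(0, 4096, size=(10, 2048, 2048), dtype=np.uint16)

with h5py.File("frames.h5", "w") as f:
    # Plain gzip (deflate): the codec that an NX-enabled zlib could accelerate
    f.create_dataset("frames_gzip", data=data, chunks=(1, 2048, 2048),
                     compression="gzip", compression_opts=4)
    # Bitshuffle + LZ4: a common software alternative for detector data
    f.create_dataset("frames_bslz4", data=data, chunks=(1, 2048, 2048),
                     **hdf5plugin.Bitshuffle())
```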
  • IBM Flashsystem A9000 Product Guide (Version 12.3)
    IBM FlashSystem A9000 Product Guide (updated for Version 12.3.2) — Bert Dufrasne, Stephen Solewin, Francesco Anderloni, Roger Eriksson, Lisa Martinez

    This IBM® Redbooks® Product Guide is an overview of the main characteristics, features, and technologies that are used in IBM FlashSystem® A9000 Models 425 and 25U, with IBM FlashSystem A9000 Software V12.3.2. The IBM FlashSystem A9000 storage system uses IBM FlashCore® technology to help realize higher capacity and improved response times over disk-based systems and other competing flash and solid-state drive (SSD)-based storage. FlashSystem A9000 offers world-class software features that are built with IBM Spectrum™ Accelerate. The extreme performance of IBM FlashCore technology with a grid architecture and comprehensive data reduction creates one powerful solution. Whether you are a service provider who requires highly efficient management or an enterprise that is implementing cloud on a budget, FlashSystem A9000 provides consistent and predictable microsecond response times and the simplicity that you need. As a cloud-optimized solution, FlashSystem A9000 suits the requirements of public and private cloud providers who require features such as inline data deduplication, multi-tenancy, and quality of service. It also uses powerful software-defined storage capabilities from IBM Spectrum Accelerate, such as Hyper-Scale technology, VMware, and storage container integration. FlashSystem A9000 is a modular system that consists of three grid controllers and a flash enclosure. An external view of the Model 425 is shown in Figure 1 (IBM FlashSystem A9000 Model 425). © Copyright IBM Corp. 2017, 2018. All rights reserved.
  • Faster Oracle Performance with IBM FlashSystem
    IBM Systems and Technology Group, May 2013, Thought Leadership White Paper: Faster Oracle performance with IBM FlashSystem

    Executive summary: This whitepaper discusses methods for improving Oracle® database performance using flash storage to accelerate the most resource-intensive data that slows performance across the board. To this end, it discusses methods for identifying I/O performance bottlenecks, and it points out components that are the best candidates for migration to a flash storage appliance. An in-depth explanation of flash technology and possible implementations is also included.

    The problem of I/O wait time: Often, additional processing power alone will do little or nothing to improve Oracle performance. This is because the processor, no matter how fast, finds itself constantly waiting on mechanical storage devices for its data. While every other component in the "data chain" moves in terms of computation times and the raw speed of electricity through a circuit, hard drives move mechanically, relying on physical movement around a magnetic platter to access information. In the last 20 years, processor speeds have increased at a geometric rate. At the same time, however, conventional storage access times have only improved marginally (Figure 1: Comparing processor and storage performance improvements). The result is a massive performance gap, felt most painfully by database servers, which typically carry out far more I/O transactions than other systems. Super fast processors and massive amounts of bandwidth are often wasted as storage devices take several milliseconds just to access the requested data. When servers wait on storage, users wait on servers.
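    The white paper's point about I/O wait time can be made concrete with a rough timing experiment. This is a minimal sketch, not anything from the paper: the file path is a placeholder assumption, and for the latency gap to show up the file should be large enough (and cold enough) that random reads actually hit the disk rather than the page cache.

```python
import os
import random
import time

PATH = "/path/to/large/testfile"  # placeholder: any large existing file
BLOCK = 8192                      # 8 KiB, a typical database block size
SAMPLES = 1000

size = os.path.getsize(PATH)
with open(PATH, "rb", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(SAMPLES):
        # Seek to a random block-aligned offset and read one block
        offset = random.randrange(0, (size - BLOCK) // BLOCK) * BLOCK
        f.seek(offset)
        f.read(BLOCK)
    elapsed = time.perf_counter() - start

print(f"average random-read latency: {elapsed / SAMPLES * 1000:.3f} ms")
```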
  • IBM DS8880: Hybrid Cloud Integration with Transparent Cloud Tiering
    Accelerate with IBM Storage: IBM DS8880 — Hybrid cloud integration with Transparent Cloud Tiering. Craig Gordon, Consulting I/T Specialist, IBM Washington Systems Center. © Copyright IBM Corporation 2018.

    Accelerate with IBM Storage webinars: the free IBM Storage technical webinar series continues in 2018, with Washington Systems Center – Storage experts covering a variety of technical topics. Audience: clients who have or are considering acquiring IBM Storage solutions; Business Partners and IBMers are also welcome. To automatically receive announcements of upcoming Accelerate with IBM Storage webinars, send an email request to [email protected]. The series is located on the Accelerate with IBM Storage blog: https://www.ibm.com/developerworks/mydeveloperworks/blogs/accelerate/?lang=en. Also check out the WSC YouTube channel: https://www.youtube.com/playlist?list=PLSdmGMn4Aud-gKUBCR8K0kscCiF6E6ZYD&disable_polymer=true

    2018 webinars:
    January 9 – DS8880 Easy Tier
    January 17 – Start 2018 Fast! What's New for Spectrum Scale V5 and ESS
    February 8 – VersaStack – Solutions For Fast Deployments
    February 16 – TS7700 R4.1 Phase 2 GUI with Live Demo
    February 22 – DS8880 Transparent Cloud Tiering Live Demo
    March 1 – Spectrum Scale/ESS Application Case Study (register here: https://ibm2.webex.com/ibm2/onstage/g.php?MTID=e25146b6c1207cb081f4392087fb6f73a)
    March 7 – Spectrum Storage Management, Control, Insights, Foundation; what's the difference? (register here: https://ibm2.webex.com/ibm2/onstage/g.php?MTID=e3165dfc6c698c8fcb83132c95ae6dfe7)
    March 15 – IBM FlashSystem A9000/R and SVC Configuration Best Practices (register here: https://ibm2.webex.com/ibm2/onstage/g.php?MTID=e87d423c5cccbdefbbb61850d54f70f4b)

    © Copyright IBM Corporation 2018.
  • IBM Power® Systems for SAS® Empowers Advanced Analytics Harry Seifert, Laurent Montaron, IBM Corporation
    Paper 4695-2020: IBM Power® Systems for SAS® Empowers Advanced Analytics — Harry Seifert, Laurent Montaron, IBM Corporation

    ABSTRACT: For more than 40 years of partnership between IBM and SAS®, clients have benefited from the added value brought by IBM's infrastructure platforms to deploy SAS analytics, and now SAS Viya's evolution of modern analytics. IBM Power® Systems and IBM Storage empower SAS environments with infrastructure that does not make tradeoffs among performance, cost, and reliability. The unified solution stack, comprising server, storage, and services, reduces compute time, controls costs, and maximizes the resilience of the SAS environment with ultra-high bandwidth and the highest availability.

    INTRODUCTION: We will explore how to deploy SAS on IBM Power Systems platforms and unleash the full potential of the infrastructure to reduce deployment risk, maximize flexibility, and accelerate insights. We will start by reviewing IBM and SAS's technology relationship and the current state of SAS products on IBM Power Systems. Then we will look at some of the infrastructure options to deploy SAS 9.4 on IBM Power Systems and IBM Storage while maximizing resiliency and throughput by leveraging best practices. Next, we will look at SAS Viya, which introduces changes to the underlying infrastructure requirements while remaining deployable alongside a traditional SAS 9.4 operation. We'll explore the various deployment modes available. Finally, we'll look at tuning practices and reference materials available for a deeper dive into deploying SAS on IBM platforms.

    SAS: 40 YEARS OF PARTNERSHIP WITH IBM — IBM and SAS have been partners since the founding of SAS.
  • POWER® Processor-Based Systems
    IBM® Power® Systems RAS: Introduction to IBM® Power® Reliability, Availability, and Serviceability for POWER9® processor-based systems using IBM PowerVM™, with updates covering the latest 4+ socket Power10 processor-based systems — Daniel Henderson, Irving Baysah, IBM Systems Group

    Trademarks, Copyrights, Notices and Acknowledgements. Trademarks: IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web at http://www.ibm.com/legal/copytrade.shtml. The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: Active Memory™, AIX®, POWER®, POWER Hypervisor™, Power Systems™, Power Systems Software™, Power®, POWER6®, POWER7®, POWER7+™, POWER8™, PowerLinux™, PowerHA®, PowerVM®, PowerVC™, Power Architecture™, System x®, and System z®. Additional trademarks may be identified in the body of this document. Other company, product, or service names may be trademarks or service marks of others.

    Notices: The last page of this document contains copyright information, important notices, and other information.

    Acknowledgements: While this whitepaper has two principal authors/editors, it is the culmination of the work of a number of different subject matter experts within IBM who contributed ideas, detailed technical information, and the occasional photograph and section of description.
  • Best Practices for IBM DS8000 and IBM Z/OS Hyperswap with IBM Copy Services Manager
    Best Practices for IBM DS8000 and IBM z/OS HyperSwap with IBM Copy Services Manager — Thomas Luther, Alexander Warmuth, Marcelo Takakura. IBM Redbooks, First Edition, May 2019, SG24-8431-00.

    Note: Before using this information and the product it supports, read the information in "Notices."

    This edition applies to IBM Copy Services Manager (CSM) V6.2.3 with IBM DS8000 Version 8.5. © Copyright International Business Machines Corporation 2019. All rights reserved. Note to U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

    Contents
    Notices; Trademarks
    Preface: Authors; Now you can become a published author, too!; Comments welcome; Stay connected to IBM Redbooks
    Part 1. Introduction and planning
    Chapter 1. Introduction
      1.1 IBM Copy Services Manager overview
      1.2 CSM licenses
        1.2.1 CSM licenses for z/OS platforms
        1.2.2 CSM licenses for distributed server platforms
      1.3 z/OS HyperSwap overview
        1.3.1 z/OS HyperSwap: Not so basic anymore
        1.3.2 CSM sessions that support HyperSwap
        1.3.3 z/OS HyperSwap functions
        1.3.4 HyperSwap sequence
        1.3.5 Planned and unplanned HyperSwap
      1.4 CSM and HyperSwap communication flow
      1.5 GDPS Metro solutions
      1.6 IBM Resiliency Orchestration and CSM
    Chapter 2. IBM Copy Services Manager and IBM z/OS HyperSwap implementation topologies ...
  • IBM Power Systems Performance Report Apr 13, 2021
    IBM Power Performance Report, Power7 to Power10 — September 8, 2021

    Table of Contents:
    Introduction to Performance of IBM UNIX, IBM i, and Linux Operating System Servers
    Section 1 – SPEC® CPU Benchmark Performance
      Section 1a – Linux Multi-user SPEC® CPU2017 Performance (Power10)
      Section 1b – Linux Multi-user SPEC® CPU2017 Performance (Power9)
      Section 1c – AIX Multi-user SPEC® CPU2006 Performance (Power7, Power7+, Power8)
      Section 1d – Linux Multi-user SPEC® CPU2006 Performance (Power7, Power7+, Power8)
    Section 2 – AIX Multi-user Performance (rPerf)
      Section 2a – AIX Multi-user Performance (Power8, Power9 and Power10)
      Section 2b – AIX Multi-user Performance (Power9) in Non-default Processor Power Mode Setting
      Section 2c – AIX Multi-user Performance (Power7 and Power7+)
      Section 2d – AIX Capacity Upgrade on Demand Relative Performance Guidelines (Power8)
      Section 2e – AIX Capacity Upgrade on Demand Relative Performance Guidelines (Power7 and Power7+)
    Section 3 – CPW Benchmark Performance
      Section 3a – CPW Benchmark Performance (Power8, Power9 and Power10)
      Section 3b – CPW Benchmark Performance (Power7 and Power7+)
    Section 4 – SPECjbb®2015 Benchmark Performance
      Section 4a – SPECjbb®2015 Benchmark Performance (Power9)
      Section 4b – SPECjbb®2015 Benchmark Performance (Power8)
    Section 5 – AIX SAP® Standard Application Benchmark Performance
      Section 5a – SAP® Sales and Distribution (SD) 2-Tier – AIX (Power7 to Power8)
      Section 5b – SAP® Sales and Distribution (SD) 2-Tier – Linux on Power (Power7 to Power7+)
  • IBM FlashSystem A9000 Product Overview
    IBM FlashSystem A9000, Version 12.2, Product Overview — IBM publication GC27-8583-07

    Note: Before using this document and the product it supports, read the information in "Notices."

    Edition notice: Publication number GC27-8583-07. This publication applies to IBM FlashSystem A9000 version 12.2 and to all subsequent releases and modifications until otherwise indicated in a newer publication. © Copyright IBM Corporation 2016, 2018. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

    Contents
    Figures
    Tables
    About this document: Intended audience; Document conventions; Related information and publications; IBM Publications Center; Sending or posting your comments; Getting information, help, and service
    Chapter 1. Introduction
      Architecture: Flash enclosure; Grid controllers; Back-end interconnect; Logical architecture
      Functionality
      Specifications
      Performance ...
  • Towards a Portable Hierarchical View of Distributed Shared Memory Systems: Challenges and Solutions
    Towards A Portable Hierarchical View of Distributed Shared Memory Systems: Challenges and Solutions — Millad Ghane (Department of Computer Science, University of Houston, TX, USA), Sunita Chandrasekaran (Department of Computer and Information Sciences, University of Delaware, DE, USA), Margaret S. Cheung (Physics Department, University of Houston, and Center for Theoretical Biological Physics, Rice University, TX, USA)

    Abstract: An ever-growing diversity in the architecture of modern supercomputers has led to challenges in developing scientific software. Utilizing heterogeneous and disruptive architectures (e.g., off-chip and, in the near future, on-chip accelerators) has increased the software complexity and worsened its maintainability. To that end, we need a productive software ecosystem that improves the usability and portability of applications for such systems while allowing every parallelism opportunity to be exploited. In this paper, we outline several challenges that we encountered in the implementation of Gecko, a hierarchical model for distributed shared memory architectures, using a directive-based programming ...

    1 Introduction: Heterogeneity has become increasingly prevalent in recent years given its promising role in tackling the energy and power consumption crisis of high-performance computing (HPC) systems [15, 20]. Dennard scaling [14] has instigated the adaptation of heterogeneous architectures in the design of supercomputers and clusters by the HPC community. The July 2019 TOP500 [36] report shows how 126 systems in the list are heterogeneous systems configured with one or many GPUs. This is the prevailing trend in the current generation of supercomputers. As an example, Summit [31], the fastest supercomputer according to the Top500 list (June 2019) [36], has two IBM POWER9 processors and six NVIDIA Volta V100 GPUs.
  • Upgrade to POWER9 Planning Checklist
    Achieve your full business potential with IBM POWER9 — future-forward infrastructure designed to crush data-intensive workloads. Upgrade to POWER9 Planning Checklist. Using this checklist will help to ensure that your infrastructure strategy is aligned to your needs, avoiding potential cost overruns or capability shortfalls.

    1. Determine current and future capacity requirements. Bring your team together, assess your current application workload requirements and three- to five-year outlook. You'll then have a good picture of when and where application growth will take place, enabling you to secure capacity at the appropriate time on an as-needed basis. (A worked capacity-projection sketch follows this checklist.)
    2. Assess operational efficiencies and identify opportunities to improve service levels while decreasing exposure to security and compliancy issues/problems. With new technologies that allow you to easily adjust capacity, you will be in a much better position to lower costs, improve service levels, and increase efficiency.
    3. Create a detailed inventory of servers across your entire IT infrastructure.
    5. Identify all dependencies for major database platforms, including Oracle, DB2, SAP HANA, and open-source databases like EnterpriseDB, MongoDB, neo4j, and Redis. You're likely running major databases on the Power Systems platform; co-locating your current servers may be a way to reduce expenditure and increase flexibility.
    6. Understand current and future data center environmental requirements. You may be unnecessarily overspending on power, cooling and space. Savings here will help your organization avoid costs associated with data center expansion.
    7. Identify the requirements of your strategy for on- and off-premises cloud infrastructure. As you move to the cloud, ensure you have a strong strategy to determine which ...
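    As a worked illustration of step 1 (projecting three- to five-year capacity), the sketch below simply compounds per-workload growth rates. The workload names, core counts, and growth rates are invented for the example and are not from the checklist.

```python
# Hypothetical inventory: current core usage and expected annual growth per workload.
workloads = {
    "oracle_prod":    {"cores": 48, "annual_growth": 0.15},
    "sap_hana":       {"cores": 32, "annual_growth": 0.25},
    "open_source_db": {"cores": 16, "annual_growth": 0.10},
}

for years in (3, 5):
    projected = sum(
        w["cores"] * (1 + w["annual_growth"]) ** years
        for w in workloads.values()
    )
    print(f"cores needed in {years} years: {projected:.0f}")
```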
  • AC922 Data Movement for CORAL
    AC922 Data Movement for CORAL — Steve Roberts, Pradeep Ramanna, John Walthour (IBM Cognitive Systems, Austin, TX)

    Abstract—Recent publications have considered the challenge of movement in and out of the high bandwidth memory in an attempt to maximize GPU utilization and minimize overall application wall time. This paper builds on previous contributions [5] [17], which simulate software models, advocate optimizations, and suggest design considerations. This contribution characterizes the data movement innovations of the AC922 nodes IBM delivered to Oak Ridge National Labs and Lawrence Livermore National Labs as part of the 2014 Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) joint procurement activity. With a single HPC system able to perform up to 200 PF of processing with access to 2.5 PB of memory, this architecture motivates a careful look at data movement. The AC922 POWER9 systems with NVIDIA V100 GPUs have cache-line granularity, more than double the bandwidth of PCIe Gen3, and low-latency interfaces, and are interconnected by dual-rail Mellanox CAPI/EDR HCAs.

    ... and GPU processing elements associated with their respective DRAM or high bandwidth memory (HBM2). Each processor element creates a NUMA domain, which in total encompasses over 2 PB of total memory (see Table I for total capacity).

    TABLE I — CORAL systems memory summary
    Lab  | Nodes | Sockets | DRAM (TB) | GPUs  | HBM2 (TB)
    ORNL | 4607  | 9216    | 2,304     | 27648 | 432
    LLNL | 4320  | 8640    | 2,160     | 17280 | 270

    Efficient programming models call for accessing system memory with as little data replication as possible and with low instruction overhead.
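    As a quick sanity check of the "over 2 PB per system" statement, this small sketch recomputes each lab's combined DRAM plus HBM2 capacity from the Table I values; the use of binary units (1 PB = 1024 TB) is an assumption of the example.

```python
# DRAM and HBM2 capacities per system, in TB, taken from Table I above.
systems = {
    "ORNL": {"dram_tb": 2304, "hbm2_tb": 432},
    "LLNL": {"dram_tb": 2160, "hbm2_tb": 270},
}

for name, s in systems.items():
    total_pb = (s["dram_tb"] + s["hbm2_tb"]) / 1024
    print(f"{name}: {total_pb:.2f} PB combined DRAM + HBM2")
```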