POWER8/9 Deep Dive


POWER8/9 Deep Dive
Jeff Stuecheli, POWER Systems, IBM Systems
© 2016 IBM Corporation

POWER Processor Technology Roadmap
• POWER7 (45 nm, 1H10) – Enterprise: 8 cores, SMT4, eDRAM L3 cache
• POWER7+ (32 nm, 2H12) – Enterprise and big-data optimized: 2.5x larger L3 cache, on-die acceleration, zero-power core idle state
• POWER8 family (22 nm, 1H14 to 2H16) – Enterprise, scale-up and scale-out optimized silicon: up to 12 cores, SMT8, CAPI acceleration, high-bandwidth GPU attach
• POWER9 family (14 nm, 2H17 to 2H18+) – Built for the cognitive era: enhanced core and chip architecture optimized for emerging workloads; a processor family with scale-up and scale-out variants; premier platform for accelerated computing

POWER8 Processor Family
• SuperCompute chip (SC SCM) – Power S8xxLC, the NVLink GPU-enabled SuperCompute node: single large chip, up to 12 cores, NVLink GPU attach, half memory capacity, cost reduced
• Scale-out chip (scale-out DCM) – Power S8xx/S8xxL: dual small chips, up to 2 x 6 cores, up to 4-socket SMP, up to 48 PCIe lanes, full memory
• Enterprise chip (enterprise SCM) – Power E850 (up to 4-socket SMP) and Power E870/E880 (up to 16-socket SMP): single large chip, up to 12 cores, up to 32 PCIe lanes, full memory

IBM S822LC with NVIDIA P100 and NVLink
• Up to four P100 GPUs per system for extreme performance
• OpenPOWER hardware design, leveraging technologies from IBM and OpenPOWER partners
• Focused on high-performance applications: advanced analytics, deep neural networks, machine learning

NVIDIA Tesla P100 – First GPU with NVLink
• Extreme performance powering HPC, deep learning, and many other GPU-computing areas
• NVLink: NVIDIA's new high-speed, high-bandwidth interconnect for maximum application scalability
• HBM2: fast, high-capacity, extremely efficient stacked GPU memory architecture
• Unified Memory and compute preemption: a significantly improved programming model
• 16 nm FinFET: enables more features, higher performance, and improved power efficiency

NVLink significantly increases performance both for GPU-to-GPU communication and for GPU access to system memory:
• 40 GB/s versus 16 GB/s for PCIe Gen3
• 5.3 TFLOPS double-precision (FP64), 10.6 TFLOPS single-precision (FP32), and 21.2 TFLOPS half-precision (FP16) performance
• Up to 3x the performance of the previous generation and 3x the memory bandwidth of K40/M40

NVIDIA Roadmap on POWER
• Kepler (K40, K80) – CUDA 5.5 to 7.0; PCIe attach to POWER8
• Pascal (P100, SXM2) – CUDA 8 with Unified Memory; NVLink attach to POWER8 with NVLink (POWER8+)
• Volta (SXM2) – CUDA 9; NVLink 2.0 attach to POWER9 with buffered memory

POWER9 Family – Deep Workload Optimizations
• Emerging analytics, AI, cognitive: new core for stronger thread performance; 2x compute resource per socket; built for acceleration with OpenPOWER solution enablement (e.g., DB2 BLU)
• Technical / HPC: highest-bandwidth GPU attach; advanced GPU/CPU interaction and memory sharing; high-bandwidth direct-attach memory
• Cloud / HSDC: power, packaging, and cost optimizations for a range of platforms; superior virtualization features (security, power management, QoS, interrupts); state-of-the-art I/O technology for network and storage performance
• Enterprise: large, flat scale-up systems; buffered memory for maximum capacity; leading RAS; improved caching
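The interconnect bandwidths quoted above translate directly into data-movement time. As a back-of-the-envelope sketch (the 4 GB payload size is an arbitrary assumption, not from the slides, and real transfers add latency and protocol overhead):

```python
# Idealized transfer times for a GPU payload over the two interconnects
# quoted in the slides: PCIe Gen3 (~16 GB/s) and NVLink (~40 GB/s).
PCIE_GEN3_GBPS = 16.0   # GB/s, per the slides
NVLINK_GBPS = 40.0      # GB/s, per the slides

def transfer_seconds(payload_gb: float, bandwidth_gbps: float) -> float:
    """Idealized transfer time, ignoring latency and protocol overhead."""
    return payload_gb / bandwidth_gbps

payload = 4.0  # GB, hypothetical working set
pcie = transfer_seconds(payload, PCIE_GEN3_GBPS)   # 0.25 s
nvlink = transfer_seconds(payload, NVLINK_GBPS)    # 0.10 s
print(f"PCIe Gen3: {pcie:.2f} s, NVLink: {nvlink:.2f} s, "
      f"speedup {pcie / nvlink:.2f}x")
```

At the quoted peak rates the same payload moves 2.5x faster over NVLink, which is why the slides emphasize it for both GPU-to-GPU traffic and GPU access to system memory.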
POWER9 Processor – Common Features
• New core microarchitecture: stronger thread performance; efficient, agile pipeline; POWER ISA v3.0
• Enhanced cache hierarchy: 120 MB NUCA L3 architecture; 12 x 20-way associative regions; advanced replacement policies; fed by 7 TB/s on-chip bandwidth
• Leadership hardware acceleration platform: enhanced on-chip acceleration; NVIDIA NVLink 2.0 for high bandwidth and advanced new features (25G); CAPI 2.0 for coherent accelerator and storage attach (PCIe Gen4); new CAPI with improved latency and bandwidth over an open interface (25G)
• State-of-the-art I/O subsystem: PCIe Gen4 with 48 lanes
• Cloud and virtualization innovation: quality-of-service assists; new interrupt architecture; workload-optimized frequency; hardware-enforced trusted execution
• High-bandwidth signaling technology: 16 Gb/s interface for local SMP; 25 Gb/s common-link interface for accelerators and remote SMP
• 14 nm FinFET semiconductor process: improved device performance and reduced energy; 17-layer metal stack and eDRAM; 8.0 billion transistors

POWER9 Processor Family
Four targeted implementations, along two axes:
• Core count / size: 24 SMT4 cores per chip (Linux-ecosystem optimized) or 12 SMT8 cores per chip (PowerVM ecosystem continuity)
• SMP scalability / memory subsystem: scale-out, two-socket optimized (robust 2-socket SMP, direct memory attach with up to 8 DDR4 ports, commodity packaging form factor) or scale-up, multi-socket optimized (large multi-socket systems, buffered memory attach with 8 buffered channels, scalable system topology and capacity)

New POWER9 Cores
Optimized for stronger thread performance and efficiency:
• Increased execution-bandwidth efficiency for a range of workloads, including commercial, cognitive, and analytics
• Sophisticated instruction scheduling and branch prediction for unoptimized applications and interpretive languages
• Adaptive features for improved efficiency and performance, especially in lower-memory-bandwidth systems
Available with SMT8 or SMT4 cores: an 8- or 4-threaded core built from modular execution slices.
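The two POWER9 core options above trade core granularity for thread width while holding total thread capacity constant. A quick illustrative check of the arithmetic:

```python
# Threads per chip for the two POWER9 core options: 24 SMT4 cores or
# 12 SMT8 cores, per the slides.
def threads_per_chip(cores: int, threads_per_core: int) -> int:
    """Total hardware threads a chip exposes."""
    return cores * threads_per_core

smt4_chip = threads_per_chip(24, 4)  # Linux-ecosystem variant
smt8_chip = threads_per_chip(12, 8)  # PowerVM-ecosystem variant
assert smt4_chip == smt8_chip == 96  # same total thread capacity per chip
print(smt4_chip, smt8_chip)
```

Both variants expose 96 hardware threads per chip; the choice is about partition granularity and ecosystem, not raw thread count.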
• SMT8 core (16-instruction fetch, 12-instruction decode/dispatch per cycle): PowerVM ecosystem continuity; strongest thread; optimized for large partitions
• SMT4 core (8-instruction fetch, 6-instruction decode/dispatch per cycle): Linux ecosystem focus; core count per socket; virtualization granularity
[Diagram: both cores are built from the same branch (BRU), crypto, DFU, quad-precision, and 64b VSU/LSU execution slices; the SMT8 core carries twice the slice resources of the SMT4 core.]

POWER9 Core Execution Slice Microarchitecture
Modular execution slices: each 64b slice pairs a vector/scalar datapath with a load/store datapath, and two slices combine into a 128b super-slice. Where the POWER8 SMT8 core diagram shows monolithic ISU, FXU, VSU, and LSU units, the POWER9 SMT8 core is built from four super-slices and the POWER9 SMT4 core from two.
The re-factored core provides improved efficiency and workload alignment:
• Enhanced pipeline efficiency with modular execution and intelligent pipeline control
• Increased pipeline utilization with symmetric data-type engines: fixed, float, 128b, SIMD
• Shared compute resource optimizes data-type interchange

POWER9 Core Pipeline Efficiency
Shorter pipelines with reduced disruption improve application performance for modern codes:
• Fetch-to-compute shortened by 5 cycles relative to POWER8
• Advanced branch prediction
Higher performance and pipeline utilization:
• Improved instruction management: instruction grouping removed and instruction cracking reduced; enhanced instruction fusion; complete up to 128 instructions per cycle (64 in the SMT4 core)
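The slice organization above can be sketched as a simplified occupancy model (the helper below is a hypothetical illustration, not IBM's definition): a 64b operation occupies one slice, a 128b vector operation occupies a two-slice super-slice, so a four-slice SMT4 core issues up to four 64b or two 128b data operations per cycle.

```python
# Simplified model of POWER9 execution-slice occupancy: 64b work uses one
# slice, 128b (vector) work uses a two-slice super-slice.
SLICES_SMT4 = 4   # per the slides: 4x scalar-64b / 2x vector-128b
SLICES_SMT8 = 8   # the SMT8 core has twice the slice resources

def ops_per_cycle(total_slices: int, op_width_bits: int) -> int:
    """Peak data-operations issued per cycle under this toy model."""
    slices_needed = 2 if op_width_bits == 128 else 1
    return total_slices // slices_needed

assert ops_per_cycle(SLICES_SMT4, 64) == 4    # four 64b scalar ops
assert ops_per_cycle(SLICES_SMT4, 128) == 2   # two 128b vector ops
assert ops_per_cycle(SLICES_SMT8, 128) == 4   # SMT8 doubles the resources
```

The same pool of slices serves fixed, float, and SIMD work, which is the "symmetric data-type engines" point: capacity shifts between widths instead of sitting idle in dedicated units.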
Reduced latency and improved scalability:
• Local pipe control of load/store operations: improved hazard avoidance; local recycles for reduced hazard disruption; improved lock management

POWER9 – Core Compute
Symmetric engines per data type for the SMT4 core's resources deliver higher performance on diverse workloads:
• Fetch / branch: 32 kB, 8-way instruction cache; 8-instruction fetch, 6-instruction decode; 1x branch execution
• Slices issue VSU and AGEN operations: 4x scalar 64b / 2x vector 128b; 4x load/store AGEN
• Vector Scalar Unit (VSU) pipes: 4x ALU + simple (64b); 4x FP + FX-MUL + complex (64b); 2x permute (128b); 2x quad fixed (128b); 2x fixed divide (64b); 1x quad-precision FP and decimal FP; 1x cryptography
• Load Store Unit (LSU) slices: 32 kB, 8-way data cache; up to 4 DW loads or stores
Efficient cores deliver 2x compute resource per socket.

POWER ISA v3.0
The new instruction set architecture implemented on POWER9 brings broader data-type support:
• 128-bit IEEE 754 quad-precision float – full-width quad precision for financial and security applications
• Expanded BCD and 128b decimal integer – for database and native analytics
• Half-precision float conversion – optimized …
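The half-precision format targeted by the new conversion instructions can be demonstrated with Python's struct module, whose 'e' format is an IEEE 754 binary16 value (a host-side illustration of the data type, not POWER9 code):

```python
import struct

# Round-trip a float through IEEE 754 half precision (binary16), the
# format POWER ISA v3.0 adds conversion support for. Binary16 keeps only
# an 11-bit significand, so most values are perturbed slightly.
def to_fp16_and_back(x: float) -> float:
    return struct.unpack('<e', struct.pack('<e', x))[0]

assert struct.calcsize('<e') == 2   # binary16 occupies two bytes
value = 3.14159
rounded = to_fp16_and_back(value)
print(value, '->', rounded)         # small precision loss on conversion
assert rounded != value and abs(rounded - value) < 1e-2
```

Halving the storage per element is the point: a conversion instruction lets a core move twice as many values per unit of memory or interconnect bandwidth, at the cost of precision.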