Upgrading from POWER5 to POWER6
Planning and preparation for a POWER6 upgrade: One customer's experience

Skill Level: Introductory

Chris Gibson ([email protected])
AIX Specialist
Australia Post

09 Jun 2009

Read about and learn from my experiences in upgrading a POWER5® 595 to a new POWER6® 595.

Introduction

This article discusses my experiences when upgrading a POWER5 595 to a new POWER6 595. It is not intended as an official "how-to" guide, but rather a discussion of how I performed the upgrade and what decisions and considerations I made during the planning and execution phases. I hope this information will help others who need to perform similar tasks within their own organizations or those of their customers.

Let me start by stating that each environment is different. Most sites customize their AIX® operating system and POWER® hardware configuration to meet their requirements, so what I describe here may not match what you have in your environment. Please use your best judgment and apply what you need from this article, but only if it is appropriate for your site. Only you can make this call, as you know more about how your AIX and POWER infrastructure is configured (and why) than anyone else!

An important note about my AIX environment: all of my LPARs were virtualized; that is, they were all micro-partitioned and they all used Virtual I/O (VIO) for all disk and network devices. None of my AIX LPARs had any dedicated physical hardware. All physical devices were owned by the Virtual I/O Servers (VIOS).

I will outline the steps I performed before and after the POWER6 upgrade. The focus is on the VIOS, AIX, and HACMP tasks I executed after the hardware had been physically upgraded to POWER6.

Upgrade overview

I needed to upgrade my existing System p® 595 (9119-595) landscape to the new POWER6 595 (9119-FHA); please refer to Figure 1 below. When I say upgrade, I mean this was an MES upgrade. MES stands for Miscellaneous Equipment Specification. An MES upgrade includes any server hardware change, which can be an addition, improvement, removal, or any combination of these. An important feature of an MES upgrade is that the system's serial number does not change.

Figure 1. 9119-595 and the 9119-FHA

Essentially, our upgrade from POWER5 to POWER6 involved moving the existing I/O drawers (including internal disks, FC, and Ethernet adapters) from the POWER5 frame to the POWER6 frame. Once this was completed, the POWER6 system would be powered up and the IBM® CE (Customer Engineer) would hand the system back to me. Then I would attempt to bring up the LPARs on the new POWER6 server.

This was the first time I had migrated to a newer POWER platform using the MES upgrade method, and I had concerns. In the past I had migrated AIX systems to newer platforms with both the old and new systems sitting side by side. For example, several years ago, when migrating from POWER4 to POWER5, we purchased a new 9119-595 and sat it next to the old POWER4 p690. We connected the new 595 to our SAN and network and started moving LPARs from the p690 (one at a time) by restoring a mksysb using Network Installation Manager (NIM).
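For reference, the listing below is a minimal sketch of the kind of NIM operations involved in that side-by-side migration: define the saved mksysb image as a NIM resource, then push it onto the target LPAR. The resource, SPOT, and client names (lpar01_mksysb, spot_61, lpar01) and the image location are hypothetical examples, not the names we actually used.

    # Define the saved mksysb image as a NIM resource (names are examples only)
    nim -o define -t mksysb -a server=master \
        -a location=/export/mksysb/lpar01.mksysb lpar01_mksysb

    # Restore that mksysb onto the target NIM client using an existing SPOT,
    # leaving the client to be booted into the installation manually
    nim -o bos_inst -a source=mksysb -a mksysb=lpar01_mksysb \
        -a spot=spot_61 -a accept_licenses=yes -a no_client_boot=yes lpar01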
The advantage of this method was that if we had an issue on the new 595, we could easily fall back to the p690, as the original LPAR was still available. It also allowed us time to test the 595 before we unleashed any workload onto the system. This gave us greater confidence that all our components were compatible (such as software and firmware) and functioning as expected. It essentially gave us time to shake out any bugs or issues with the new hardware. At the time, this was what I considered my preferred way of performing the migration to POWER6.

With the MES upgrade method, the old p5 system would be shut down, rolled out the door, and the new p6 moved into its place. The IBM CE would then transfer the I/O drawers, configure the system, verify it was OK, hand it back to me, and walk away (so to speak!). With this 'big bang' upgrade approach, I would not be able to rehearse or test the upgrade process, and there was potential to be caught out by unknown issues. My main concern was that if there was a problem with the 9119-FHA, we did not have a way to easily back out to the old system. We could not simply power up the old p5 and start the LPARs. Nor could we test that the new hardware was functioning OK, well in advance, before activating the LPARs and running production workload. Given that this was an MES upgrade and that wasn't going to change, I set about planning for the upgrade.

Planning and preparation

The most pressing decision I had to make was what migration approach I was going to use for my AIX LPARs. I had two choices: I could either rebuild the VIOS and the LPARs from a mksysb restore, or attempt to boot them from disk.

I understood that the only documented and official method to migrate LPARs to newer or different hardware was a "mksysb clone" operation, which means taking a mksysb of the LPAR and restoring it on the new p6 system. However, I was interested in simply booting the LPARs on the new p6 595. This was not guaranteed to work, and I could certainly understand why. To boot on the new system, you need the appropriate device driver filesets to support the new platform. This meant ensuring that all your systems were installed with "Enable System Backups to install any system" set to Yes. This setting enables a system to be installed on any other system (using cloning) by installing all devices and kernels; no guarantee is implied by the setting. It is the default setting when installing AIX, and I had always ensured it was set when loading the operating system. You can refer to the /usr/lpp/bosinst/bosinst.template.README file for more details. However, when I think about how Live Partition Mobility works, and the fact that you can move a Virtual I/O client (VIOC) LPAR from one physical system to another (without a mksysb restore), I wonder if this may change in the future.

Some evidence on the AIX forums suggested that this method might work. One customer had reported using it when they upgraded from a p5 570 to a p6 570.

Some other considerations when choosing the migration approach were the I/O bus numbering and the LPAR profiles. According to the 9119-595 to 9119-FHA MES upgrade instructions, the I/O bus numbering would not change after the upgrade. Were my LPAR profiles going to be recovered, and intact, on the new p6, or would I need to rebuild them?
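As a quick illustration of how that install option surfaces on a running system, the sketch below checks the record of the last installation. It assumes the bosinst.data used for that install is still available in its usual location (/var/adm/ras/bosinst.data); the exact path may differ at your AIX level, so treat this as an assumption rather than a documented check from the article.

    # "Enable System Backups to install any system" corresponds to the
    # ALL_DEVICES_KERNELS field in the control_flow stanza of bosinst.data
    grep ALL_DEVICES_KERNELS /var/adm/ras/bosinst.data

    # Expected output if the option was set to Yes at install time:
    #   ALL_DEVICES_KERNELS = yes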
The MES upgrade instructions stated that the IBM CE should perform a Recover Partition Data operation from the HMC after the upgrade. This meant I would not have to recreate all of my LPAR profiles from scratch (either using the System Planning Tool (SPT) or a scripting method). I also knew that the system serial number was guaranteed not to change, so I wasn't going to have application licensing problems caused by a changed system ID.

I finally settled on my approach to the upgrade. I would boot my LPARs (including my VIOS) and use a mksysb restore only if I had serious issues bringing up the systems in a clean state. My procedure would be (example commands follow the list):

• Document the current virtual device mappings on each VIOS. I used lsmap -all and lsmap -all -net for this task.
• Collect PVID and LUN ID information for all the virtual target devices backed by physical disks.
• Verify that all virtual slot IDs were greater than 10. Starting with HMC V7, the HMC reserves the first ten virtual adapter slots on each VIOS for internal HMC use.
• Take a mksysb of all LPARs and VIOS. Use these mksysb images to restore from, if required.
• Back up the managed system's partition profile data.
• IBM CE to perform the hardware upgrade to POWER6.
• IBM CE to restore the managed system's profile data from the previous backup.
• Verify that the partition profile data for each AIX LPAR and VIOS is correct on the HMC.
• Upon successful verification of the LPAR and VIOS profiles, boot each VIOS. Enter the SMS menu, confirm the boot list, and boot the VIOS.
• Verify the virtual device configuration and status on each VIOS. Perform a health check on each VIOS. If the health check is not successful, restore the VIOS from mksysb using NIM.
• Upon successful verification of each VIOS, boot each LPAR. Enter the SMS menu, confirm the boot list, and boot the LPAR.
• If booting an LPAR failed, restore the LPAR from a mksysb image.
• Correct the boot list on each LPAR. Start functional verification of the environment, such as VIOS failover and application startup and test.
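To make the documentation and backup steps above more concrete, here is a hedged sketch of the commands involved. The VIOS commands are run from the padmin command line; the HMC bkprofdata example uses a hypothetical managed system name (MY-595) and backup file name, so substitute your own values.

    # On each VIOS (as padmin): list the virtual SCSI and virtual Ethernet
    # mappings and keep a copy of the output for post-upgrade comparison
    $ lsmap -all
    $ lsmap -all -net

    # Record the PVIDs of the hdisks backing the virtual target devices
    $ lspv

    # On the HMC: back up the managed system's partition profile data
    # (managed system name and file name below are examples only)
    $ bkprofdata -m MY-595 -f profiles_before_p6_upgrade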