IBM System z10 Introduction and Hardware Overview

Klaus-Dieter Müller August 2009 kmuller@de..com

© 2009 IBM Corporation

Trademarks The following are trademarks of the International Business Machines Corporation in the United States and/or other countries. AlphaBlox* GDPS* Processor Resource/Systems Manager System z10 APPN* RACF* System/30 CICS* HyperSwap Redbooks* Tivoli* Cool Blue IBM* Resource Link Tivoli Storage Manager DB2* IBM eServer RETAIN* TotalStorage* DFSMS IBM logo* REXX VSE/ESA DFSMShsm IMS RMF VTAM* DFSMSrmm Language Environment* S/370 WebSphere* DFSORT* Lotus* S/390* xSeries* DirMaint Multiprise* Scalable Architecture for Financial Reporting z9* DRDA* MVS Sysplex Timer* z10 DS6000 OMEGAMON* Systems Director Active Energy Manager z10 BC DS8000 Parallel Sysplex* System p* z10 EC ECKD Performance Toolkit for VM System Storage z/Architecture* ESCON* POWER6 System x z/OS* FICON* PowerPC* System z z/VM* FlashCopy* PR/SM System z9* z/VSE * Registered trademarks of IBM Corporation zSeries* The following are trademarks or registered trademarks of other companies. Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce. * All other products may be trademarks or registered trademarks of their respective companies. Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. 
IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography. © 2009 IBM Corporation z10 Hardware IBM System z

IBM z10 EC Continues the CMOS Mainframe Heritage

[Chart: processor clock frequency by generation]
– 1997 G4 – 300 MHz
– 1998 G5 – 420 MHz
– 1999 G6 – 550 MHz
– 2000 z900 – 770 MHz
– 2003 z990 – 1.2 GHz
– 2005 z9 EC – 1.7 GHz
– 2008 z10 EC – 4.4 GHz

ƒ G4 – 1st full-custom CMOS S/390®
ƒ G5 – IEEE-standard BFP; branch target prediction
ƒ G6 – Copper technology (Cu BEOL)
ƒ z900 – Full 64-bit z/Architecture®
ƒ z990 – Superscalar CISC pipeline
ƒ z9 EC – System level scaling
ƒ z10 EC – Architectural extensions


Do GHz matter?
ƒ GHz does matter
– It is the "rising tide that lifts all boats"
– It is especially important for CPU-intensive applications
ƒ GHz is not the only dimension that matters
– System z focus is on balanced system design across many factors
• Frequency, pipeline efficiency, energy efficiency, cache/memory design, I/O design
ƒ System performance is not linear with frequency
– Need to use LSPR + System z capacity planning tools for real client/workload sizing
ƒ System z has been on a consistent path while others have oscillated between extremes
– Growing frequency steadily, with occasional jumps/step functions (G4 in 1997, z10 in 2008)
ƒ z10 leverages technology to get the most out of a high-frequency design
– Low-latency pipeline
– Dense packaging (MCM) allows MRU cooling, which yields more power-efficient operation
– Virtualization technology (etc.) allows consistent performance at high utilization, which makes CPU power efficiency a much smaller part of the system/data-center power consumption picture
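A quick arithmetic sketch ties the frequencies on the preceding chart to the cycle-time figures quoted later in this deck. The frequencies are the ones plotted above; everything else is plain unit conversion, and the program itself is illustrative only, not something this presentation prescribes.

#include <stdio.h>

/* Illustrative only: convert the clock frequencies from the preceding
 * chart into cycle times. 4.4 GHz works out to roughly 0.23 ns, the
 * cycle time quoted for the z10 EC MCM later in this presentation. */
int main(void)
{
    const struct { const char *name; double ghz; } gen[] = {
        { "G4 (1997)",     0.300 },
        { "z900 (2000)",   0.770 },
        { "z990 (2003)",   1.2   },
        { "z9 EC (2005)",  1.7   },
        { "z10 EC (2008)", 4.4   },
    };
    for (size_t i = 0; i < sizeof gen / sizeof gen[0]; i++)
        printf("%-14s %5.2f GHz -> cycle time %6.3f ns\n",
               gen[i].name, gen[i].ghz, 1.0 / gen[i].ghz);
    return 0;
}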


IBM z10 EC Instruction Set Architecture

ƒ Continues line of upward-compatible mainframe processors
– Application compatibility since 1964
– Supports all z/Architecture-compliant OSes

[Architecture timeline, 1964 to 2000s]
– S/360 (1964) – 24-bit addressing
– S/370™ (1970s) – virtual addressing
– 370/XA (1980s) – 31-bit addressing
– 370/ESA and ESA/390 (1990s) – Binary Floating Point, Sysplex
– z/Architecture (2000s) – 64-bit addressing


z10 EC Architecture

ƒ Continues line of upward-compatible mainframe processors
ƒ Rich CISC Instruction Set Architecture (ISA)
– 894 instructions (668 implemented entirely in hardware)
– 24-, 31-, and 64-bit addressing modes
– Multiple address spaces for robust inter-process security
– Multiple arithmetic formats
– Industry-leading virtualization support
• High-performance logical partitioning via PR/SM™
• Fine-grained virtualization via z/VM scales to 1000s of images
– Precise, model-independent definition of the hardware/software interface
ƒ Architectural extensions for IBM z10 EC
– 50+ instructions added to improve compiled code efficiency
– Enablement for software/hardware cache optimization
– Support for 1 MB page frames (see the sketch after this list)
– Full hardware support for the Hardware Decimal Floating-point Unit (HDFU)
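The 1 MB page frame support listed above is surfaced to applications through the operating system. As a hedged sketch of the Linux side only: on Linux on System z, large pages appear as huge pages and can be requested with MAP_HUGETLB, provided the administrator has reserved them (for example via /proc/sys/vm/nr_hugepages). The 1 MB huge page size and that sysctl path are assumptions about the Linux environment, not statements from this chart; z/OS exploits large frames through its own interfaces.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

/* Hedged sketch: ask Linux for huge-page-backed anonymous memory.
 * On Linux on System z the huge page size is 1 MB, matching the large
 * page frames described above; huge pages must have been reserved by
 * the administrator or the mmap() call fails. */
int main(void)
{
    size_t len = 16UL * 1024 * 1024;    /* 16 MB, backed by 1 MB frames */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");    /* likely: no huge pages reserved */
        return 1;
    }
    printf("mapped %zu bytes backed by large pages at %p\n", len, p);
    munmap(p, len);
    return 0;
}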


z10 EC Chip Relationship to POWER6™

ƒ New Enterprise Quad Core z10 EC processor chip
ƒ Siblings, not identical twins
ƒ Share lots of DNA
– IBM 65 nm Silicon-On-Insulator (SOI) technology
– Design building blocks: latches, SRAMs, regfiles, dataflow elements
– Large portions of the Fixed Point Unit (FXU), Binary Floating-point Unit (BFU), Hardware Decimal Floating-point Unit (HDFU), Memory Controller (MC), I/O Bus Controller (GX)
– Core pipeline design style: high-frequency, low-latency, mostly in order
– Many System z and System p® designers and engineers working together
ƒ Different personalities
– Very different Instruction Set Architectures (ISAs), therefore very different cores
– Cache hierarchy and coherency model
– SMP topology and protocol
– Chip organization
– IBM z10 EC chip optimized for Enterprise Data Serving Hub
[Photos: Enterprise Quad Core z10 processor chip; Dual Core POWER6 processor chip]


IBM System z: System Design Comparison – Balanced System (CPU, nWay, Memory, I/O Bandwidth)

                        zSeries 900   zSeries 990   z9 EC          z10 EC
System I/O bandwidth    24 GB/sec     96 GB/sec     172.8 GB/sec*  288 GB/sec*
Memory                  64 GB         256 GB        512 GB         1.5 TB**
ITR for 1-way           300           450           ~600           ~920
Processors              16-way        32-way        54-way         64-way

* Servers exploit a subset of its designed I/O capability
** Up to 1 TB per LPAR

Scalability: System structures optimized for data

The key problem of current systems: memory access does not scale with CPU cycle time!

[Figure: memory hierarchy and relative access latencies]
– L1 data and instruction caches (... KB per CPU) – 1x
– Level 2 caches (... MB) – 10..20x
– Memory (... GB) – 150..200x
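The relative latencies in the figure are what make cache design such a large part of "balanced system design". The sketch below turns them into an average memory access time; the 15x/175x midpoints and the hit rates are assumed values chosen only to illustrate the effect, not measured System z numbers.

#include <stdio.h>

/* Hedged sketch: average memory access time (AMAT) using the relative
 * latencies from the figure above (L1 = 1x, L2 = 10..20x,
 * memory = 150..200x). The hit rates are hypothetical. */
int main(void)
{
    const double l1 = 1.0, l2 = 15.0, mem = 175.0;  /* relative latencies */
    const double h1 = 0.95, h2 = 0.80;              /* assumed hit rates  */

    double amat = h1 * l1
                + (1.0 - h1) * (h2 * l2 + (1.0 - h2) * mem);
    printf("average access cost: ~%.1f cycles (vs. 1 cycle on an L1 hit)\n",
           amat);
    return 0;
}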


z10 EC System Structure: z10 EC Single Book

[Figure: one book]
ƒ Main storage (up to 384 GB)
ƒ Level 2 cache (48 MB), shared by all PUs in the book
ƒ Up to 17 PUs, each with its own private L1.5 cache
ƒ 8 IFB (Gal2) interfaces, each 2 x 6 GB/s (96 GB/s aggregate per book)


z10 EC Overview
ƒ Machine Type – 2097
ƒ 5 Models – E12, E26, E40, E56 and E64
ƒ Processor Units (PUs)
– 17 PU cores per book (17 and 20 per book for Model E64)
– Up to 11 SAPs per system, standard
– 2 spares designated per system
– Dependent on the H/W model, up to 12, 26, 40, 56 or 64 PU cores available for characterization
• Central Processors (CPs), Integrated Facility for Linux (IFLs), Internal Coupling Facilities (ICFs), System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), optional additional System Assist Processors (SAPs)
ƒ Memory
– System minimum of 16 GB
– Up to 384 GB per book
– Up to 1.5 TB per system and up to 1 TB per LPAR
• Fixed HSA, standard
• 16/32/48/64 GB increments
ƒ I/O
– Up to 48 I/O interconnects per system @ 6 GBps each (see the arithmetic check below)
– Up to 4 Logical Channel Subsystems (LCSSs)
ƒ ETR feature, standard
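A small cross-check of two numbers that appear in this deck (the inference that they describe the same quantity is mine): the 288 GB/sec system I/O bandwidth on the earlier design-comparison chart matches 48 host-bus interconnects at 6 GBps each, as listed in this overview.

#include <stdio.h>

/* Arithmetic check only: 48 I/O interconnects x 6 GBps each. */
int main(void)
{
    const int interconnects = 48;
    const double gbps_each = 6.0;
    printf("aggregate host-bus bandwidth: %.0f GB/sec\n",
           interconnects * gbps_each);              /* 288 GB/sec */
    return 0;
}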


z10 EC – Under the covers (Model E56 or E64)

[Figure: front view with labels]
– Processor books: memory, MBA and HCA cards
– Internal batteries (optional)
– Power supplies
– Ethernet cables for the internal system LAN connecting the Flexible Service Processor (FSP) cage controller cards
– 2 x Support Elements
– InfiniBand I/O interconnects
– 3 x I/O cages
– 2 x cooling units


z10 EC Multi-Chip Module (MCM)
ƒ 96 mm x 96 mm MCM
– 103 glass ceramic layers
– 7 chip sites
– 7356 LGA connections
– 17- and 20-way MCMs
ƒ CMOS 11s chip technology – PU, SC, S chips, 65 nm
– 5 PU chips/MCM, each with up to 4 cores
• One memory controller (MC) per PU chip
• 21.97 mm x 21.17 mm
• 994 million transistors/PU chip
• L1 cache/PU core: 64 KB I-cache, 128 KB D-cache
• L1.5 cache/PU core: 3 MB
• 4.4 GHz, 0.23 ns cycle time
• 6 km of wire
– 2 Storage Control (SC) chips
• 21.11 mm x 21.71 mm
• 1.6 billion transistors/chip
• L2 cache: 24 MB per SC chip (48 MB/book)
• L2 access to/from other MCMs
• 3 km of wire
– 4 SEEPROM (S) chips
• 2 active and 2 redundant
• Product data for MCM, chips and other engineering information
– Clock functions – distributed across PU and SC chips
• Master Time-of-Day (TOD) and 9037 (ETR) functions are on the SC
[Figure: MCM layout – PU 0 to PU 4, SC 0/SC 1, S 0 to S 3]

z10 EC – Enterprise Quad Core z10 PU Chip

ƒ Up to four cores per PU chip
– 4.4 GHz
– L1 cache/PU core: 64 KB I-cache, 128 KB D-cache
– 3 MB L1.5 cache/PU core
– Each core with its own Hardware Decimal Floating Point Unit (HDFU)
ƒ Two Co-processors (COP)
– Accelerator engines for data compression and cryptographic functions
– Includes 16 KB cache
– Each shared by two cores
ƒ L2 cache interface
– Shared by all four cores
– Even/odd line (256 B) split
ƒ I/O Bus Controller (GX)
– Interface to fanout
– Compatible with System z9 MBA
ƒ Memory Controller (MC)
– Interface to controller on memory DIMMs
[Chip diagram: four cores (L1 + L1.5 + HDFU), two COPs, MC, GX, L2 interfaces]


z10 EC Additional Details for the PU Core
ƒ Each core is a superscalar processor with these characteristics:
– The basic cycle time is approximately 230 picoseconds
– Up to two instructions may be decoded per cycle
– Maximum is two operations/cycle for execution as well as for decoding
– Memory accesses might not be in the same instruction order
– Most instructions flow through a pipeline with different numbers of steps for various types of instructions. Several instructions may be in progress at any instant, subject to the maximum number of decodes and completions per cycle
– Each PU core has an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data
– Each PU core also has a 3 MB L1.5 cache. Each L1 cache has a Translation Lookaside Buffer (TLB) of 512 entries associated with it


z10 Compression and Cryptography Accelerator

ƒ Data compression engine
– Static dictionary compression and expansion
– Dictionary size up to 64 KB (8K entries)
• Local 16 KB caches for dictionary data
ƒ CP Assist for Cryptographic Function (CPACF) – a usage sketch follows this chart
– DES (DEA, TDEA2, TDEA3)
– SHA-1 (160 bit)
– SHA-2 (224, 256, 384, 512 bit)
– AES (128, 192, 256 bit)
– PRNG
ƒ Accelerator unit shared by two cores
– Independent compression engines
– Shared cryptography engines
[Figure: accelerator shared by Core 0 and Core 1 – compression/expansion engines with 16 KB dictionary caches, shared crypto cipher and hash engines, 2nd-level cache interface]
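Application code rarely drives CPACF directly; on Linux on System z it is typically reached through in-kernel crypto and standard libraries. As a hedged illustration only (OpenSSL and its EVP API are an assumption of the example, not something this chart prescribes), the following computes a SHA-256 digest; whether the library routes it to the CPACF instructions depends on the OpenSSL build and engine configuration, and the calling code is the same either way.

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

/* Hedged sketch (OpenSSL 1.1+ assumed): hash a buffer with SHA-256
 * through the EVP API. On Linux on System z the library may exploit
 * the CPACF SHA-256 instruction underneath; the application code does
 * not change either way. */
int main(void)
{
    const char *msg = "System z10 CPACF example";
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
    EVP_DigestUpdate(ctx, msg, strlen(msg));
    EVP_DigestFinal_ex(ctx, digest, &len);
    EVP_MD_CTX_free(ctx);

    for (unsigned int i = 0; i < len; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}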


z10 Hardware Decimal Floating Point Unit
ƒ Decimal arithmetic widely used in commercial and financial applications
– Computations often handled in software
– Avoids rounding and other problems with binary/decimal conversions
ƒ On IBM System z9, delivered in millicode – brought improved precision and function
ƒ On IBM System z10, integrated on every core – giving a performance boost to execution of decimal arithmetic
ƒ Growing industry support for hardware decimal floating point standardization
– Open standard definition led by IBM, endorsed by key ISVs including Microsoft and SAP
– Java BigDecimal, C#, XML, C/C++, GCC, DB2 V9, Enterprise PL/I, Assembler
ƒ z/OS V1.9 Hardware Decimal Floating Point support requires:
– High Level Assembler (z/OS V1.8)
– Enterprise PL/I
– XL C/C++ with PTF
– Debug Tool (in support of C/C++, PL/I, and HLASM)
– dbx (in support of C/C++)
– DB2 9 for z/OS (allows you to define DFP data in DB2)

Bringing high performance computing benefits to commercial workloads. [Figure: HDFU within a single PU core]
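The compiler and language support listed above is what makes the HDFU reachable from ordinary application code. A minimal sketch of the idea follows; _Decimal64 and the DD literal suffix come from ISO/IEC TR 24732, which GCC and XL C/C++ can support on System z, but the exact compiler options (for example -mhard-dfp) and runtime enablement are assumptions to check against your compiler documentation rather than statements from this chart.

#include <stdio.h>

/* Hedged sketch of why decimal floating point matters commercially:
 * 0.10 is not exactly representable in binary FP but is exact in
 * decimal FP. With a DFP-enabled GCC or XL C/C++ on System z, the
 * _Decimal64 operations can map to HDFU instructions (assumption:
 * appropriate compiler options such as -mhard-dfp). */
int main(void)
{
    double     b = 0.0;
    _Decimal64 d = 0.0DD;

    for (int i = 0; i < 1000; i++) {
        b += 0.10;      /* binary double: accumulates rounding error  */
        d += 0.10DD;    /* decimal64: each step and the sum are exact */
    }

    /* Printing decimal types portably is awkward; convert for display. */
    printf("binary double: %.17g\n", b);          /* slightly off 100 */
    printf("decimal64    : %.17g\n", (double)d);  /* exactly 100      */
    return 0;
}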

z10 EC SC Hub Chip

ƒ Connects multiple z10 PU chips
– 48 GB/sec bandwidth per processor
ƒ Shared Level 2 cache
– 24 MB SRAM cache
– Extended directory
• Partial-inclusive discipline
– Hub chips can be paired
• 48 MB shared cache
ƒ Low-latency SMP coherence fabric
– Robust SMP scaling
– Strongly-ordered architecture
ƒ Multiple hub chips/pairs allow further SMP scaling


z10 EC Processor/Memory/HCA and Book

[Figure: front view of a book – PU chip and SC chip schematics, HCA2-O, HCA2-C and MBA fanouts, FSP cards]
ƒ Up to 8 hot-pluggable HCA fanout cards
ƒ Plugging rules apply and are dependent on the model
Note: ICB-4 links use MBA fanouts; the HCA2-O LR fanout is not shown


z10 EC Book Layout

[Figure: book layout, front and rear views – memory and MCM, fanout cards, DCA power supplies, cooling connections from/to the MRU]


z10 EC Book Layout – Under the covers

[Figure: book with MCM, memory, FSP cards, DCA power supplies, MRU connections, and fanouts – HCA2-O (InfiniBand), HCA2-C (I/O cages), MBA (ICB-4)]
Note: The chart shows an example of how and where different fanouts are installed. The quantities installed will depend on the actual I/O configuration. The HCA2-O LR fanout is not shown.


20 PU MCM Structure

[Figure: five PU chips, each with 4 PU cores, 4 x 3 MB L1.5 caches, a COP and MC/GX; four memory interfaces and four 2-GX attachments; two SC chips with 24 MB L2 each; three off-book interconnects]


z10 EC – Inter-Book Communications – Model E64

ƒ The z10 EC books are fully interconnected in a point-to-point topology, as shown in the diagram
ƒ Data transfers are direct between books via the Level 2 cache chips in each MCM
ƒ Level 2 cache is shared by all PU chips on the MCM
[Diagram: 77-way CEC – first book 17-way; second, third and fourth books 20-way each]


z10 EC – Inter-Book and I/O Communications – Models E56/E64

[Figure: four processor books (each with L2 and memory) connected through HCA2-C fanouts and 12x IB-DDR links at 6 GB/sec to the I/O card domains (Mux 0-7, I/O cage slots, domains 0-6)]
ƒ HCA2-C for I/O domains
ƒ HCA2-O for PSIFB coupling links
ƒ HCA2-O LR for extended distance PSIFB coupling links
ƒ MBA for ICB-4
ƒ Each HCA2-C and HCA2-O has 2 ports
ƒ IFB connectivity is balanced across all installed books
ƒ HCA2-C and HCA2-O support 6 GB/sec

I/O Subsystem host bus interconnect speeds in GBps

[Chart]
– z900/z800 (200x) – STI – 1 GBps
– z990/z890 (2003) – STI – 2 GBps
– z9 (2005) – STI – 2.7 GBps
– z10 (2008) – InfiniBand I/O bus – 6 GBps

STI: Self-Timed Interconnect


Connectivity for Coupling and I/O

ƒ Up to 8 fanout cards per book
– Up to 16 ports per book
• 48-port system maximum
ƒ Fanout cards – InfiniBand pairs dedicated to function
• HCA2-C fanout – I/O interconnect
– Supports all I/O, OSA, ISC-3 and Crypto Express2 cards in I/O cage domains
– Up to 16 CHPIDs across 2 ports
• HCA2-O fanout – InfiniBand coupling links
– New CHPID type – CIB for coupling
– Fiber optic external coupling link – 150 m
– Up to 16 CHPIDs across 2 ports
• HCA2-O LR fanout – InfiniBand coupling links – Long Range
– New CHPID type – CIB for coupling
– Fiber optic external coupling link – 10 km (unrepeated)
– 2 CHPIDs – 1 per port
• MBA fanout (not available on Model E64) – ICB-4
– New connector and cables


z10 EC FICON Express4
ƒ FICON enhancements
– High Performance FICON for System z (zHPF) for FICON Express4
– Improved performance at extended distance for FICON Express4 (and FICON Express2) features
ƒ 1, 2, 4 Gbps auto-negotiated
ƒ Up to 336 channels
ƒ LX 10 km, LX 4 km and SX features
– A 10 km LX transceiver is designed to interoperate with a 4 km LX transceiver
ƒ Concurrent repair of optics
ƒ Personalize as:
– FC
• Native FICON and Channel-To-Channel (CTC) – z/OS, z/VM, z/VSE, z/TPF, TPF, Linux on System z
– FCP (Fibre Channel Protocol)
• Support of SCSI devices – z/VM, z/VSE, Linux on System z
ƒ Features: FC 3321 FICON Express4 10KM LX, FC 3324 FICON Express4 4KM LX, FC 3322 FICON Express4 SX


z10 OSA-Express3

ƒ Double density of ports compared to OSA-Express2
– Reduced CHPIDs to manage
– Reduced I/O slots
– Reduced I/O cages or I/O drawers
– Up to 96 LAN ports versus 48
ƒ Designed to reduce the minimum round-trip networking time between z10 BC and z10 EC systems (reduced latency)
– Designed to improve round trip at the TCP/IP application layer
• OSA-Express3 10 GbE – 45% improvement compared to the OSA-Express2 10 GbE
• OSA-Express3 GbE – 45% improvement compared to the OSA-Express2 GbE
– Designed to improve throughput (mixed inbound/outbound)
• OSA-Express3 10 GbE
– 1.0 GB/sec @ 1492 MTU
– 1.1 GB/sec @ 8992 MTU
– 3-4 times the throughput of OSA-Express2 10 GbE
– 0.90 of Ethernet line speed sending outbound 1506-byte frames
– 1.25 of Ethernet line speed sending outbound 4048-byte frames

The above statements are based on OSA-Express3 performance measurements performed in a test environment on a System z10 EC and do not represent actual field measurements. Results may vary.

z10 OSA-Express3 – 10 GbE
ƒ 10 Gigabit Ethernet LR (Long Reach) and SR (Short Reach)
– One port per PCI-E adaptor, two ports per feature
– Small form factor connector (LC Duplex)
• LR = single mode 9 micron fiber
• SR = multimode 50 or 62.5 micron fiber
– Two CHPIDs, one port each
• Type OSD (QDIO TCP/IP and Layer 2)
ƒ New microprocessor and hardware data router
– Large send packet construction, inspection and routing performed in hardware instead of firmware
– Large send for IPv4 traffic
– Checksum offload
– Concurrent LIC update
– Designed to improve performance for standard (1492 byte) and jumbo frames (8992 byte)
ƒ Up to 45% reduction in latency compared to OSA-Express2 10 GbE
[Figure: 10 GbE LR feature (2 ports, LC Duplex SM) and 10 GbE SR feature (2 ports, LC Duplex MM)]


z10 EC OSA-Express3 GbE – 4-port feature
ƒ Gigabit Ethernet LX and SX
– Four ports per feature
– Two ports* per PCI-E adaptor/CHPID
• OS PTF required to use the 2nd port
– CHPIDs support
• OSD (QDIO TCP/IP and Layer 2)
• OSN (OSA-Express for NCP)
– Small form factor connector (LC Duplex)
ƒ New microprocessor and hardware data router
– Large send packet construction, inspection and routing performed in hardware instead of firmware
– Large send for IPv4 traffic
– Checksum offload
– Concurrent LIC update
ƒ Up to 45% reduction in latency compared to OSA-Express2 GbE
[Figure: 4 LX ports (LC Duplex SM) or 4 SX ports (LC Duplex MM), two PCI-E adaptors per feature]

* NOTE: To use 2 ports per PCI-E adaptor, the following is required: z/OS V1.9+, z/VM V5.2+, z/VSE V4.1+, z/TPF 1.1 PUT 4 with APARs. If this support isn't installed, only port zero on a PCI-E adaptor is 'visible' to the operating system.

z10 Cryptographic Support
ƒ CP Assist for Cryptographic Function (CPACF)
– Standard on every CP and IFL
– Supports the following algorithms:
• DES, TDES, AES-128, AES-192, AES-256
• SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512
• Pseudo Random Number Generation (PRNG)
• SHA-1, SHA-256, and SHA-512 are shipped enabled
ƒ Crypto Express2
– Two features – 1 coprocessor (z10 BC only) and 2 coprocessors (minimum of 2 features)
– Two configuration modes
• Coprocessor (default) – Federal Information Processing Standard (FIPS) 140-2 Level 4 certified
• Accelerator (configured from the HMC)
– Three configuration options (two for the 1-coprocessor option); default set to Coprocessor
– Up to 4096-bit RSA keys
– Random Number Generation Long (8 to 8096 bits)
– Concurrent patch
– Secure Key AES (128, 192 and 256 bit) support
– Support for 13- through 19-digit Personal Account Numbers
ƒ Dynamic Add Crypto to LPAR
– No recycling of LPAR
– No POR required
[Figure: Crypto Express2 feature – each coprocessor configurable as Coprocessor or Accelerator]


Just in time capacity gives you control
ƒ Permanent and temporary offerings – with you in charge
– Permanent offerings – Capacity Upgrade on Demand (CUoD), Customer Initiated Upgrade (CIU)
– Temporary offerings include On/Off Capacity on Demand (On/Off CoD), Capacity Backup Upgrade (CBU) and a new one – Capacity for Planned Event (CPE)
ƒ No customer interaction with IBM at time of activation
– Broader customer ability to order temporary capacity
ƒ Multiple offerings can be in use simultaneously
– All offerings on Resource Link
– Each offering independently managed and priced
ƒ Flexible offerings may be used to solve multiple situations
– Configurations based on real time circumstances
– Ability to dynamically move to any other entitled configuration
ƒ Offerings can be reconfigured or replenished dynamically
– Modification possible even if offering is currently active
– Some permanent upgrades permitted while temporary offerings are active
ƒ Policy based automation capabilities
– Using Capacity Provisioning Manager with z/OS 1.9
– Using scheduled operations via HMC


z10 CoD Offerings

ƒ On-line Permanent Upgrade
– Permanent upgrade performed by the customer (previously referred to as Customer Initiated Upgrade – CIU)
ƒ Capacity Backup (CBU)
– For disaster recovery
– Concurrently add CPs, IFLs, ICFs, zAAPs, zIIPs, SAPs
– Pre-paid
ƒ Capacity for Planned Event (CPE)
– To replace capacity lost within the enterprise for a short term due to a planned event such as a facility upgrade or system relocation
– Predefined capacity for a fixed period of time (three days)
– Pre-paid
ƒ On/Off Capacity on Demand (On/Off CoD)
– Production capacity
– Supported through software offering – Capacity Provisioning Manager (CPM)
– Payment:
• Post-paid or pre-paid by purchase of capacity tokens
• Post-paid with unlimited capacity usage
• On/Off CoD records and capacity tokens configured on Resource Link
ƒ Customer Initiated Upgrade (CIU)
– Process/tool for ordering temporary and permanent upgrades via Resource Link
– Permanent upgrade support:
• Un-assignment of currently active capacity
• Reactivation of unassigned capacity
• Purchase of all PU types physically available but not already characterized
• Purchase of installed but not owned memory

Protecting with IBM’s world-class Business Resiliency solutions

ƒ Preplanning capabilities to avoid future planned outages, e.g. dynamic LPAR allocation without a system outage and plan-ahead memory
ƒ 100 available capacity settings
ƒ Integrated enterprise-level resiliency for heterogeneous data center disaster recovery management
ƒ Policy-driven flexibility to add capacity and backup processors
ƒ Basic HyperSwap improves storage availability*
ƒ Integrated cryptographic accelerator
ƒ Tamper-resistant Crypto Express2 feature with enhanced secure key AES support and capability for increased Personal Account Numbers
ƒ Audit logging on new Trusted Key Entry (TKE) 5.3 with optional Smart Card Reader
ƒ System z – the only platform that is EAL5 certified

* All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Tracking energy consumption within the infrastructure
ƒ Resource Link provides tools to estimate server energy requirements before you purchase a new system or an upgrade
ƒ Energy efficiency monitoring tool
– Introduced on the IBM System z9 platform in April 2007
– Power and thermal information displayed via the System Activity Display (SAD)
ƒ IBM Systems Director Active Energy Manager™ (AEM) for Linux on System z V3.1
– Offers a single view of actual energy usage across multiple heterogeneous IBM platforms within the infrastructure
– AEM V3.1 energy management data can be exploited by Tivoli enterprise solutions such as IBM Tivoli Monitoring, IBM Tivoli Usage and Accounting Manager, and IBM Tivoli OMEGAMON® XE on z/OS
– AEM V3.1 is a key component of IBM's Cool Blue™ portfolio within Project Big Green

z10 EC 64-way offers a 15% improvement in performance per kWh over z9 EC 54-way

Consolidation with Linux gets a “green light”

System z servers may help customers become more energy efficient: ƒ Deploy energy efficient technologies – reduce energy consumption and save floor space

Economics of IFLs and z/VM help to drive down the cost of IT ƒ IFLs attractively priced, have no impact on z/OS license fees, and z/VM and Linux software priced at real engine capacity ƒ ‘No charge’ MES upgrades available when upgrading to new technology

Over 2,450 Linux applications are supported on System z, 15% growth in 2008

Integrated Facility for Linux – IFL

IBM consolidates distributed servers and achieves savings

Expected result at IBM:
ƒ A streamlined environment with significantly less hardware
– 3,900 distributed server images consolidated onto 15-20 System z10 systems
– Substantial increase in average utilization
ƒ Lower personnel costs through virtualization
ƒ Substantial reduction in software spending
ƒ 85% less data center floor space required through consolidated servers
– Enables further growth
– Better utilization of the facilities
ƒ 80% less energy consumption
ƒ Ability to deploy more applications on System z
[Chart: IBM Global Account (IGA) IT costs, 3,900 distributed server workloads, 5-year IT cost study – $250M+ savings over 5 years; cumulative cost of the distributed environment vs. cumulative cost of Linux on z9, showing substantially lower IT operating costs]


System z and Cell Broadband Engine – The Vision
A 'marriage' of two technologies that perfectly complement each other

[Figure: System z today networked with Cell blades (QS20, QS21, QS2x); System z tomorrow with integrated and/or networked next-generation Cell – preserving the same programming model between networked and integrated]
Target areas: Aerospace and Defense, Financial Services Sector, Chemicals and Petroleum, Digital Video Surveillance, Digital Media, Information Based Medicine, Electronic Design Automation


Exploiting System z for New Workloads

ƒ Financial analytics – proof of concept (POC)
[Figure: Excel client driving a financial portfolio workload – task creation, data serving and workload management on System z (z/OS and zLinux); European options pricing via Monte Carlo simulation, a compute-intensive mathematical model, running on a Cell blade system]


The right level of business continuity protection for your business … the GDPS family of offerings

ƒ Continuous availability of data within a data center
– Single data center; applications remain active
– Near-continuous availability to data; no data loss
– GDPS/PPRC HyperSwap Manager
ƒ Continuous availability / disaster recovery within a metropolitan region
– Two data centers; systems remain active
– Automated D/R across site or storage failure; no data loss
– GDPS®/PPRC, GDPS/PPRC HyperSwap Manager
ƒ Disaster recovery at extended distance
– Two data centers
– Automated disaster recovery; "seconds" of data loss
– GDPS/GM, GDPS/XRC
ƒ Continuous availability regionally and disaster recovery at extended distance
– Three data centers (A, B, C)
– Data availability; extended distances
– GDPS/MGM, GDPS/MzGM


Operating systems
ƒ z/OS
– Providing intelligent dispatching on z10 EC for performance
– Up to 64-way support
– Simplified capacity provisioning on z10 EC
– New high availability disk solution with simplified management
– Enabling extreme storage volume scaling
– Facilitating new zIIP exploitation
ƒ z/TPF
– Support for 64+ processors
– Workload charge pricing on System z
– Exploit encryption technology
ƒ z/VSE™
– Interoperability with Linux
– Exploit encryption technology
– MWLC pricing with sub-capacity option
ƒ z/VM
– Consolidation of many virtual images in a single LPAR
– Enhanced management functions for virtual images
– Larger workloads with more scalability
ƒ Linux on System z
– Large page support improves performance
– Linux CPU node affinity is designed to avoid cache pollution
– Software support for extended CP Assist instructions AES & SHA


System z – The Ultimate Virtualization Resource

[Figure: virtualization stack – native LPARs and z/VM-hosted Linux and z/OS images running core business applications (WebSphere®, Java™, CICS®, IMS, DB2®, business objects, test images), connected by HiperSockets™ virtual networking and switching on top of Processor Resource/Systems Manager™ (PR/SM™), running over the physical CPs, IFLs and memory]

ƒ Massive, robust consolidation platform; virtualization is built in, not added on
ƒ Up to 60 logical partitions on PR/SM; 100s to 1000s of virtual servers on z/VM
ƒ Virtual networking for memory-speed communication, as well as virtual layer 2 and layer 3 networks supported by z/VM
ƒ Most sophisticated and complete hypervisor function available
ƒ Intelligent and autonomic management of diverse workloads and system resources based on business policies and workload performance objectives


IBM System z10 Business Class

Hardware Overview

© 2009 IBM Corporation

Businesses are realizing the value of the mainframe within their IT infrastructure – and their business

ƒ A highly utilized, virtualized, scalable, optimized resource for consolidating workloads to help lower overall operating cost and improve energy efficiency

ƒ A highly secure enterprise data server – when your infrastructure is secure, your business is secure

ƒ A backbone for an enterprise SOA hub to enable integration of applications and processes, and add flexibility – when your infrastructure is flexible, your business is flexible

ƒ Integrates new workloads including Linux®, Java™ and Open Standards with demonstrably lower Total Cost of Ownership (TCO)

Today’s Mainframe: The Smart choice for an optimized, scalable, secure, resilient infrastructure


IBM z10 BC Continues the CMOS Mainframe Heritage

[Chart: processor clock frequency by generation]
– 1997 Multiprise® 2000 – 139 MHz
– 1999 Multiprise 3000 – 413 MHz
– 2002 z800 – 625 MHz
– 2004 z890 – 1.0 GHz
– 2006 z9 BC – 1.4 GHz
– 2008 z10 BC – 3.5 GHz

ƒ Multiprise 2000 – 1st full-custom mid-range CMOS S/390
ƒ Multiprise 3000 – Internal disk, IFL introduced on midrange
ƒ IBM eServer zSeries 800 (z800) – Full 64-bit z/Architecture®
ƒ IBM eServer zSeries 890 (z890) – Superscalar CISC pipeline
ƒ z9 BC – System level scaling
ƒ z10 BC – Architectural extensions, higher frequency CPU

z10 BC Overview
ƒ Machine Type – 2098
ƒ Single Model – E10
– Single frame, air cooled
– Non-raised floor option available
ƒ Processor Units (PUs)
– 12 PU cores per system
– 2 SAPs, standard
– Zero spares when all PUs are characterized
– Up to 10 PUs available for characterization
• Central Processors (CPs), Integrated Facility for Linux (IFLs), Internal Coupling Facilities (ICFs), System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), optional additional System Assist Processors (SAPs)
ƒ Memory
– System minimum of 4 GB
– Up to 128 GB for system, including HSA (up to 256 GB, June 30, 2009)
• 8 GB fixed HSA, standard
• Up to 120 GB for customer use (up to 248 GB, June 30, 2009)
• 4, 8 and 32 GB increments (32 GB increment, June 30, 2009)
ƒ I/O
– Up to 12 I/O interconnects per system @ 6 GBps each
– 2 Logical Channel Subsystems (LCSSs)
– Fiber Quick Connect for ESCON and FICON LX
– New OSA-Express3 features
– ETR feature, standard


z10 BC – Under the covers (front view)

[Figure labels]
– Internal Battery (optional)
– Power supplies
– CPC drawer (SCMs, memory, MBA, HCA and FSP)
– 2 x Support Elements
– 4 x I/O drawers (I/O Drawer #1 to #4)
– Fiber Quick Connect (FQC) feature (optional – not shown)


z10 BC CPC and Memory Drawer Layout

[Figure labels: 2 air moving devices, 32 x DIMM slots (not shown), 3 DCAs]

ƒ Two new types of Single Chip Modules (SCMs)
– Processor – PU (4 SCMs x 3 cores = 12 PUs)
– System Controller – SC (2)
ƒ 2 Flexible Support Processor (FSP) card slots providing support for the Service Network subsystem (hot swappable)
ƒ 6 fanout card slots providing support for the I/O subsystem and/or coupling
ƒ 2 card slots for the oscillator/ETR function (standard) – dynamic switchover support


z10 BC PU/SC SCM Components

[Figure: SCM components – PU chip, SC chip, LGA and heatsink; assembled SCM with heatsink; CPC drawer with 4 PU SCMs and 2 SC SCMs]

z10 BC SCM vs. z10 EC MCM Comparison

ƒ z10 BC SCMs
– PU SCM: 50 mm x 50 mm in size, fully assembled
• Quad core chip with 3 active cores
• 4 PU SCMs per system, for a total of 12 cores
• PU chip size 21.97 mm x 21.17 mm
– SC SCM: 61 mm x 61 mm in size, fully assembled
• 2 SC SCMs per system
• 24 MB L2 cache per chip
• SC chip size 21.11 mm x 21.71 mm
ƒ z10 EC MCM
– 96 mm x 96 mm in size
– 5 PU chips per MCM
• Quad core chips with 3 or 4 active cores
• PU chip size 21.97 mm x 21.17 mm
– 2 SC chips per MCM
• 24 MB L2 cache per chip
• SC chip size 21.11 mm x 21.71 mm
– Up to 4 MCMs per system
[Photos: single PU chip and single SC chip without heatsink; z10 EC MCM layout – PU 0 to PU 4, SC 0/SC 1, S 0 to S 3]


z10 BC CPC Drawer Components

[Figure labels]
– Up to 32 DIMMs
– 4 x PU and 2 x SC pluggable SCMs (PU chip, SC chips, Land Grid Array (LGA) socket, indium foil)
– 2 x OSC/ETR cards
– DCA power
– I/O hub fanout slots


z10 BC Oscillator/ETR Cards

ƒ Oscillator/ETR cards (2-in-1 function), located in the front of the CPC drawer
– Quantity 2
ƒ Mother card – OSC function
ƒ Daughter card – ETR function/interface
– MT-RJ connector (port) for Sysplex Timer®
– BNC connector for Pulse Per Second (PPS)


z10 BC Sub-capacity Processor Granularity

ƒ The z10 BC has 26 CP capacity levels (26 x 5 = 130 capacity settings; see the arithmetic below)
– Up to 5 CPs at any capacity level
• All CPs must be the same capacity level
ƒ The one-for-one entitlement to purchase one zAAP and/or one zIIP for each CP purchased is the same for CPs of any speed
– All specialty engines run at full speed
– Processor Unit Value for IFL = 120

Number of z10 BC CPs   Base        Ratio z9 BC to z10 BC
1 CP                   z9 BC Z01   1.40
2 CPs                  z9 BC Z02   1.36
3 CPs                  z9 BC Z03   1.30
4 CPs                  z9 BC Z04   1.28
5 CPs                  z9 BC Z04   1.54

[Chart: capacity matrix – sub-capacity levels A through Z across 1-way to 5-way; full-size 1-way ~673 MIPS, full-size 5-way ~2760 MIPS; specialty engines always run at full capacity]
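Two small derivations from the numbers on this chart (the "MP effect" reading of the second one is my inference, not a statement on the chart): 26 capacity levels times up to 5 CPs gives the 130 capacity settings, and comparing the full-capacity 5-way (~2760 MIPS) with five times the full-capacity 1-way (~673 MIPS) shows the multiprocessing scaling implied by those two figures.

#include <stdio.h>

/* Arithmetic on the figures shown on this chart only. */
int main(void)
{
    const int levels = 26, max_cps = 5;
    const double one_way = 673.0, five_way = 2760.0;

    printf("capacity settings: %d\n", levels * max_cps);       /* 130   */
    printf("5-way vs. 5 x 1-way scaling: %.2f\n",
           five_way / (max_cps * one_way));                     /* ~0.82 */
    return 0;
}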

z10 BC Upgrade Paths

[Diagram: upgrades from z890 (A04) and z9 BC (R07, S07) to z10 BC (E10), and from z10 BC to z10 EC (E12); On/Off CoD within the z10 BC]
ƒ Can enable dynamic and flexible capacity growth for mainframe servers
ƒ Temporary capacity upgrade available through On/Off Capacity on Demand
ƒ Temporary, nondisruptive addition of CP processors, IFLs, ICFs, zAAPs or zIIPs
ƒ New options for reconfiguring specialty engines if the business demands it
ƒ New options for changing On/Off CoD configurations
ƒ Full upgrades within the z10 BC
ƒ Any to any upgrade from the z9 BC
ƒ Any to any upgrade from the z890
ƒ Subcapacity CBU engines
ƒ No charge MES upgrades on IFLs, zAAPs and zIIPs


z10 BC System Power
ƒ z10 BC maximum configuration calculated AC input power (statistical maximum)
– All systems should draw less power than this
– Typical systems will draw less power than this

                        1 I/O Drawer   2 I/O Drawers   3 I/O Drawers   4 I/O Drawers
Normal room (<28°C)     3.686 kW       4.542 kW        5.308 kW        6.253 kW
Warm room (>=28°C)      4.339 kW       5.315 kW        6.291 kW        7.266 kW

ƒ 30 Amp plug capacity (208 VAC)
– 5.5 kW single phase or unbalanced 3 phase
• Supports up to 2 I/O drawers
– 8.9 kW balanced 3 phase
• Supports all system configurations
• Requires the balanced 3 phase feature, which plugs 2 additional BPRs per side

Always refer to the z10 BC IMPP (GC28-6875) for detailed planning information

Any questions?

Thank you!

