
GSE z/OS Expertforum Switzerland #68, 1.-2.4.2008

The Future runs on System z and IBM System Storage
Silvio's Corner: z/News, Hints and Tips

Silvio Sasso, IBM Switzerland, ITS Service Delivery for z/OS, [email protected]

Zürich | 1. April 2008 | © 2008 IBM Corporation

Objectives

The objective of this news session is to provide you with up-to-date and last-minute technical information, hints and tips related to IBM mainframe hardware and software, such as IBM System z9, zSeries, z/Architecture and z/OS.

This corner allows me to present you...

• Hardware news: IBM hardware announcements, new features and options
• Software news: z/OS and Parallel Sysplex update
• Various resources containing additional documentation (e.g. product-specific information or other interesting Web sites)
• System programmer goodies, tools, hints and tips
• Useful technical news and flashes
• Recommended readings: new Redbooks, white papers etc.
• Tips for education, workshops and conferences
• and much more...

Note: the information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis without any warranty either expressed or implied. The use of this information or the implementation of any of the techniques or hints and tips described is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Trademarks

The following are trademarks of the International Business Machines Corporation in the United States and/or other countries: CICS*, DB2*, DB2 Universal Database, DirMaint, ESCON*, FICON*, GDPS*, HiperSockets, HyperSwap, IBM*, IBM eServer, IBM logo*, IMS, NetView*, OMEGAMON*, On Demand Business logo, Parallel Sysplex*, RACF*, S/390*, System z9, Tivoli*, TotalStorage*, VSE/ESA*, VTAM*, WebSphere*, z/Architecture*, z/OS*, z/VM*, z/VSE, zSeries*.

* Registered trademarks of IBM Corporation

The following are trademarks or registered trademarks of other companies.

Java and all Java-related trademarks and logos are trademarks of Sun Microsystems, Inc., in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Microsoft is a registered trademark of Microsoft Corporation in the United States and other countries.

* All other products may be trademarks or registered trademarks of their respective companies.

Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.


Agenda

General Information
• IBM Academic Initiative System z
• The Institute for Data Center Professionals (IDCP)
• IBM Professional Certification Program
• Redbooks Wiki
• System z and z/OS related Education in Zürich

Hardware News
• System z10 EC Functional Overview

Software News
• z/OS Support Summary
• A Spotlight on selected z/OS 1.9 Enhancements
• z/OS 1.10 Preview
• GDPS 3.5 Enhancements

APARs of Interest (HIPERs and Red Alerts etc.)

Agenda

Parallel Sysplex Update
• RMF and XCF Delay Reporting

Hints and Tips, System Programmer "Goodies", Tools and Resources
• Uncaptured CPU Time and the Capture Ratio
• Sysplex Aggregation Verification
• VSAM RLS Lock Contention Avoidance
• Static System Symbol Updates using the IEASYMUP Utility

Redbooks and Redpapers

Doc Juke Box


GSE z/OS Expertforum Switzerland # 68, 1.-2.4.2008


Silvio Sasso's Corner, GSE z/OS Expertforum CH #68, 1.-2.4.2008, Thun 3 IBM Academic Initiative System z

Teaching Enterprise Systems to a new Generation

• The IBM Academic Initiative System z™ program provides faculty around the world with a broad range of resources and support to help educate students in enterprise system technologies

• Who can join?
  – Faculty members at accredited learning institutions all over the globe can join
  – Membership is available on an individual faculty basis
  – There is no limit on the number of faculty members from an institution that can join
  – High school teachers are also welcome to join

• Where to join?
  – You can apply for membership in the Academic Initiative program by accessing the IBM Academic Initiative Web site at ibm.com/university/academicinitiative
  – Click on "Join now."
  – During the signup process (which takes about ten minutes to complete), select "System z" in response to these questions:
    • "Please select the IBM software you currently or plan to use in your courses or research"
    • "Please specify the IBM servers you currently or plan to use in your courses or research"


IBM Academic Initiative System z

Teaching Enterprise Systems to a new Generation…

• What does it cost?
  – Your only "cost" to join is the time it takes you to fill out the registration forms and get approved
  – After that, the Academic Initiative offerings are available to you at no charge
  – This includes the ability to download all the available IBM technology and courseware, remote access to mainframe systems, participation in technical webcasts, electronic delivery of our newsletter and much more
• Why join?
  – Members get access to a wider range of assets, increased placement opportunities for students and eligibility for additional discounts and faculty awards, all while building collaborative partnerships with IBM and other institutions
• What are the goals of the IBM Academic Initiative System z program?
  – The value the mainframe delivers is legendary and compelling
  – We need to ensure we are building the next generation of mainframe skills to help more companies and organizations leverage the superior security, availability, scalability and other operational advantages of the mainframe
  – Our goal is to:
    • Encourage and assist schools worldwide to teach large systems thinking, enterprise computing and IBM mainframes
    • Connect clients, business partners, software vendors and the entire mainframe community with schools worldwide

IBM Academic Initiative System z

Teaching Enterprise Systems to a new Generation…

• What educational resources are provided?
  – Courses, textbooks, instructor guides, e-learning modules, hands-on lab exercises and tests
  – New enterprise computing courses are continually being developed and existing ones updated in the IBM Academic Initiative online course repository
  – Members of the IBM Academic Initiative can teach these courses "as is," modify them or use them as source material in their own college or university courses
  – Assistance with adjuncts, guest lectures and research
  – Education for professors (workshops, seminars, Webcasts, podcasts)
  – Mastery tests
  – Relationships with industry clients

• How can faculty and students gain access to a mainframe system and software?
  – A significant benefit for members of the System z program of the IBM Academic Initiative is the no-charge access to an IBM System z mainframe system (the "Knowledge Center" system) running z/OS® for educational and research use, allowing professors and students to explore and learn in a real mainframe environment
  – After enrolling in the Academic Initiative, faculty members can request the desired number of mainframe IDs by sending an email to [email protected]
  – A system programmer and an administrator are also available to assist you in your use of the Knowledge Center


IBM Academic Initiative System z

Teaching Enterprise Systems to a new Generation…

• Can students access the IBM Academic Initiative program?
  – Yes, the Academic Initiative program provides resources for students too
  – A perfect resource, the Student Portal (ibm.com/education/students) provides students with information regarding job opportunities (including the Student Opportunity System), the latest in research and IBM products, and news about upcoming contests, events, Webcasts, demos, tools and code

• Why teach enterprise systems?
  – Many businesses rely on IBM's leading-edge mainframe technology that's unmatched in reliability, availability and flexibility
  – IBM clients that rely on IBM mainframes are always on the lookout for students who learn concepts and technology in enterprise systems
  – A large percentage of the current generation of mainframe experts are eligible to retire
  – The job market in the mainframe community continues to grow, and the demand for students by top companies is growing
  – The mainframe or enterprise computing community is the largest of its type in the world
  – See "Careers in Mainframes: an interactive module" at ibm.com/university/systemz for more information
• What schools are already participating in the Academic Initiative System z program?
  – Curious to see what schools are teaching enterprise system technologies? See: ibm.com/university/systemz/

IBM Academic Initiative System z

Bringing new People into the Mainframe Skill Base – IBM Academic Initiative, System z

Educators join the Initiative at: ibm.com/university/systemz

(Diagram: members download course materials (25 courses), request mainframe access (6 hubs for remote access) and attend faculty education (professor seminars, education coupons); 438 schools participate.)


IBM Academic Initiative System z

IBM Academic Initiative for System z

• Academic Initiative by the numbers:
  – Participation: 438 schools registered, over 47,000 students attended mainframe education
  – Courses: 25 courses (plus more under development) & Mastery Exam certification
  – zCommunity: roundtable events with clients / schools / ISVs / Business Partners
  – Resources: access to mainframes worldwide for teaching (6 university hubs)
  – Student MF contests: 9 contests with 8,180 students, 1,136 schools... more planned worldwide
  – IBM zSkills ([email protected]) + over 300 IBM mainframe ambassadors
  – Assist professors: education seminars, faculty awards, education coupons
• Web site: www.ibm.com/university/systemz

The Institute for Data Center Professionals

The Institute for Data Center Professionals

• Marist College and IBM have jointly developed the On-Demand with Enterprise Systems Certificate Program
  – This program contains a series of online training modules that will help you increase your knowledge of the System z™ platform through experience-based learning
  – The program is available through the Marist IDCP (Institute for Data Center Professionals) at www.idcp.org
  – It is a cohort program that began on September 10, 2007
  – The certificate program is a three-tiered offering of training modules that will enable industry professionals to earn three System z™ IDCP certificates as outlined below*
  – The first two years of the program are the same for all participants
  – In the third year, students will choose a specific track of study: application development or systems administration
  – In addition to IDCP certificates, the first course (Introduction to the z/OS Operating System and Components) will prepare students to take the worldwide IBM entry-level System z system programmer mastery exam, which is now available at www.ibm.com/certify under Mastery Tests
• Are you an experienced user of System z looking to enter directly at the Year 2: Professional level?
  – IDCP is now offering a qualifying certificate exam for students interested in directly entering the Year 2: Professional Level courses


The Institute for Data Center Professionals

The Institute for Data Center Professionals…

– The qualifying exam requires advance registration and a brief written description of your System z experience
– Participants who qualify, based on the results of the exam and their prior System z experience, will be granted admission to the Year 2 courses
– Download the registration form or contact Angelo Corridori at [email protected] to learn more

IBM Professional Certification Program

IBM Professional Certification Program

• What you can get
  – The IBM Professional Certification is both a journey and a destination
  – It's a business solution: a way for skilled IT professionals to demonstrate their expertise to the world
  – It validates your skills and demonstrates your proficiency in the latest IBM technology and solutions
  – The certification requirements are tough, but it's not rocket science, either
  – It's a rigorous process that differentiates you from everyone else
  – The mission of IBM Professional Certification is to:
    • Provide a reliable, valid and fair method of assessing skills and knowledge
    • Provide IBM a method of building and validating the skills of individuals and organizations
    • Develop a loyal community of highly skilled certified professionals who recommend, sell, service, support and/or use IBM products and solutions
• For detailed information refer to the following Web site: http://www-03.ibm.com/certify/index.shtml


IBM Professional Certification Program

IBM Professional Certification Program…

This documentation will be provided on the GSE z/OS Expertforum CH homepage for download as part of Silvio's Corner Doc Jukebox

Click on this link for an overview of the IBM certification tests currently available

IBM Professional Certification Homepage: http://www-03.ibm.com/certify/index.shtml

Redbooks Wiki

Redbooks Wiki Pilot

• New "Discuss this book" Feature! • On March 10, ITSO launched the new Discuss this book feature on the IBM Redbooks Web site • This feature allows all readers of IBM Redbooks publications, after registering with IBM, to post comments and discuss various aspects of a specific book with each other Future direction* • This is an open, unmoderated, public channel allowing readers to participate in technical discussions regarding a particular IBM Redbooks publication Extension of • The goal is to have the IBM Redbooks user community as a whole engage in dialogs that will capabilities* help focus and improve our publications and, ultimately, our products Today’s • Through theseCapabilities dialogs, the community will be able to engage in conversations about a book's content, offer helpful information about the topic, and share relevant experiences • Note that these dialogs are not directed to IBM official support channels, and no official support is implied


Redbooks Wiki

Redbooks Wiki Pilot

• The usual Terms of Use for all IBM Web pages are also in effect for feedback comments made using the Discuss this book feature
• This new feedback channel is available for all new Redbooks and Redpapers, as well as those published in the past year
• It is being made available as a public service to our customers and partners

Tell your colleagues and bookmark the home page: http://www.ibm.com/redbooks/redwiki

System z and z/OS related Education in Zürich

IBM z/OS Diagnostics and Analysis

Objectives
This course describes problem diagnosis fundamentals and analysis methodologies for the z/OS system. It provides guidelines for the collection of relevant diagnostic data, tips for analyzing the data, and techniques to assist in identifying and resolving Language Environment, CICS, CICSPlex/SM, MQSeries, VTAM, and DB2 problems. Also described are some diagnostic procedures that are not purely z/OS, but that are related to the various platforms (UNIX and Windows) where IBM software executes and interacts with z/OS in a client/server or distributed framework topology.

We will show you how to:
- Adopt a systematic and thorough approach to dealing with problems
- Identify the different types of problems
- Determine where to look for diagnostic information and how to obtain it
- Interpret and analyze the diagnostic data collected
- Escalate problems to the IBM Support Center when necessary

Diagnostic data collection and analysis is a dynamic and complex process. This redbook shows you how to identify and document problems, collect and analyze pertinent diagnostic data and obtain help as needed, to speed you on your way to problem resolution.


System z and z/OS related Education in Zürich

IBM z/OS Diagnostics and Analysis…

Agenda
- Introduction
- z/OS problem diagnosis fundamentals
- What version/release am I running?
- Fundamental sources of diagnostic data
- Common problem types
- MVS messages and codes
- SYS1.PARMLIB diagnostic parameters
- Cancelling tasks and taking dumps
- z/Architecture and addressing
- z/OS trace facilities
- Interactive Problem Control System (IPCS)
- CICS problem diagnosis
- z/OS Language Environment
- CICSPlex SM diagnostic procedures
- DB2 problem diagnosis
- IMS diagnostic data collection
- VTAM diagnostic procedures
- TCP/IP component and packet trace
- CICS Transaction Gateway on z/OS
- WebSphere MQSeries z/OS diagnostic procedures
- WebSphere Business Integration Message Broker on z/OS
- WebSphere Application Server for z/OS
- Distributed platform problem determination

• Course Number: ITS763CH
• 2 days, 6.-7.10.2008, Zürich

For more information see: http://www.redbooks.ibm.com/workshops/GR6782?Open

System z and z/OS related Education in Zürich

Cross Site Data Sharing

Objectives
Cross-site data sharing is still a relatively new technology. As a result, there is limited documentation to help customers identify the aspects that must be taken into consideration. In particular, there is very little information about how and why distance will impact business applications. This workshop will provide participants with an understanding of WHY distance can impact your application performance and system utilization, and what attributes of an application's behaviour cause it to be more or less impacted by the distance. We will step through a sample measurement run, discussing the work that needs to be done to set up a repeatable test, and identifying the metrics that need to be collected to gain an insight into the performance impact. We will also discuss the considerations for setting up a configuration that can be used to predict the impact of the planned distance, but with the minimal possible disruption and cost. While the focus will be on the performance impact of distance, we will also briefly describe the availability considerations for such a configuration.

Description
There is a huge level of interest in implementing multi-site sysplex data sharing. However, the precise performance impact of spanning a sysplex over a number of kilometers is difficult, if not impossible, to accurately predict. This workshop will discuss the findings of a recent ITSO residency that investigated the impact of various distances on CICS/DB2, IMS, and CICS/VSAM RLS workloads. Information will be presented on the tools and metrics that were used to understand how transaction response time, batch job elapsed time, and overall system utilization were affected. We will also discuss the various aspects of application behaviour and how they contribute to the results that will be encountered. This workshop is particularly well suited to a customer that is at the start of a project to assess the viability of multi-site data sharing for their business.


System z and z/OS related Education in Zürich

Cross Site Data Sharing…

Target Audience
System programmers and hardware planners from companies interested in implementing multi-site data sharing

Prerequisites
Attendees are expected to be familiar with sysplex concepts, disaster recovery planning, and disk mirroring

• Course Number: ITS746CH
• 2 days, 3Q/2008, Zürich

For more information see: http://www.redbooks.ibm.com/workshops/GR6698?Open

GSE z/OS Expertforum Switzerland #68, 1.-2.4.2008

The Future runs on System z ...

... and IBM System Storage


IBM System z10 EC Functional Overview

Introducing the IBM System z10™ Enterprise Class (z10™ EC) … A Marriage of Evolution and Revolution

Evolution
– Scalability and virtualization to reduce cost and complexity
– Improved efficiency to further reduce energy consumption
– Improved security and resiliency to reduce risk
– New heights in storage scalability and data protection

Revolution
– 4.4 GHz chip to deliver improved performance for CPU-intensive workloads
– 'Just in time' deployment of capacity resources
– Vision to expand System z capabilities with Cell Broadband Engine™ technology

IBM System z10 EC Functional Overview

Designed for improved Server Performance and Scalability with faster and more Processors and improved Dispatching Synergy

• The z10 EC can deliver, on average, up to 50% more performance in an n-way configuration than an IBM System z9® Enterprise Class (z9™ EC) n-way
  – The uniprocessor can deliver up to 62% more performance than a z9 EC uniprocessor*
• The z10 EC 64-way can deliver up to 70% more server capacity than the largest z9 EC**
• Introducing HiperDispatch for improved synergy with the z/OS® operating system to help deliver scalability and performance

(Chart: customer engines vs. capacity for z900, z990, z9 EC and the z10 EC with its 4.4 GHz processor chip, Hardware Decimal Floating Point, IFLs, Crypto, zIIPs and zAAPs - significant capacity for traditional growth and consolidation.)

* LSPR mixed workload average running z/OS 1.8, z10 EC 701 versus z9 EC 701
** Comparison of the z10 EC 64-way and the z9 EC S54, based on LSPR mixed workload average running z/OS 1.8. All performance information was determined in a controlled environment.


IBM System z10 EC Functional Overview

IBM z10 EC continues the CMOS Mainframe Heritage

(Chart: processor frequency over time - 1997 G4 300 MHz, 1998 G5 420 MHz, 1999 G6 550 MHz, 2000 z900 770 MHz, 2003 z990 1.2 GHz, 2005 z9 EC 1.7 GHz, 2008 z10 EC 4.4 GHz.)

 G4 - 1st full-custom CMOS S/390®
 G5 - IEEE-standard BFP; branch target prediction
 G6 - Cu BEOL
 IBM eServer™ zSeries® 900 (z900) - Full 64-bit z/Architecture®
 IBM eServer zSeries 990 (z990) - Superscalar CISC pipeline
 z9 EC - System level scaling
 z10 EC - Architectural extensions

IBM System z10 EC Functional Overview

Making high Performance a Reality

• New Enterprise Quad Core z10 processor chip
  – 4.4 GHz - additional throughput means improved price/performance
  – Cache-rich environment optimized for data serving
  – 50+ instructions added to improve compiled code efficiency
  – Support for 1 MB page frames
• Hardware accelerators on the chip
  – Hardware data compression
  – Cryptographic functions
  – Hardware Decimal Floating Point
• CPU-intensive workloads get performance improvements from the new core pipeline design

(Figure: Enterprise Quad Core z10 processor chip.)


IBM System z10 EC Functional Overview

IBM System z Family

(Figure: IBM System z family roadmap, positioning today's capabilities, extensions of capabilities* and future direction*.)

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

IBM System z10 EC Functional Overview

Do GHz matter?

• GHz does matter
  – It is the "rising tide that lifts all boats"
  – It is especially important for CPU-intensive applications
• GHz is not the only dimension that matters
  – System z focus is on balanced system design across many factors: frequency, pipeline efficiency, energy efficiency, cache/memory design, I/O design
• System performance is not linear with frequency
  – Need to use LSPR + System z capacity planning tools for real client / workload sizing
• System z has been on a consistent path while others have oscillated between extremes
  – Growing frequency steadily, with occasional jumps/step functions (G4 in 1997, z10 in 2008)
• z10 leverages technology to get the most out of high-frequency design
  – Low-latency pipeline
  – Dense packaging (MCM) allows MRU cooling, which yields more power-efficient operation
  – Virtualization technology allows consistent performance at high utilization, which makes CPU power-efficiency a much smaller part of the system/data-center power consumption picture


IBM System z10 EC Functional Overview

IBM z10 EC Instruction Set Architecture

• Continues line of upward-compatible mainframe processors
  – Application compatibility since 1964
  – Supports all z/Architecture-compliant OSes

(Figure: architecture timeline from 1964 to the 2000s - S/360 (24-bit addressing), S/370 (virtual addressing), 370/XA (31-bit addressing), 370/ESA, ESA/390 (Binary Floating Point, Sysplex™), z/Architecture (64-bit addressing).)

IBM System z10 EC Functional Overview

z10 EC Architecture

• Continues line of upward-compatible mainframe processors
• Rich CISC Instruction Set Architecture (ISA)
  – 894 instructions (668 implemented entirely in hardware)
  – 24-, 31-, and 64-bit addressing modes
  – Multiple address spaces, robust inter-process security
  – Multiple arithmetic formats
  – Industry-leading virtualization support
    • High-performance logical partitioning via PR/SM
    • Fine-grained virtualization via z/VM scales to 1000s of images
  – Precise, model-independent definition of the hardware/software interface
• Architectural extensions for IBM z10 EC
  – 50+ instructions added to improve compiled code efficiency
  – Enablement for software/hardware cache optimization
  – Support for 1 MB page frames
  – Full hardware support for decimal floating point (Hardware Decimal Floating-point Unit, HDFU)


IBM System z10 EC Functional Overview

z10 EC Chip Relationship to POWER6™

• New Enterprise Quad Core z10 EC processor chip
• Siblings, not identical twins
• Share lots of DNA
  – IBM 65 nm Silicon-On-Insulator (SOI) technology
  – Design building blocks: latches, SRAMs, regfiles, dataflow elements
  – Large portions of Fixed Point Unit (FXU), Binary Floating-point Unit (BFU), Hardware Decimal Floating-point Unit (HDFU), Memory Controller (MC), I/O Bus Controller (GX)
  – Core pipeline design style: high-frequency, low-latency, mostly in-order
  – Many System z and System p designers and engineers working together
• Different personalities
  – Very different Instruction Set Architectures (ISAs), very different cores
  – Cache hierarchy and coherency model
  – SMP topology and protocol
  – Chip organization
  – IBM z10 EC chip optimized as an enterprise data serving hub

(Figures: Enterprise Quad Core z10 processor chip; Dual Core POWER6 processor chip.)

IBM System z10 EC Functional Overview

Industry’s Approach to Integrated Systems Performance

 Frequency will no longer be the dominant driver of system-level performance
 Scale-out and small SMPs will continue to outpace scale-up growth
 Systems will increasingly rely on modular components for continued performance leadership
 Systems will be designed with the ability to dynamically manage and optimize power
 Integration over the entire stack, from semiconductor technology to end-user applications, will replace scaling as the major driver of increased system performance

(Figure: integration stack from application software (languages, middleware, tuning, efficient programming) through system level (dynamic optimization, assist threads, morphing support, fast computation migration, power optimization, compiler support) and chip level (compiler support, morphing, multiple cores, SMT, accelerators, power shifting, interconnect circuits) down to technology (packaging, efficient cooling, dense SRAM, eDRAM, optics, memory).)


IBM System z10 EC Functional Overview

IBM System z: System Design Comparison
Balanced system: CPU, n-way, memory, I/O bandwidth*

(Chart, summarized as a table:)

                        zSeries 900    zSeries 990    z9 EC           z10 EC
System I/O bandwidth    24 GB/sec      96 GB/sec      172.8 GB/sec*   288 GB/sec*
Memory                  64 GB          256 GB         512 GB          1.5 TB**
ITR for 1-way           300            450            ~600            ~920
Processors              16-way         32-way         54-way          64-way

* Servers exploit a subset of their designed I/O capability
** Up to 1 TB per LPAR

IBM System z10 EC Functional Overview

z10 EC Overview

• Machine type
  – 2097
• 5 models
  – E12, E26, E40, E56 and E64
• Processor Units (PUs)
  – 17 (17 and 20 for Model E64) PU cores per book
  – Up to 11 SAPs per system, standard
  – 2 spares designated per system
  – Dependent on the H/W model, up to 12, 26, 40, 56 or 64 PU cores available for characterization
    • Central Processors (CPs), Integrated Facilities for Linux (IFLs), Internal Coupling Facilities (ICFs), System z10 Application Assist Processors (zAAPs), System z10 Integrated Information Processors (zIIPs), optional additional System Assist Processors (SAPs)
• Memory
  – System minimum of 16 GB
  – Up to 384 GB per book
  – Up to 1.5 TB per system and up to 1 TB per LPAR
  – Fixed HSA, standard
  – 16/32/48/64 GB increments
• I/O
  – Up to 48 I/O interconnects per system @ 6 GBps each
  – Up to 4 Logical Channel Subsystems (LCSSs)
• ETR feature, standard


IBM System z10 EC Functional Overview

z10 EC New Functions and Features (February 2008)

• Five hardware models
• Faster uniprocessor
• Up to 64 customer PUs
• 36 CP subcapacity settings
• Star book interconnect
• Up to 1.5 TB memory
• Fixed HSA as standard
• Large Page Support (1 MB)
• HiperDispatch
• Enhanced CPACF with SHA-512, AES 192- and 256-bit keys
• Hardware Decimal Floating Point
• FICON LX Fiber Quick Connect
• 6.0 GBps InfiniBand HCA to I/O interconnect
• FCP performance improvement
• SCSI IPL included in base LIC
• OSA-Express3 10 GbE (2Q08)*
• HiperSockets Multi Write Facility enhancements
• InfiniBand Coupling Links (2Q08)*
• STP using InfiniBand (2Q08)*
• Capacity Provisioning support
• Scheduled outage reduction
• Improved RAS
• New Capacity on Demand architecture and enhancements
• Power Monitoring support
• SOD: InfiniBand Coupling Links for z9 EC & BC for non-dedicated CF models*
• No support for Japanese Compatibility Mode (JCM)
• No support for MVS™ Assist instructions

* All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM. All statements regarding IBM's future direction and intent represent goals and objectives only.

IBM System z10 EC Functional Overview

z10 EC – Under the Covers (Model E56 or E64)

(Figure: z10 EC under the covers, Model E56 or E64, with callouts for internal batteries (optional), power supplies, 2 x Support Elements, processor books with memory, MBA and HCA cards, Ethernet cables for the internal system LAN connecting the Flexible Service Processor (FSP) cage controller cards, InfiniBand I/O interconnects, 3 x I/O cages, 2 x cooling units, and the optional Fiber Quick Connect (FQC) feature for FICON & ESCON.)


IBM System z10 EC Functional Overview

System z10 EC PU Core Characterization

• The types of Processor Unit (PU) cores that can be ordered on z10 EC:
  – Central Processor (CP)
    • Provides processing capacity for z/Architecture and ESA/390 instruction sets
    • Runs z/OS, z/VM, z/VSE, TPF4, z/TPF, Linux for System z and Linux under z/VM, or Coupling Facility
  – IBM System z10 Application Assist Processor (zAAP)
    • Under z/OS, the Java Virtual Machine (JVM) directs eligible Java processing to a zAAP
    • z/OS XML System Services*
    • z/VM 5.3 support is provided for z/OS guest exploitation
  – IBM System z10 Integrated Information Processor (zIIP)
    • Provides processing capacity for selected workloads, e.g. DB2 for z/OS V8 workloads executing in SRB mode, IPSec encryption, z/OS XML System Services* and z/OS Global Mirror* (formerly Extended Remote Copy, XRC)
    • z/VM 5.3 support is provided for z/OS guest exploitation
  – Integrated Facility for Linux (IFL)
    • Provides additional processing capacity for Linux workloads
  – Internal Coupling Facility (ICF)
    • Provides additional processing capacity for the execution of the Coupling Facility Control Code (CFCC) in a CF LPAR
  – Optional System Assist Processors (SAPs)
    • A SAP manages the start and ending of I/O operations for all logical partitions and all attached I/O

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

IBM System z10 EC Functional Overview

z10 EC Multi-Chip Module (MCM)

• 96 mm x 96 mm MCM
  – 103 glass ceramic layers
  – 7 chip sites
  – 7356 LGA connections
  – 17- and 20-way MCMs
• CMOS 11s chip technology
  – PU, SC and S chips, 65 nm
  – 5 PU chips/MCM, each with up to 4 cores
    • One memory control (MC) per PU chip
    • 21.97 mm x 21.17 mm
    • 994 million transistors/PU chip
    • L1 cache/PU core: 64 KB I-cache, 128 KB D-cache
    • L1.5 cache/PU core: 3 MB
    • 4.4 GHz, 0.23 ns cycle time
    • 6 km of wire
  – 2 Storage Control (SC) chips
    • 21.11 mm x 21.71 mm
    • 1.6 billion transistors/chip
    • L2 cache of 24 MB per SC chip (48 MB/book)
    • L2 access to/from other MCMs
    • 3 km of wire
  – 4 SEEPROM (S) chips
    • 2 x active and 2 x redundant
    • Product data for MCM, chips and other engineering information
  – Clock functions, distributed across PU and SC chips
    • Master Time-of-Day (TOD) and 9037 (ETR) functions are on the SC

(Figure: MCM layout with PU 0-4, SC 0-1 and S 0-3 chip sites.)


IBM System z10 EC Functional Overview

z10 EC – Enterprise Quad Core z10 PU Chip

• Up to four cores per PU chip
  – 4.4 GHz
  – L1 cache/PU core: 64 KB I-cache, 128 KB D-cache
  – 3 MB L1.5 cache/PU core
  – Each core with its own Hardware Decimal Floating Point Unit (HDFU)
• Two co-processors (COP)
  – Accelerator engines: data compression, cryptographic functions
  – Include a 16 KB cache
  – Shared by two cores
• L2 cache interface
  – Shared by all four cores
  – Even/odd line (256 B) split
• I/O Bus Controller (GX)
  – Interface to Host Channel Adapter (HCA)
  – Compatible with System z9 MBA
• Memory Controller (MC)
  – Interface to controller on memory DIMMs

(Figure: chip layout with four cores (L1 + L1.5, HDFU), two COPs, MC, GX and L2 interfaces.)

IBM System z10 EC Functional Overview

z10 EC Additional Details for PU Core

• Each core is a superscalar processor with these characteristics:
  – The basic cycle time is approximately 230 picoseconds
  – Up to two instructions may be decoded per cycle
  – Maximum is two operations/cycle for execution as well as for decoding
  – Memory accesses might not be in the same instruction order
  – Most instructions flow through a pipeline with different numbers of steps for various types of instructions; several instructions may be in progress at any instant, subject to the maximum number of decodes and completions per cycle
  – Each PU core has an L1 cache divided into a 64 KB cache for instructions and a 128 KB cache for data
  – Each PU core also has an L1.5 cache; this cache is 3 MB in size. Each L1 cache has a Translation Lookaside Buffer (TLB) of 512 entries associated with it

(Figure: Enterprise Quad Core z10 processor chip and MCM layout.)


IBM System z10 EC Functional Overview

z10 EC Compression and Cryptography Accelerator

• Data compression engine
  – Static dictionary compression and expansion
  – Dictionary size up to 64 KB (8K entries)
    • Local 16 KB caches for dictionary data
• CP Assist for Cryptographic Function (CPACF)
  – DES (DEA, TDEA2, TDEA3)
  – SHA-1 (160 bit)
  – SHA-2 (224, 256, 384, 512 bit)
  – AES (128, 192, 256 bit)
  – PRNG
• Accelerator unit shared by 2 cores
  – Independent compression engines
  – Shared cryptography engines

(Figure: accelerator block diagram showing two cores, second-level cache, TLBs, compression/expansion units with 16K dictionary caches, and shared crypto cipher/hash engines.)

IBM System z10 EC Functional Overview

z10 EC Hardware Decimal Floating Point Accelerator

• Meets requirements of business and human-centric applications
  – Performance, precision, function
  – Avoids rounding and other problems with binary/decimal conversions
  – Improved numeric functionality over legacy Binary Coded Decimal (BCD) operations
  – Much of commercial computing is dominated by decimal data and decimal operations
• IBM z10 EC Hardware Decimal Floating Point Unit (HDFU) co-developed with POWER6
  – Common architecture operations and semantics
  – Common dataflow elements
  – Mainframe legacy Binary Coded Decimal (BCD) operations mapped onto the HDFU in z10 EC
• Growing industry support for DFP standardization
  – Java BigDecimal, C#, XML, XL C/C++, GCC, DB2 9, Enterprise PL/1, Assembler
  – Endorsed by key software vendors including Microsoft® and SAP
  – Open standard definition led by IBM

(Figure: single PU core highlighting the HDFU.)


IBM System z10 EC Functional Overview

z10 EC SC Hub Chip

• Connects multiple z10 PU chips
  – 48 GB/sec bandwidth per processor
• Shared Level 2 cache
  – 24 MB SRAM cache
  – Extended directory
    • Partial-inclusive discipline
  – Hub chips can be paired
    • 48 MB shared cache
• Low-latency SMP coherence fabric
  – Robust SMP scaling
  – Strongly-ordered architecture
• Multiple hub chips/pairs allow further SMP scaling

IBM System z10 EC Functional Overview

20 PU MCM Structure

(Figure: block diagram of the 20-PU MCM - five PU chips, each with 4 PU cores, 4 x 3 MB L1.5 cache, a COP and MC/GX interfaces, connected to memory and 2 GX I/O buses; two SC chips with 24 MB L2 cache each; three off-book interconnects.)


IBM System z10 EC Functional Overview

z10 EC – Inter Book Communications – Model E64

• The z10 EC books are fully interconnected in a point-to-point topology, as shown in the diagram
• Data transfers are direct between books via the Level 2 cache chip in each MCM
• Level 2 cache is shared by all PU chips on the MCM

(Figure: 77-way CEC with four fully interconnected books - first book 17-way, second, third and fourth books 20-way each.)

IBM System z10 EC Functional Overview

z10 EC Model Structure

• Model number indicates PU cores available for characterization
  – Single serial number
  – PU core characterization is identified by the number of features ordered
• 2 spares standard per server
• z10 EC capacity models
  – 700, 401 to 412, 501 to 512, 601 to 612 and 701 to 764
  – nxx, where n = the capacity level of the engine and xx = the number of PU cores characterized as CPs in the CEC (e.g. a 704 is a full-capacity 4-way, a 504 a 4-way at subcapacity level 5)
  – Once xx exceeds 12, all CP engines are full capacity
  – Specialty engines are always FULL capacity
• One machine type – 2097 – five hardware models

Models  | MCMs | Max customer PU cores | Max subcapacity CPs | Standard SAPs | Standard spares | CP/IFL/ICF/zAAP/zIIP (3) | Min-max memory (4) | Max channels
E12 (1) | 1    | 12                    | 12                  | 3             | 2               | 12                       | 16-352 GB          | 960 (2)
E26 (1) | 2    | 26                    | 12                  | 6             | 2               | 26                       | 16-752 GB          | 1024
E40 (1) | 3    | 40                    | 12                  | 9             | 2               | 40                       | 16-1136 GB         | 1024
E56 (1) | 4    | 56                    | 12                  | 10            | 2               | 56                       | 16-1520 GB         | 1024
E64 (1) | 4    | 64                    | 12                  | 11            | 2               | 64                       | 16-1520 GB         | 1024

Notes:
1. Must have a minimum of 1 CP, IFL or ICF
2. There is a max of 64 ESCON® features/960 active channels and a max of 64 FICON features/256 channels on Model E12
3. For each zAAP and/or zIIP installed there must be a corresponding CP. The CP may satisfy the requirement for both the zAAP and/or zIIP. The combined number of zAAPs and/or zIIPs cannot be more than 2x the number of general purpose processors (CPs). Maximum of 16 ICFs
4. Excludes the standard 16 GB of HSA


IBM System z10 EC Functional Overview

z10 EC Orderable Processor Features and Memory

Model | MCM size & qty  | PU cores | CPs  | IFLs (uIFLs) | zAAPs | zIIPs | ICFs | Std SAPs | Std spares | Standard memory GB* | Flexible memory GB
E12   | 1 x 17          | 17       | 0-12 | 0-12 (0-11)  | 0-6   | 0-6   | 0-12 | 3        | 2          | 16-352              | NA
E26   | 2 x 17          | 34       | 0-26 | 0-26 (0-25)  | 0-13  | 0-13  | 0-16 | 6        | 2          | 16-752              | 32-352
E40   | 3 x 17          | 51       | 0-40 | 0-40 (0-39)  | 0-20  | 0-20  | 0-16 | 9        | 2          | 16-1136             | 32-752
E56   | 4 x 17          | 68       | 0-56 | 0-56 (0-55)  | 0-28  | 0-28  | 0-16 | 10       | 2          | 16-1520             | 32-1136
E64   | 1 x 17 + 3 x 20 | 77       | 0-64 | 0-64 (0-63)  | 0-32  | 0-32  | 0-16 | 11       | 2          | 16-1520             | 32-1136

 A minimum of one CP, IFL, or ICF must be purchased on every model
 One zAAP and one zIIP may be purchased for each CP purchased
 Optional SAP numbers not shown
* Note: Fixed HSA is not included

IBM System z10 EC Functional Overview

z10 EC Concurrent PU Core Conversions

• Must order (characterize one PU core as) a CP, an ICF or an IFL
• Concurrent model upgrade (book add) is supported
  – E12 to E26 to E40 to E56
  – Upgrades to Model E64 are disruptive
• Concurrent processor upgrade is supported if PU cores are available
  – Add CP, IFL, unassigned IFL, ICF, zAAP, zIIP or optional SAP

From/To ->       CP   IFL  Unassigned IFL  ICF  zAAP  zIIP  Opt SAP
CP               x    Yes  Yes             Yes  Yes   Yes   Yes
IFL              Yes  x    Yes             Yes  Yes   Yes   Yes
Unassigned IFL   Yes  Yes  x               Yes  Yes   Yes   Yes
ICF              Yes  Yes  Yes             x    Yes   Yes   Yes
zAAP             Yes  Yes  Yes             Yes  x     Yes   Yes
zIIP             Yes  Yes  Yes             Yes  Yes   x     Yes
Optional SAP     Yes  Yes  Yes             Yes  Yes   Yes   x

Exceptions:
– Disruptive if ALL current PU cores are converted to different types
– May require individual LPAR disruption if dedicated PU cores are converted


IBM System z10 EC Functional Overview

z10 EC permanent Capacity Upgrades

• There are three means by which z10 EC processor permanent upgrades can be performed
  – All three are designed to be performed concurrently
• Concurrent conversion of previously purchased inactive CPs to active CPs
  – Done by the Customer Engineer (CE) through a LICCC-only MES ordered through eConfig, or by the customer with CIU authorization and Web Tool access (Resource Link)
  – When ordered and installed via Resource Link, a paper-only follow-up order must be placed through eConfig for billing and AAS record update
• Concurrent add of CPs from un-purchased PU cores on the book (if available)
  – Done by the CE through a LICCC-only MES ordered through eConfig, or by the customer with CIU authorization and Web Tool access (Resource Link)
  – When ordered and installed via Resource Link, a paper-only follow-up order must be placed through eConfig for billing and AAS record update
• Concurrent add of book hardware to perform a model upgrade
  – Changes the model and increases the total number of available PUs
  – Requires a CE to add additional book hardware and rebalance the HCAs

IBM System z10 EC Functional Overview

z10 EC Models and physical Upgrades

M/T 2097 model | Total PU cores | First book         | Second book        | Third book         | Fourth book
E12            | 17             | 12 / 3 / 2 / 17    | -                  | -                  | -
E26            | 34             | 13 / 3 / 1 / 17    | 13 / 3 / 1 / 17    | -                  | -
E40            | 51             | 13 / 3 / 1 / 17    | 13 / 3 / 1 / 17    | 14 / 3 / 0 / 17    | -
E56            | 68             | 14 / 3 / 0 / 17    | 14 / 3 / 0 / 17    | 14 / 3 / 0 / 17    | 14 / 1 / 2 / 17
E64            | 77             | 16 / 1 / 0 / 17    | 16 / 3 / 1 / 20    | 16 / 3 / 1 / 20    | 16 / 4 / 0 / 20

(Each book entry lists customer PU cores / SAPs / spares / MCM size.)

Zürich | 26. Oktober 2004 Silvio Sasso's Corner, GSE z/OS 51 The Future runsExpertforum on System CH z and #68, IBM 1.-2.4.2008, System StorageThun

IBM System z10 EC Functional Overview

z10 EC – HSA Considerations

• HSA of 16 GB provided as standard
• The HSA has been designed to eliminate planning for HSA. Preplanning for HSA expansion for configurations will be eliminated, as HCD/IOCP will, via the IOCDS process, always reserve:
  – 4 CSSs
  – 15 LPs in each CSS (total of 60 LPs)
  – Subchannel set 0 with 63.75k devices in each CSS
  – Subchannel set 1 with 64k devices in each CSS
  – All of the above are designed to be activated and used with dynamic I/O changes

IBM System z10 EC Functional Overview

z10 EC – Memory Definition/Usage per LPAR

• The maximum real memory available for a z10 EC Model E56 or E64 server is 1.5 TB
• The maximum real memory supported on a z10 in any LPAR is 1 TB
• z/OS 1.8 and higher is designed to support up to 4 TB of real memory, but "only" up to 1 TB in any LPAR on the z10 EC servers
  – The limiting factor is the amount of storage the channel cards can address
  – The limit is enforced by the HMC code for Initial + Reserved storage in the image profile


IBM System z10 EC Functional Overview

Large Page Support

• Issue: Translation Lookaside Buffer (TLB) coverage shrinking as a percentage of memory size
  – Over the past few years application memory sizes have dramatically increased due to support for 64-bit addressing in both physical and virtual memory
  – TLB sizes have remained relatively small due to low access time requirements and hardware space limitations
  – TLB coverage today represents a much smaller fraction of an application's working set size, leading to a larger number of TLB misses
  – Applications can suffer a significant performance penalty resulting from an increased number of TLB misses as well as the increased cost of each TLB miss
• Solution: Increase TLB coverage without proportionally enlarging the TLB size by using large pages
  – Large pages allow a single TLB entry to fulfill many more address translations
  – Large pages will provide exploiters with better TLB coverage
• Benefit:
  – Designed for better performance by decreasing the number of TLB misses that an application incurs

IBM System z10 EC Functional Overview

Large Page Performance Considerations
• Large page support is a special-purpose performance improvement feature
 – It is not recommended for general use
• Large page usage provides performance value to a select set of applications
 – These are primarily long-running, memory-access-intensive applications
• Not all applications benefit from using large pages
 – Some applications can be severely degraded by the use of large pages
 – Short-lived processes with small working sets are usually not good candidates for large pages
• Factors to consider when trying to either estimate the potential benefit or understand measured performance differences of using large pages instead of 4K pages include:
 – Memory usage
 – A workload's page translation overhead
• Large page exploiters
 – A future* release of DB2 will support large pages for buffer pools. Default is 4K pages
 – Java 6.0 SR1 for z/OS is planned* to support large pages – large pages can be used to back the object heap. Default is 4K pages
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
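For orientation only (an assumed example, not part of the original charts): on z/OS the real storage backing 1 MB large pages is set aside at IPL time with the LFAREA parameter of the IEASYSxx parmlib member; exploiters such as DB2 buffer pools or the Java heap then request large pages explicitly. The value below is purely illustrative:

 LFAREA=2G      (in IEASYSxx – reserve 2 GB of real storage for 1 MB pages)

Check the current MVS Initialization and Tuning Reference for the exact syntax and supported value forms before use.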


IBM System z10 EC Functional Overview

z10 EC HiperDispatch
• HiperDispatch – z10 EC unique function
 – Dispatcher Affinity (DA) – new z/OS dispatcher
 – Vertical CPU Management (VCM) – new PR/SM support
• Mitigate impact of scaling differences between processor and memory
 – Access to memory and remote caches is not scaling with processor speed
 – Increased performance sensitivity to cache misses in a multi-processor system
• Optimize performance by redispatching units of work to the same processor group
 – Keep processes running near their cached instructions and data
 – Minimize transfers of data ownership among processors / books
• Tight collaboration across the entire z10 EC hardware/firmware/OS stack
 – Concentrate logical processors around shared L2 caches
 – Communicate the effective cache topology for the partition to the OS
 – Dynamically optimize allocation of logical processors and units of work


IBM System z10 EC Functional Overview

z10 EC HiperDispatch – z/OS Dispatcher Functionality
• New z/OS dispatcher
 – Logical processors (target is 4) are assigned to a node, with consideration for book boundaries, based on PR/SM guidance
 – z/OS uses this to assign logical processors to nodes and work to those nodes
 – Periodic rebalancing of task assignments
 – Assign work to the minimum number of logicals needed to use the weight
  • Expand use of the remaining Logical Processors (LPs) to use white space
 – May require "tightening up" of WLM policies for important work
• Initialization
 – A single HIPERDISPATCH=YES z/OS IEAOPTxx parameter dynamically activates HiperDispatch (full S/W and H/W collaboration) without an IPL
  • With HIPERDISPATCH=YES, IRD management of LPs is turned OFF
 – Customer value increases as system and LPAR size increases (i.e. crosses multiple books)
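A minimal sketch of that dynamic activation (the member suffix below is an assumption for the example): add the parameter to an IEAOPTxx member and switch to it from the console – no IPL is needed.

 SYS1.PARMLIB(IEAOPTHD):  HIPERDISPATCH=YES
 Console:                 SET OPT=HD

Switching back works the same way, by activating a member that specifies HIPERDISPATCH=NO.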


IBM System z10 EC Functional Overview

z10 EC HiperDispatch – z/OS Dispatcher Functionality…
• Workload variability issues
 – Intermediate: balancing workload across multiple Affinity Nodes
 – Short term: dealing with transient utilization spikes
 – Long term: mapping z/OS workload requirements to available physical resources via dynamic expansion into Vertical Low Logical Processors (LPs)
• Addressing workload variability
 – The load balancer assigns workload across Affinity Nodes by priority to keep the work evenly distributed
 – A short-term overload is addressed by "help" from LPs assigned to another node of the same type
  • Standard LPs may help zAAP/zIIP LPs when IFAHONORPRIORITY/IIPHONORPRIORITY=YES is specified in IEAOPTxx
 – CPU used by other LPARs (white space) is monitored to dynamically address longer-term workload requirements
  • Adds / removes vertical low LPs to / from existing Affinity Nodes when both the z/OS workload warrants it and the partner LPARs allow it
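For reference, a hedged example of the IEAOPTxx settings mentioned above (values are illustrative, not a recommendation):

 HIPERDISPATCH=YES
 IFAHONORPRIORITY=YES
 IIPHONORPRIORITY=YES

With the HONORPRIORITY settings at YES, standard CPs are allowed to help zAAP- and zIIP-eligible work when the specialty engines fall behind.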


IBM System z10 EC Functional Overview

z10 EC HiperDispatch – PR/SM Functionality
• New PR/SM support
 – Topology information is exchanged with z/OS
  • z/OS uses this to construct its dispatching queues
 – Classes of logicals
  • Vertical High LPs have a 100% share – dispatched when they are ready
   – Tight tie of logical processor to physical processor
  • Vertical Medium LPs have a share less than 100% – the remainder from converting weight to physical processor equivalents
  • Vertical Low LPs generally run only to consume white space
• Firmware support (PR/SM, millicode)
 – A new z/OS-invoked instruction causes PR/SM to enter "vertical mode", assigning the vertical LP subset and the associated LP-to-physical-CP mapping (based upon LPAR weight)
 – Enables z/OS to concentrate its work on fewer logical processors
 – Key in PR/SM overcommitted environments to reduce the LP competition for physical CP resources
 – Vertical LPs are assigned High, Medium, and Low attributes
 – Vertical Low LPs shouldn't be used unless there is logical white space within the CEC and demand within the LPAR


IBM System z10 EC Functional Overview

Summary of Key Factors to consider with HiperDispatch

• Dispatcher Affinity – z/OS 1.9; will be available for 1.7 with the zIIP Web deliverable
• Must have both z/OS and z10 EC to enable HiperDispatch
• Dynamic on/off switch through a z/OS parmlib member
• Provides a varying amount of capacity gain
 – Workload sensitive
 – N-way sensitive (for N=1, the benefit is zero)
• Customers do not make major S/W and H/W changes simultaneously
 – May not turn on HiperDispatch in production for weeks/months after the H/W install
• Some high-priority workloads may need to be monitored at first
 – May need to turn off HiperDispatch for a while for specific workloads
 – May need further WLM tuning for specific workloads


IBM System z10 EC Functional Overview

z10 EC Channel Type and Crypto Overview
• FICON/FCP
 – FICON Express4
 – FICON Express2 (carry forward on upgrade)
 – FICON Express (carry forward on upgrade for FCV)
• Networking
 – OSA-Express3
  • 10 Gigabit Ethernet LR
 – OSA-Express2
  • 1000BASE-T Ethernet
  • Gigabit Ethernet LX and SX
  • 10 Gigabit Ethernet LR
 – HiperSockets (Define only)
• ESCON
• STP
• Coupling Links
 – InfiniBand Coupling Links*
 – ISC-3 (Peer mode only)
 – ICB-4 (not available on Model E64)
 – IC (Define only)
• Crypto
 – Crypto Express2
  • Configurable as Coprocessor or Accelerator
• Channel types not supported:
 – FICON (pre-FICON Express)
 – OSA-Express
 – ICB-2
 – ICB-3
 – ISC-3 links in Compatibility Mode
 – PCIXCC and PCICA
 – Parallel (use ESCON Converter)
Note: ICB-4 cables are available as features. All other cables are sourced separately.
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


IBM System z10 EC Functional Overview

SAPs, I/O Buses, Links, and I/O Connectivity

Model  Books/PU cores  Std SAPs  Opt SAPs  I/O Fanouts/Buses  Max PSIFB+ICB4 links + I/O cards             Max I/O cards + PSIFB/ICB4 links  Max FICON/ESCON CHPIDs
E12    1/17            3         0-3       8/16               16 links + 0 cards                            64 cards + 0 links                256/960
E26    2/34            6         0-7       16/32              32 links + 0 cards (ICB-4 limit 16)           84 cards + 8 links                336/1024
E40    3/51            9         0-11      20/40              32 links + 32 cards (ICB-4 limit 16)          84 cards + 16 links               336/1024
E56    4/68            10        0-18      24/48              32 links + 64 cards (ICB-4 limit 16)          84 cards + 24 links               336/1024
E64    4/77            11        0-21      24/48              32 links + 64 cards (PSIFB only, no ICB-4)    84 cards + 24 links               336/1024

Notes:
 – Only TPF may need optional SAPs for normal workload
 – Include Crypto Express2 cards in the I/O card count
 – PSIFB and ICB4 do not reside in the I/O cages
 – Plan ahead for up to 2 additional I/O cages (this assumes no PSC24V power sequence cards):
   a. 0 to 24 I/O cards – 1 cage
   b. 25 to 48 I/O cards – 2 cages
   c. 49 to 84 I/O cards – 3 cages
Limits:
 a. 4 LCSSs maximum
 b. 15 partitions maximum per LCSS, 60 maximum
 c. 256 CHPIDs maximum per LCSS, 1024 maximum


IBM System z10 EC Functional Overview

z10 EC FICON/FCP Enhancements
• Extended Distance FICON (CHPID type FC) performance enhancements
 – Enhancement to the industry-standard FICON architecture (FC-SB-3)
  • Implements a new protocol for 'persistent' Information Unit (IU) pacing that can help to optimize link utilization
  • Requires supporting control unit(s) (e.g. DS8000 at a new level)
 – Designed to improve performance at extended distance
  • May benefit z/OS Global Mirror (previously called XRC)
  • May simplify requirements for channel extension equipment
 – Transparent to operating systems
 – Applies to FICON Express4 and Express2 channels
• Enhancements for Fibre Channel Protocol (FCP) performance
 – Designed to support up to 80% more I/O operations per second for 4K block sizes with FICON Express4, compared to System z9
 – Transparent to operating systems
 – Applies to all FICON Express4 and Express2 channels (CHPID type FCP) communicating with SCSI devices


IBM System z10 EC Functional Overview

z10 EC HiperSockets Performance Enhancements
• HiperSockets Multiple Write Facility
 – Performance improvements
  • For the streaming of bulk data over a HiperSockets link between LPARs
  • Allows receiving LPARs to process a much larger amount of data per I/O interrupt
   » Transparent to software in the receiving LPARs
 – z/OS V1.10*
• HiperSockets Layer 2 support
 – Hosting of new workloads
  • Host non-IP protocols (IPX, NetBIOS, SNA)
  • Bridge from and into distributed switched fabrics
 – Supports broadcast, unicast, or multicast
 – VLANs: in Layer 2 the same rules apply as for Layer 3 VLAN handling
 – Layer 3 applications cannot communicate with Layer 2 applications
 – Linux on System z
 – z/VM 5.2 or higher – guest support

High speed connectivity between LPARs “Network in a box”

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
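For orientation, a hedged IOCP fragment showing how a HiperSockets link is defined to the I/O configuration as an internal queued direct (IQD) channel path – the CHPID, control unit, and device numbers here are illustrative assumptions; see the IOCP User's Guide for the complete operand set:

 CHPID PATH=(CSS(0),F4),SHARED,TYPE=IQD
 CNTLUNIT CUNUMBR=F400,PATH=((CSS(0),F4)),UNIT=IQD
 IODEVICE ADDRESS=(E800,016),CUNUMBR=(F400),UNIT=IQD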


IBM System z10 EC Functional Overview

OSA-Express3 – 10 GbE

• Double the port density
• Improved throughput
• 10 Gigabit Ethernet LR (Long Reach)
 – Two ports per feature
 – Small form factor connector (LC Duplex SM)
 – CHPID type OSD (QDIO)
 – Designed to improve performance for standard and jumbo frames


IBM System z10 EC Functional Overview

z10 EC Coupling Link Options

• PSIFB* – 12x IB-DDR for high-speed communication at medium distance
 – New CHPID – CIB (Coupling using InfiniBand)
 – New 50-micron OM3 (2000 MHz-km) multimode fiber with MPO connectors
 – Up to 150 m at 6 GBps
• ICB-4 for short distances over copper cabling
 – New ICB-4 cables are required
 – z10 EC to z10 EC and z10 EC to System z9/z990/z890
 – 10 meter distance remains
• ISC-3 for extended distance over fiber optic cabling
 – No change to current cabling
• Internal Coupling channels (IC)

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


IBM System z10 EC Functional Overview

System z CF Link Connectivity – Peer Mode only

Connectivity Options            z10 EC ISC-3   z10 EC ICB-4   z10 EC PSIFB*
z10 EC/z9/z990/z890 ISC-3       2 Gbps         N/A            N/A
z10 EC/z9/z990/z890 ICB-4       N/A            2 GBps         N/A
z9 Dedicated CF with ISC-3      2 Gbps         N/A            N/A
z9 Dedicated CF with ICB-4      N/A            2 GBps         N/A
z9 Dedicated CF with PSIFB*     N/A            N/A            3 GBps
z10 EC PSIFB*                   N/A            N/A            6 GBps

 N-2 Server generation connections allowed  Theoretical maximum rates shown

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


IBM System z10 EC Functional Overview

System z – Configuring CF Links

System             PSIFB** (2Q2008)   ICB-4                       ICB-3   ISC-3   IC   Max # Links
z10 EC             32*                16* (not on Model E64)      N/A     48      32   64
z9 Dedicated CF    16                 16                          16      48      32   64
Any z9             SOD**              16                          16      48      32   64
z990               N/A                16                          16      48      32   64
z890               N/A                8                           16      48      32   64

*Maximum of 32 PSIFB + ICB4 links on System z10 EC. ICB-4 not supported on Model E64

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


IBM System z10 EC Functional Overview

System z – RAS Design Improvements/Focus


IBM System z10 EC Functional Overview

z10 EC – Hardware RAS Improvements for Outage Avoidance

[Chart: Sources of outages, pre-z9, in hours per year per system – Scheduled (CIE + disruptive patches + ECs), Planned (MES + driver upgrades), Unscheduled (UIRA)]

IBM System z10 EC Functional Overview

z10 EC Enhancements designed to avoid Outages


IBM System z10 EC Functional Overview

z10 EC and z/OS Capacity Provisioning

[Diagram: Capacity Provisioning Control Center (CPCC), HMC, domain configuration(s), policies, Capacity Provisioning Manager (CPM), Common Information Model (CIM), sample data sets and files]

IBM System z10 EC Functional Overview

z10 EC Capacity Provisioning
• Unexpected workload spikes may exceed the available capacity such that Service Level Agreements cannot be met
• While the business need may not justify a permanent upgrade, it might well justify a temporary upgrade
• z10 EC provides an improved and integrated On/Off CoD and CBU concept
 – Faster activation and improved robustness
 – Can be partially activated and combined
• Value proposition
 – System z Capacity Provisioning allows customers to manage processing capacity more reliably, more easily, and faster
 – Replacing manual monitoring with autonomic management, or supporting manual operation with recommendations, can assure that sufficient processing power will be available with the least possible delay
 – Demonstrates the superior vertical scalability of System z
[Diagram: planned growth vs. actual capacity demand over time, and a z10 EC image with z/OS, Linux and z/VM LPARs showing autonomic activation of spare CPs through the Hypervisor / LIC-CC]


IBM System z10 EC Functional Overview

z10 EC Capacity Provisioning Infrastructure Overview
• WLM manages by goals
• WLM metrics are available through existing interfaces
 – One RMF gatherer per z/OS system
 – One RMF Distributed Data Server (DDS) per sysplex
 – The RMF CIM providers publish the RMF Monitor III data
• The Capacity Provisioning Manager (CPM) retrieves critical metrics through the CIM server via HTTP
• The CPM communicates with the Support Elements and HMCs via the System z API (SNMP over IP)
• The Capacity Provisioning Control Center (CPCC) is the front end used to administer Capacity Provisioning policies
[Diagram: z10 EC with SE and HMC, hypervisor with z/OS and Linux LPARs running RMF, WLM and the CIM server, the CPM with its provisioning policy, and the MVS console]


IBM System z10 EC Functional Overview

Capacity Provisioning Policy

• A policy may consist of multiple rules
 – Based on a variety of things, such as specific applications (bank transactions, for example)
• The "Maximum Provisioning Scope" defines the maximum additional capacity that may be activated at any time for all contained rules
 – Expressed in MSUs, zIIPs, zAAPs
• A "Provisioning Condition" is simply a group of Time and Workload Conditions that can be referred to
 – WLM service class conditions
 – Time Condition (start/deadline/end)
 – Workload Condition (critical workload conditions)
• The "Provisioning Scope" (within a rule) defines the maximum capacity that may be activated
 – Expressed in MSUs, zIIPs, zAAPs


IBM System z10 EC Functional Overview

Capacity Management

• CPM differentiates between different types of provisioning requests
 – Manual, through the HMC or CPM (via command)
 – Scheduled (time condition without a workload condition)
 – Conditional (based on a workload condition)
• Different capacity demands
 – Number of zAAPs
 – Number of zIIPs
 – General purpose capacity:
  • Needs to consider different capacity levels (processor speeds) for sub-capacity processors
  • Speed demand for higher capacity levels
  • Unqualified demand (capacity level or number of GCPs)


IBM System z10 EC Functional Overview

Components of Capacity Provisioning

• The Capacity Provisioning Manager (CPM)
 – is the server program that monitors the defined systems and CPCs and takes actions as appropriate and authorized by the policies
• The Capacity Provisioning Control Center (CPCC)
 – is the Graphical User Interface (GUI) component. It is the interface through which administrators work with provisioning policies and domain configurations
 – Optionally, you can use the CPCC to transfer provisioning policies and domain configuration files to the CPM, or to query the Capacity Provisioning Manager status
 – The CPCC is installed and used on a Microsoft® Windows® workstation. It is not required for regular operation of the CPM


IBM System z10 EC Functional Overview

CPM – Processing Modes

• The CPM operates in one of these 4 modes, allowing for different degrees of automation:
 – Manual mode
  • This is basically a command-driven mode where no CPM policy is active
 – Analysis mode
  • CPM processes the capacity provisioning policy and informs the operator when a provisioning / deprovisioning action would be due according to the criteria specified in the policy
  • No checking whether an On/Off CoD record is available
  • It is up to the operator either to ignore that information or to perform the up/downgrade manually (using the HMC/SE or the available CPM commands)
 – Confirmation mode
  • CPM processes the policy as well as the On/Off CoD record to be used for capacity provisioning. Every provisioning action needs to be authorized (confirmed) by the operator
 – Autonomic mode
  • Similar to the preceding mode, except that no human (operator) intervention is required
• In all modes:
 – Various reports are available with information about workload and provisioning status, and the rationale for provisioning recommendations
 – User interface through the z/OS system console and the CP Control Center application


IBM System z10 EC Functional Overview

CPM Policies and Processing Parameters

• The CPM server uses three types of input data sets that contain different types of information:
 1. The domain configuration defines the topology and connections, such as the CPCs and z/OS systems that are to be managed by the server
 2. The policy contains the information as to:
  • which work is provisioning eligible,
  • under which conditions and during which timeframes,
  • how much capacity may be activated when the work suffers due to insufficient processing capacity
 3. The PARM data set contains setup instructions such as UNIX® environment variables, and various processing options that may be set by an installation


IBM System z10 EC Functional Overview

The Capacity Provisioning Domain

• The domain configuration defines the CPCs and z/OS systems that are controlled by a CPM instance
• Sysplexes do not have to be completely contained in a domain, but must not belong to more than one domain
• Multiple sysplexes, and hence multiple WLM service definitions, may be involved
• One active Capacity Provisioning Policy (CPP) per domain at a time
 – More than one policy can exist for different purposes
[Diagram: a Capacity Provisioning Domain (CPD) spanning two CPCs and their LPARs in Sysplex 1 and Sysplex 2, with the HMC/SEs and the CPM holding the active CPP]


IBM System z10 EC Functional Overview

CPM Command Interfaces

The Capacity Provisioning Manager (CPM) can be controlled:
• Using the CP Control Center (only partially)
 – The CPCC defines configurations, policies, rules etc. and can upload them to the CPM, but nothing else; even to activate them you have to go to the MVS console and run a command
• Using the MVS console
 – Configurations, rules etc. cannot be changed from a console, so the CPCC and the console have no common commands
[Diagram: the CP Control Center and the z/OS console both reaching the CPM, the CPCC by way of the CIM server]



IBM System z10 EC Functional Overview

Workload Condition

• Identifies the work that may trigger the activation of additional capacity, when that work does not achieve its goal due to insufficient capacity and additional capacity would help
• Parameters:
 – Sysplex/Systems: the z/OS systems that may run eligible work
 – Importance Filter: eligible service class periods, identified by WLM importance
 – PI Criteria:
  • Activation threshold: the PI of service class periods must exceed the activation threshold for a specified duration before the work is considered to require help
  • Deactivation threshold: the PI of service class periods must fall below the deactivation threshold for a specified duration before the work is considered to no longer require help


IBM System z10 EC Functional Overview

Workload Condition…

• Parameters:
 – Included Service Classes: eligible service class periods
  • Extends the set of service class periods with qualified work (beyond the default set of eligible service classes) and may specify different PI criteria
 – Excluded Service Classes: identifies service class periods that should not be considered
 – If specifications exist on multiple levels, the service class periods derived from the importance filter are merged with the explicitly defined (included) service class periods; finally, the excluded service class periods (if any) are removed from the resulting set

• If no workload condition is specified the full capacity will be activated and deactivated unconditionally at the start and end times of the time condition (scheduled activation, deactivation)


IBM System z10 EC Functional Overview

Capacity Provisioning Hardware (and Workstation) Requirements
• One or more z10 EC servers
• If temporary capacity is to be controlled by the Capacity Provisioning Manager – i.e. if the manager is running in confirmation or autonomic mode, or if provisioning actions are performed through CPM commands in either mode – temporary capacity needs to be available
 – Requires the CIU enablement feature and On/Off CoD enablement, as well as a valid On/Off CoD record for temporary general purpose processor, zAAP or zIIP capacity
• Workstation requirements – the workstation for the Control Center needs:
 – An INTEL® Pentium® or equivalent processor with 512 MB memory (1 GB recommended)
 – Available disk space: 150 MB
 – Microsoft Windows XP Professional, Service Pack 2 or later
 – Screen resolution 1024x768 or higher


IBM System z10 EC Functional Overview

Capacity Provisioning Configuration Dependencies and Restrictions
• While observed systems must be running z/OS Release 9 or higher, it is allowable that other operating systems or the Coupling Facility Control Code (CFCC) are active in other LPARs
• Observed systems that are running as guests under z/VM are not supported
 – The Capacity Provisioning Manager should also run on a system that is not running as a z/VM guest
• An observed system may run in a shared or dedicated LPAR. An LPAR with dedicated processors can only generate demand for a higher general purpose processor capacity level. If the processor is not a sub-capacity processor, i.e. it is already operating at its maximum capacity level, no additional demand will be recognized
 – Also, for a dedicated LPAR, no demand for additional special purpose processors will be recognized
• Demand for additional physical processors – as opposed to an increased capacity level – for shared CP, zAAP, or zIIP processors can only be recognized if the current sum of logical processors is greater than or equal to the target number of physical processors in the respective pool
 – Rationale: Capacity Provisioning does not currently configure reserved or offline processors online


IBM System z10 EC Functional Overview

Capacity Provisioning Configuration Dependencies and Restrictions…

• Observed systems may have general purpose CPs, zAAPs, zIIPs, or any combination thereof, configured. Other processor types in the physical configuration are allowable
• Demand for zAAP processors can be recognized if at least one zAAP is already online to the system
• Demand for zIIP processors can be recognized if at least one zIIP is already online to the system
• The additional physical capacity will be distributed through PR/SM and the operating system. In general, the additional capacity will be available to all LPARs
 – Facilities such as defined capacity (soft capping) or initial capping (hard capping) can be used to control the use of capacity
• IBM recommends avoiding provisioning conditions for service classes that are associated with WLM resource groups for which a capacity maximum is in effect


IBM System z10 EC Functional Overview

Capacity Provisioning Installation and Customization
• Prerequisites for installation
 – z/OS RMF must be set up and customized
  • Including the Distributed Data Server (DDS)
 – The z/OS CIM server must be set up
  • A z/OS base element since z/OS Release 7
• Perform the Capacity Provisioning customization
 – as described in Chapter 3 of "z/OS MVS Capacity Provisioning User's Guide" (SA33-8299)
• Customization is required on
 – Observed z/OS systems
  • These are the systems in one or multiple sysplexes that are to be monitored
  • See the description of the Capacity Provisioning Domain
 – Runtime systems
  • These are the systems where the Capacity Provisioning Manager is running, or to which the server may fail over after server or system failures
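As a sketch of the runtime pieces involved (the started-task names below are the IBM-shipped sample procedure names and are assumptions – installations may use different names):

 S RMF              the RMF started task (the Monitor III gatherer must also be active)
 S GPMSERVE         RMF Distributed Data Server (DDS)
 S CFZCIM           z/OS CIM server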


IBM System z10 EC Functional Overview

Capacity Provisioning Software Requirements

• Systems with z/OS Release 9 plus APAR OA20824, or above, can be monitored or used to run the Capacity Provisioning Manager
• z/OS Resource Measurement Facility (RMF), an optional element of z/OS, must be enabled (or an equivalent product with equivalent CIM RMF provider capability)
• The z/OS security product needs to support the creation of passtickets (R_GenSec) and their evaluation through the SAF interfaces
 – When using a security product other than IBM Security Server (Resource Access Control Facility, RACF®), check with your vendor
• TCP (SNMP) connectivity is required from the hosting system (i.e. where the CPM server is running) to the Hardware Management Console or Support Elements, respectively
• IBM 31-bit SDK for z/OS, Java 2 Technology Edition, V5 (5655-N98)
 – Currently, no other levels are supported
• Capacity Provisioning utilizes the CIM server (a z/OS base element)


IBM System z10 EC Functional Overview

CPM – Supported Environments and Prerequisites – Summary

• One or more z10 EC servers
 – On/Off Capacity on Demand enablement feature
• Hardware Management Console
 – A TCP/IP connection to the HMC must be available
• Multi-LPAR environments
 – Sufficient number of logical CPs to utilize additional physical CPs
• z/OS Release 9 (on any observed system)
 – RMF or a like product
 – RACF or a like product
 – CPM is not supported when z/OS is a z/VM guest
• CPCC workstation
 – An INTEL Pentium® or equivalent processor with 512 MB memory (1 GB recommended)
 – Available disk space 150 MB
 – Microsoft Windows XP Professional – Service Pack 2 or later
 – Screen resolution 1024x768 or higher


IBM System z10 EC Functional Overview

CPM Summary

• Capacity Provisioning provides faster, more reliable methods to activate On/Off Capacity on Demand
 – Manual mode allows activation and deactivation of physical general purpose capacity, zAAPs, and zIIPs
 – Analysis mode provides suggestions on how to address capacity bottlenecks
 – Confirmation and autonomic modes automate the recognition and enabling of capacity changes, with or without operator confirmation
 – Can help optimize the activated resources
• The solution integrates z10 EC enhancements as well as new and existing z/OS components: the Capacity Provisioning FMID, WLM, RMF, and CIM
• Different degrees of automation are available, ranging from manual to autonomic


IBM System z10 EC Functional Overview

z/OS CPM – Additional Information
• Resources
 – www.ibm.com/systems/z/cod/
 – z10 EC Capacity on Demand – Redbook – SG24-7504
 – z/OS Common Information Model User's Guide – SC33-7998
 – z/OS MVS Capacity Provisioning Manager User's Guide – SA33-8299
 – PR/SM Planning Guide – GA22-7236
 – z/OS RMF Report Analysis – SC33-7991
 – z/OS MVS Initialization and Tuning Guide – SA22-7591
 – z/OS MVS Initialization and Tuning Reference – SA22-7592
 – System z Application Programming Interfaces – SB10-7030
 – Hardware Management Console Operations Guide (Version 2.10.0) – SC28-6867


IBM System z10 EC Functional Overview

System z10 EC Operating System Support

Operating System                                   ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS Version 1 Releases 7(1), 8 and 9              No                 Yes
Linux on System z(2): RHEL 4, 5 & SLES 9, 10       No                 Yes
z/VM Version 5 Release 2(3) and 3(3)               No                 Yes
z/VSE Version 3 Release 1(2)(4)                    Yes                No
z/VSE Version 4 Release 1(2)(5)                    No                 Yes
z/TPF Version 1 Release 1                          No                 Yes
TPF Version 4 Release 1 (ESA mode only)            Yes                No

1. z/OS R1.7 + zIIP Web Deliverable required for z10 EC to enable HiperDispatch 2. Compatibility Support for listed releases. Compatibility support allows OS to IPL and operate on z10 EC 3. Requires Compatibility Support which allows z/VM to IPL and operate on the z10 EC providing System z9 functionality for the base OS and Guests. 4. z/VSE v3. 31-bit mode only. It does not implement z/Architecture, and specifically does not implement 64-bit mode capabilities. z/VSE is designed to exploit select features of IBM System z10, System z9, and zSeries hardware. 5. z/VSE V4 is designed to exploit 64-bit real memory addressing, but will not support 64-bit virtual memory addressing

Note: Refer to the z/OS, z/VM, and z/VSE subsets of the 2097DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 EC


IBM System z10 EC Functional Overview

System z10 EC Minimum OS Support for new Functions

Function                          z/OS(**)   z/VM(**)        z/VSE(**)       Linux on System z(**)      z/TPF 1.1 / TPF 4.1(1)(**)
Basic System z10 EC support       1.7(3)     5.2             3.1             SLES 9, RHEL 4             1.1 / 4.1(1)
HiperDispatch                     1.7(3)     Not supported   –               SLES 11, RHEL 6            –
STSI for Capacity Provisioning    1.7(3)     5.2             –               SLES 10 SP2, RHEL 6        –
Capacity Provisioning             1.9(3)     Not supported   –               IBM working with LDPs(2)   –
Large Page (1 MB)                 1.9(3)     Not supported   –               IBM working with LDPs(2)   –
RMF Enhancements for FICON        1.9        Not supported   Not supported   –                          –
HW Decimal Math Support           1.7(3)     5.3             –               IBM working with LDPs(2)   –
z/VM-Mode partitions              1.7(3)     SOD*            –               IBM working with LDPs(2)   –
CPACF Enhancements                1.7(3)     5.2 (guests)    –               IBM working with LDPs(2)   –
Configurable Crypto Express2      1.7(3)     5.2 (guests)    3.1             SLES 9 SP3, RHEL 4.4       1.1 / 4.1(3)
Dynamically Add Crypto to LPAR    1.7(3)     5.2 (guests)    –               SLES 10 SP1, RHEL 5.1      –

1. Indicates TPF
2. This function will be provided in a future Linux on System z distribution release/service update. IBM is working with Linux distribution partners (LDPs) on kernel space exploitation
3. Additional features, service or Web downloads required
** Note: Please refer to the latest PSP bucket for the latest PTFs for z10 EC compatibility and new functions/features support.
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


IBM System z10 EC Functional Overview

System z10 EC Minimum OS Support for new Functions…

Function                                z/OS(**)        z/VM(**)            z/VSE(**)       Linux on System z(**)        z/TPF 1.1 / TPF 4.1(1)(**)
InfiniBand Coupling Links*              1.7(3)          5.3 (Dynamic I/O)   Not supported   –                            –
STP NTP Client Support                  1.7(3)          Not supported       Not supported   –                            –
OSA-Express3 10 GbE – CHPID OSD         1.7             5.2                 3.1             SLES 9, RHEL 4               1.1 / 4.1
HiperSockets Multiple Write Facility    1.9(3) (2Q08)   Not supported       Not supported   –                            –
HiperSockets Layer 2 Support            Not supported   5.2 (guests)        –               SLES 10 SP2, 11 / RHEL 6     –
Enhanced FCP caching                    Not supported   5.2                 –               –                            –
64-way support                          1.9             5.3 (32-way)        –               –                            –
Preserve CTC Logical Path               1.7(3)          5.2                 Not supported   SLES 9                       1.1
Fabric Config Support for NPIV          Not supported   5.2                 –               –                            –
1 TB/LPAR (z10 HW supports 1.5 TB)      1.8             Not supported       –               –                            –
256 GB                                  1.8             5.3                 –               SLES 9 (4 TB)                –

1. Indicates TPF 2. This function will be provided in a future Linux on System z distribution release/service updates. IBM is working with Linux distribution partners (LDPs) on Kernel space exploitation 3. Additional features, service or Web downloads required ** Note: Please refer to the latest PSP bucket for latest PTFs for z10 EC Compatibility and new functions/features support. * All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


IBM System z10 EC Functional Overview

System z z/OS and z/OS.e Support Summary

Release              z890 (WdfM)  z990*** (WdfM)  z9 EC  z9 BC  z10 EC  End of Service  Coexists with z/OS  Ship Date
z/OS & z/OS.e 1.7    x            x               x      x      x       9/08            1.9                 9/05
z/OS & z/OS.e 1.8    x            x               x      x      x       9/09*           1.10*               9/06
z/OS 1.9             x            x               x      x      x       9/10*           1.11*               9/07

Note: z/OS R1.7 + zIIP Web Deliverable required to use HiperDispatch

z/OS.e - z800, z890 and z9 BC only. Release 1.8 will be the last release of z/OS.e. Only service-supported releases can coexist in the same sysplex

** Planned. All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these Statements of General Direction is at the relying party's sole risk and will not create liability or obligation for IBM. *** WdfM for EMEA countries. Rest of the world June 30, 2008


z/OS Planning Information

z/OS Planned Roadmap
• z/OS V1.10 – GA September 2008 planned* (R11* planned for 9/09)
• z/OS Version 1 Release 9 – GA September 2007
• z/OS 1.8 – GA September 2006
 – z/OS (and z/OS.e) 1.8 end of service is September 2009*
• z/OS 1.7 – GA September 2005 (Architectural Level Set)
 – z/OS (and z/OS.e) 1.7 end of service is September 2008
• z/OS 1.6 – GA September 2004
 – z/OS (and z/OS.e) 1.6 end of service was September 30, 2007
Also delivered in between:
  XML System Services and specialty engines for 1.7+ – 9/07
  zIIP Assisted IPSec for 1.8+ – 9/07
  zIIP support for 1.6+ – 6/06
  Enhanced Crypto Support for 1.7 – 5/06
  Encryption Facility for z/OS – 12/05
  z/OS OMEGAMON® Mgt Console – 12/05

* Statements regarding IBM future direction and intent are subject to change or withdrawal, and represent goals and objectives only.


z/OS Support Summary

z/OS Support Summary

Release  z800/z900  z890/z990  z9 EC/z9 BC  z10 EC  DS8000/DS6000  TS1120  End of Service  Coexists with z/OS...  Ship Date
R6       x          x          x            x       x              x       9/07            1.8
R7       x          x          x            x       x              x       9/08            1.9
R8       x          x          x            x       x              x       9/09*           1.10*
R9       x          x          x            x       x              x       9/10*           1.11*                  9/07
R10*     x          x          x            x**     x              x       9/11*           1.12*                  9/08*
R11*     x          x          x            x       x              x       9/12*           1.13*                  9/09*

Release      Coexistence-supported releases
z/OS 1.9     z/OS 1.7, z/OS 1.8, z/OS 1.9
z/OS 1.10*   z/OS 1.8, z/OS 1.9, z/OS 1.10*
z/OS 1.11*   z/OS 1.9, z/OS 1.10*, z/OS 1.11*

z/OS.e 1.7, 1.8 supported on z800, z890, and z9 BC only. There is no z/OS.e 1.9.
** zIIP Web Deliverable required for HiperDispatch support on System z10

* Statements regarding IBM future direction and intent are subject to change or withdrawal, and represent goals and objectives only.

z/OS Support Summary

Current JES2 Releases

See http://www.ibm.com/servers/eserver/zseries/zos/support/zos_eos_dates.html

* = projected...


z/OS Support Summary

JES2/z/OS Compatibility

• JES levels supported by a given z/OS release are the same as the JES levels that can coexist in a MAS, which are essentially all currently supported releases of JES2


z/OS Support Summary

Active JES3 Releases

• TCP/IP for NJE support in z/OS V1 R8 JES3 was made available 2/15/2007 via APAR OA16527



z/OS Support Summary

JES3/z/OS Compatibility


z/OS Support Summary

Functions Withdrawn from z/OS V1R9


z/OS Support Summary

Functions Withdrawn in the Future (planned)


z/OS Support Summary

Coexistence, Fallback, and Migration

• Coexistence of a V1R9 system with a V1R9, V1R8, or V1R7 system is supported
• Fallback from a V1R9 system to a V1R8 or V1R7 system is supported

• Migration to a V1R9 system from a V1R8 or V1R7 system is supported


A Spotlight on selected z/OS 1.9 Enhancements

New Address Spaces with z/OS 1.9

• The System REXX address space, AXR

• Non-cancelable, but can be terminated by invoking FORCE AXR,ARM • When the AXR address space terminates, ENF signal 65 with a qualifier of 40000000x is issued

• AXR can be restarted by starting the AXRPSTRT proc, which can be found in SYS1.PROCLIB

• When the AXR address space initializes, an ENF signal of 80000000x is issued
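Put together as console commands (taken directly from the bullets above; operator authority is required):

 FORCE AXR,ARM      terminate the System REXX address space
 S AXRPSTRT         restart AXR using the SYS1.PROCLIB procedure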

• Common event adapter (CEA)

• The common event adapter (CEA) provides the ability to deliver z/OS events to C-language clients, such as the z/OS CIM server



A Spotlight on selected z/OS 1.9 Enhancements

New Address Spaces with z/OS 1.9…

• DSSFRDSR

• A DFSMSdss address space to recover up to 64 datasets concurrently from one or more copy pool backup versions

• Started automatically by DFSMShsm whenever a dataset is recovered from DASD using the FRRECOV DSNAME command and terminates when DFSMShsm ends
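For illustration only – the data set name below is a placeholder – such a recovery is driven by the DFSMShsm FRRECOV command, issued for example from TSO with the HSENDCMD command:

 HSENDCMD FRRECOV DSNAME(PROD.PAYROLL.KSDS)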

• ARCnXXXX

• A DFSMSdss address space is started automatically by DFSMShsm whenever a dump, restore, migration, backup, recover, or CDS backup function is invoked

– A DFSMSdss address space is not started for recall tasks

– Can reduce the storage used in the DFSMShsm address space, enabling more tasks to be started
• DFSMShsm invokes DFSMSdss and requests that DFSMSdss use a unique address space identifier for each unique DFSMShsm function and host ID, where n is a unique DFSMShsm host ID and XXXX is an abbreviation of a DFSMShsm function


A Spotlight on selected z/OS 1.9 Enhancements

New Address Spaces with z/OS 1.9…

• ARCnXXXX…

• Where xxxx is as follows:

– DUMP for dump – REST for restore – MIGR for migration – BACK for backup

– RCVR for recover – CDSB for CDS backup

• For instance, migration for DFSMShsm host ID 1 would result in a generated address space identifier:

– ARC1MIGR • Address space terminates when DFSMShsm terminates


A Spotlight on selected z/OS 1.9 Enhancements

New Address Spaces with z/OS 1.9…

• DFSMShsm Address Spaces


A Spotlight on selected z/OS 1.9 Enhancements

New Address Spaces with z/OS 1.9…

• DFSMShsm Address Spaces with SDSF

• ABEND 878 Reduction

• New address spaces reduce this possibility

• Reduces amount of storage in DFSMShsm address space


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN

• With z/OS 1.9, the ISPF ISRDDN Utility provides a new Function to disassemble a Load Module

• This can be useful when the Source for a Module is no longer available, for whatever Reason

• To disassemble a Load Module with ISRDDN, perform the following Steps: 1.) TSO ISRDDN

2.) Browse a Load Module already in Storage

 BROWSE modname

If the Load Module requested isn’t already in Storage, load with the LOAD Command, as follows:

 LOAD IEFCIPS

3.) When in Browse Mode, enter the new DISASM Command provided with z/OS 1.9
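Putting the steps together for a LNKLST module that is not yet in storage (the module name is just an example):

 TSO ISRDDN
 LOAD IEFCIPS          (or BROWSE modname if the module is already in storage)
 <ENTER>               browse the loaded module
 DISASM                disassemble it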


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• ISRDDN primary Panel


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• LOAD a LNKLST Module into Storage for Browse


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• LOAD a LNKLST Module into Storage for Browse…


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• Press ENTER to browse the Module


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• Disassemble the Load Module


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• Disassemble the Load Module…


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• Scroll down using PF8


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• Type “Yes” to proceed


A Spotlight on selected z/OS 1.9 Enhancements

Disassemble a Load Module with ISRDDN…

• Press ENTER to disassemble the Load Module

• For more Information on the DISASM Command refer to ISRDDN Help


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10…

• z/OS provides additional Constraint Relief
 – for common storage, z/OS Communications Server, Allocation, OAM, XES/XCF CF locking, and GRS Latch and ENQ processing
• Network TCP/IP Stack Performance Improvements
 – in multiple areas, including CPU consumption, cache line contention, and common storage utilization
• Metro Mirror (PPRC) secondary Devices can be defined in Subchannel Set 1
 – Can free Subchannel Set 0 slots for additional devices
 – Complements PAV alias definitions in Subchannel Set 1
• Hashed DSAB Searches
 – Improve Allocation performance for large numbers of data sets; any workload with lots of open data sets can benefit (e.g. DB2 and IMS™)
 – Use GETDSAB!
• Minimize the Delay in starting CDS Backup due to an active DFSMShsm™ Workload
• Mark selected Devices unavailable for Allocation
 – Reduce recovery Allocation overhead, help keep purposefully offline devices offline
 – Three device states: ONLINE, OFFLINE, and OFFLINE UNAVAILABLE


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… Taking z/OS storage volumes to the extreme
• An Extended Address Volume (EAV) is a volume with over 65,280 cylinders
 – 223 GB volumes initially supported on z/OS V1.10* and IBM System Storage DS8000*
 – Larger volumes are planned to be rolled out over time*
 – The first exploiter is VSAM – applications that use VSAM data sets (including DB2 and CICS®) can benefit from EAV
 – IBM intends to enable other access methods in the future*
• EAV helps address storage constraints for very large storage configurations
• In the future, EAV can help simplify storage management
 – Manage fewer, large volumes as opposed to many small volumes
• The DS8000 HyperPAV function complements EAV by allowing the scaling of I/O rates against a single, larger volume
• DS8000 Dynamic Volume Expansion can allow non-disruptive migration to larger volume sizes

Volume growth over time:
Device      2314-1   3330-1   3350     3390-3    3390-9    3390-9    3390-9    3390-A (EAV)
Capacity    29 MB    101 MB   317 MB   3 GB      9 GB      27 GB     54 GB     223 GB*
Cylinders   ~300     404      555      3,339     10,017    32,760    65,520    262,668 (architectural limit: 100s of TB**)

* When available: z/OS V1.10 GA planned to be 3Q 2008, DS8000 function planned 2H 2008. Statements regarding IBM future direction and intent are subject to change or withdrawal, and represent goals and objectives only.

z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… Availability Enhancements
• Improved consoles and message handling
• JES2 dynamic exits – can help avoid JES2 restarts
• JES2 NJE improvements – automatically restarts connections
• Auto IPL – can reduce latency of operator response time by automatically initiating a dump to capture data for analysis and a restart based on z/OS diagnostics
• ASID reuse – helps reduce planned and unplanned outages by allowing more address spaces to be reused. Exploiters include:
  – CATALOG, LLA, and VLF (available with z/OS V1.9)
  – z/OS UNIX® RESOLVER, TCP/IP, DFSMSrmm™, and TN3270 (with z/OS V1.10)
• System to react automatically to high fixed storage users
• Parallel Sysplex® improvements
• Basic HyperSwap solution*
• z/OS Global Mirror (eXtended Remote Copy, XRC) enabled for zIIP**
• ... and beyond with GDPS® V3.5
** IBM System z10 Integrated Information Processor and IBM System z9 Integrated Information Processor

* Statements regarding IBM future direction and intent are subject to change or withdrawal, and represent goals and objectives only.

z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… Improved Consoles ... Improved Availability
• z/OS V1.4 / V1.5 – First phase of Consoles Enhancements
  – Improved message production and consumption flows to help reduce bottlenecks
• z/OS V1.7
  – Improved processes for deleting consoles, message handling, support for subsystems, and improved availability
  – IBM Health Checker for z/OS – checks for console definitions
• z/OS V1.8
  – Master console and console switch functions were removed, eliminating them as potential points of failure
• z/OS V1.9 (rolled back to V1.8)
  – Automation for dealing with large amounts of messages (also available with z/OS 1.6 - 1.7 w/PTF)
  – Helps prevent flood messages from being displayed on a console, from being logged, from being queued for automation, and from propagating to other systems in a sysplex
• z/OS V1.10*
  – Designed to reduce serialization contention
  – Increases the maximum number of MCS, SMCS, and subsystem consoles in a sysplex from 99 per sysplex to 99 active consoles per system; and more
Net effect: improved system availability, enhanced capacity and reliability of message delivery


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… Availability – Sysplex Improvements
• Simplification
  – XCF/XES health checks promote sysplex 'best practices'
  – HCM enables you to share configuration packages across the sysplex
  – Consoles enhancements – improved console serialization (up to 99 active MCS, SMCS, and subsystem consoles per system in a sysplex)
  – Reduced potential of RACF® database error
  – Potential to avoid IPL with z/OS UNIX System Services sysplex-wide root
  – Improved GRS migration
• Performance / Availability
  – Support for InfiniBand® Coupling links
  – Intelligent WLM XCF signaling
  – Optimized XCF/XES CF locking requests
  – z/OS Communications Server – new support to help you coordinate LU name assignments among TN3270 servers in a sysplex
  – Planned: a z/OS Management Facility for sysplex management support*
  – Reduced latency with SFM Auto IPL trigger
  – Load balancing advisor support of subplexes
  – Shorter wait for DFSMShsm CDS backup

Scalability! Up to 64 processors per server (z10 EC) and up to 32 servers in a sysplex = up to 2,048 engines!


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… Application Development and Integration*
• HLASM source-level dbx debug support (ISVs)
  – This is expected to help you debug applications that include High-Level Assembler source parts
• Submit jobs from the z/OS UNIX shell
  – New submit shell command
• New Run-time options support for Language Environment®
  – Support CEEROPT for batch outside CICS and IMS environments
  – Support for AMODE 64
• NFS V4 Server support for:
  – NFS V4 locking and name mapping
  – Client file locking and ACL support
• NFS V4 Client support for:
  – Client pthread conversion
  – RPCSEC_GSS, ACL, and file locking


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… z/OS Simplifying Operations and Programming*
• IBM Health Checker for z/OS
  – Save check data and browse historical Health Check output (from a log stream) – helps allow you or a check application to view historical values returned by various health checks; can help establish predictive diagnosis capabilities
  – New and/or updated checks for SFM, z/OS UNIX System Services, RACF, CINET, XCF/XES
• Configuration Assistant for z/OS Communications Server
  – The Configuration Assistant plans to import existing policy text files into the GUI. This allows the CA to learn of and absorb manual changes that the system administrator may have made to the policy configuration text files since the last time they were exported
• Hardware Configuration Manager
  – Need help with I/O management?
  – Improved saved views
  – Support for configuration packages similar to those supported by HCD
  – Support for importing and exporting I/O configuration data, similar to that provided by HCD
• Additional improvements to:
  – Language Environment – new parmlib syntax checks
  – ISPF – allows you to specify multiple targets for move and copy line commands, and more!
  – New GRS ENQ monitor – to aid in identifying/optimizing resources
  – Logger – enhancement to aid in problem determination of log stream data sets
  – z/OS Communications Server – new functions for network management and improvements to the network management APIs
  – DFSMShsm – many improvements


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… z/OS Optimization and Management*
• Policy-based Capacity Provisioning for System z10
  – A new Capacity Provisioning Manager planned for z/OS V1.10 (and z/OS V1.9 with PTF) plans to monitor System z10 servers, manage z/OS 1.9 and 1.10 systems, and add/remove temporary capacity automatically
  – In the future, z/OS will allow authorized applications to query, change, and perform basic operational procedures against the installed System z hardware base – efficiently deploying server resources when needed*
• z/OS Workload Manager:
  – Improved Contention Management
    • Longer promotion; will now promote resource holders to the priority of the highest-priority waiter
  – WLM to manage more address spaces in service class SYSTEM:
    • XCFAS, GRS, SMSPDSE, SMSPDSE1, CONSOLE, IEFSCHAS, IXGLOGR, SMF, and CATALOG (in addition to *MASTER* and WLM)
  – More Performance Block (PB) delays
    • Up to 15, from 5
    • Applications can specify names to replace the default names
  – zIIP CPU management – manage zIIPs like CPs and zAAPs


z/OS 1.10 Announcement Preview

What’s new in z/OS V1.10… DFSMSrmm*
• Enterprise Storage and Storage hub
  – IBM TotalStorage Productivity Center for System z to report on volumes managed by DFSMSrmm
  – Improved integration with IBM Integrated Removable Media Manager
  – DFSMSrmm Web services no longer dependent on WebSphere Application Server
• Simplified monitoring and management – improved ease of use
  – Policies to manage DELETEd tape data sets, to enforce tape retention and expiration objectives to help avoid accidental loss of data, and to manage media end-of-life support / replacement
  – Improved reporting
  – Use Fast Replication to create an almost instant copy of the DFSMSrmm CDS
  – CDS forward recovery via DFSMSrmm records
  – New parmlib commands replace the REJECT command and enable simplified and more powerful partitioning of tape volumes and better control over the use of tape volumes
• Availability
  – ASID reuse – helps reduce planned and unplanned outages by allowing more address spaces to be reused


GDPS Version 3.5 Enhancements

GDPS® V3.5 Update

Business Continuity on System z – the GDPS family of offerings, delivered by IBM Global Services:
• GDPS/PPRC, GDPS/PPRC HyperSwap Manager, RCMF/PPRC
• GDPS/XRC (z/OS Global Mirror), RCMF/XRC
• GDPS/Global Mirror
• GDPS Metro/Global Mirror, GDPS Metro/z/OS Global Mirror
GDPS – the ultimate eBusiness Availability Solution


GDPS Version 3.5 Enhancements

GDPS® V3.5 Update

• Continuous Availability of Data within a Data Center – single data center; applications remain active; continuous availability to data; no data loss: GDPS/PPRC HyperSwap Manager
• Continuous Availability / Disaster Recovery at Metropolitan Region distances – two data centers; systems remain active; near-continuous availability; no data loss: GDPS/PPRC, GDPS/PPRC HyperSwap Manager
• Disaster Recovery at Extended Distance – two data centers; automated D/R across site or storage failure; "seconds" of data loss: GDPS/GM, GDPS/XRC
• Continuous Availability Regionally and Disaster Recovery at Extended Distance – three data centers (A, B, C); automated; data availability; disaster recovery over extended distances: GDPS/MGM, GDPS/MzGM

GDPS Version 3.5 Enhancements

System Management Improvements


• IPL Message Automation
• Graphical User Interface (GUI)
• System z Capacity Management
• Health Checker
• Global Mirror FlashCopy Support
• HyperSwap Coexistence
(GDPS offerings chart: GDPS/PPRC, GDPS/PPRC HyperSwap Manager, RCMF/PPRC, GDPS/XRC, RCMF/XRC, GDPS/Global Mirror, GDPS Metro/Global Mirror, GDPS Metro/z/OS Global Mirror – delivered by IBM Global Services.)


GDPS Version 3.5 Enhancements

Web Graphical User Interface
• Intuitive, easy-to-use, modern Web GUI
  – Based on NetView Web Application (standard feature)
  – Webserver can run on many platforms, including a laptop
  – Coexists with existing 3270 interface
• Improved design based on customer feedback
  – Designed keeping existing customers in mind
• Removes many restrictions imposed by the 3270 and NetView panel interface
• Simplified systems management

GDPS/PPRC in V3.4* GDPS/PPRC HM in V3.5 GDPS/GM in V3.5

• Tape Management added in V3.5

GDPS Version 3.5 Enhancements

GUI Interface

• Multiple windows
• Color-coded alerts
• GDPS Status Menu
• Go directly to the desired option: Remote Copy, Standard Actions, Planned Actions


GDPS Version 3.5 Enhancements

System z Capacity Management

• CBU
  – Allows specification of the activation key
  – Key avoids RSF (call home) interaction during activation
  – Improves activation performance and reliability
• OOCoD
  – New ACTIVATE and UNDO script statements
  – Order number allows choice of the license activated

• CBU and OOCoD will be mutually exclusive in the initial implementation

GDPS Version 3.5 Enhancements

Availability Improvements

 HyperSwap Extensions
 Zero Suspend FlashCopy
 GDPS/MGM Incremental Resync
 GDPS/MzGM Incremental Resync
(GDPS offerings chart: GDPS/PPRC, GDPS/PPRC HyperSwap Manager, RCMF/PPRC, GDPS/XRC, RCMF/XRC, GDPS/Global Mirror, GDPS Metro/Global Mirror, GDPS Metro/z/OS Global Mirror – delivered by IBM Global Services.)


GDPS Version 3.5 Enhancements

GDPS/MGM Incremental Resync (V3.4)

• Incremental resynch if Site 2 or the B-disk fails
  – Maintains the disaster recovery position
  – Improved RTO
• GM K-Sys runs in a production LPAR
  – HyperSwap protection
  – Reduced resource requirement
• Optional: CFs / production backup systems in Site2
• Non-z: Unix, Linux, Linux on z, Windows
(Diagram: Site1 and Site2 connected by Metro Mirror with ETR or STP timing; Global Mirror from the B-disk to the Recovery Site, where the Kg system and FlashCopy (F) disks reside.)

GDPS Version 3.5 Enhancements

GDPS/MzGM Incremental Resync (V3.5)
• Incremental resynch if Site2 or the A-disk fails
• Maintains the disaster recovery position
• Improved RTO
• Planned for 2Q08
• Optional: CFs / production systems in Site2
(Diagram: Site1 and Site2 connected by Metro Mirror with ETR or STP timing; z/OS Global Mirror from the A-disk to the Recovery Site, where the K2 system, the SDM, and FlashCopy (F) disks reside.)


GDPS Version 3.5 Enhancements

Heterogeneous Data Center Management

 Open LUN Management
 MultiPlatform Resiliency
 DCM for VCS
 DCM for Tivoli AppMan
 BCPM
(GDPS offerings chart: GDPS/PPRC, GDPS/PPRC HyperSwap Manager, RCMF/PPRC, GDPS/XRC, RCMF/XRC, GDPS/Global Mirror, GDPS Metro/Global Mirror, GDPS Metro/z/OS Global Mirror – delivered by IBM Global Services.)

GDPS Version 3.5 Enhancements

GDPS/PPRC Multi Platform Resiliency for System z (SUSE Linux)

• Coordinated near-continuous availability and DR solution for z/OS and Linux running native or as z/VM guests
• GDPS automates the Linux startup (native or z/VM guest)
• HyperSwap between primary (P) and secondary (S) disks
• Requires Tivoli System Automation for MultiPlatform
(Diagram: Site 1 and Site 2, each with a System z server running z/OS and Linux under z/VM, coupling facilities CF1/CF2, and the GDPS K1 system.)


GDPS Version 3.5 Enhancements

GDPS/PPRC Multi Platform Resiliency for System z (Red Hat Linux)

• GDPS automates the Linux startup
• HyperSwap between primary (P) and secondary (S) disks
• GDPS V3.5: RHEL 4 on System z as z/VM guests (SOD for RHEL 5 support)
• Requires Tivoli System Automation for MultiPlatform
(Diagram: Site 1 and Site 2, each with a System z server running z/OS and Linux under z/VM, coupling facilities CF1/CF2, and the GDPS K1 system.)

GDPS Version 3.5 Enhancements

GDPS Distributed Cluster Management (DCM)

• DCM function added to GDPS/XRC and GDPS/PPRC Control Code
• Helps manage and coordinate distributed clusters
• "End-to-end recovery solution"
  – Helps optimize operations and performance
  – Helps meet enterprise-level RTO and RPO
  – Integrated, industry-unique, automated
• Veritas Cluster Server is the first exploiter
• Tivoli SA Application Manager planned for 2Q08


GDPS Version 3.5 Enhancements

VCS Exploitation

• Unlimited distance — DR coverage
  – GDPS/XRC for System z and GCO VCS-managed asynchronous replication for distributed servers today
• Metro distance — high-availability coverage
  – GDPS/PPRC for System z and GCO VCS-managed synchronous replication for the distributed servers in 2008
• Enables cross-platform communication between System z™ and non-z systems (IBM AIX, Sun Solaris, HP-UX, Linux)
• Offers a coordinated site switch for planned and unplanned outages

GDPS Version 3.5 Enhancements

Distributed Systems Availability and Disaster Recovery Solution

(Diagram: the spectrum of distributed availability options – high-availability clusters within the LAN; high-availability clustering with remote mirroring; high-availability clusters with disaster recovery within a metropolitan area (MAN); and extended-distance disaster recovery clustering over the WAN using VCS Global Cluster Option (GCO). Built on VERITAS Cluster Server (VCS) and VERITAS Storage Foundation / VERITAS Volume Replicator, using remote mirroring, SAN attachment, and replication over Fibre, IP, DWDM, or ESCON.)


GDPS Version 3.5 Enhancements

GDPS Family Support for DCM

CA / DR within a metropolitan region Two data centers - systems remain active; designed to provide no data loss

(Diagram: GDPS/PPRC spanning Site-1 and Site-2, with a K-Sys in each site; VCS clusters in both sites connected by VCS GCO and the GDPS DCM Agent.)

GDPS Version 3.5 Enhancements

GDPS Family Support for DCM

• DR at extended distance – rapid systems recovery with only "seconds" of data loss (GDPS/XRC; SDM and K-sys in the recovery site)
• Additional benefit: the GDPS Agent can detect the loss of a VCS cluster in Site-1 and notify the GDPS/XRC K-sys; the operator then gets a takeover prompt for
  – a server failure
  – an individual system failure
  – the start of a site failure
• The operator must investigate
• DCM does not manage or monitor z/OS systems
(Diagram: VCS clusters in Site-1 and Site-2 connected by VCS GCO and the GDPS DCM Agent.)


GDPS Version 3.5 Enhancements

GDOC Services

• Geographically Dispersed Open Clusters (GDOC)
  – IBM integration, consulting and project management
• Associated services
  – Project management
  – Planning
  – Testing
  – Code delivery
  – Customization
  – Onsite skills transfer
• All necessary knowledge and skills
  – Data replication, bandwidth analysis
  – Server management, monitoring and automation
  – Leverages Veritas Cluster Server (Symantec)

GDPS Version 3.5 Enhancements

DCM and SA AppMan

• GDPS manages servers and data replication, and has site awareness
  – System z scope for servers
  – System z and open systems scope for data replication
• SA AppMan automation manages applications
  – End-to-end scope
  – Cross-cluster dependencies
  – Resource grouping (customer defined)
• Shouldertapping between GDPS and SA AppMan automation

Planned for 2Q08

SA Application Manager was previously called SA for Multi-Platform End-to-End


GDPS Version 3.5 Enhancements

Tivoli BCPM Support (SOD)

Managing a Recovery Plan

Alarm → Outage Event → Outage Analysis → Recovery Plan (example: Geo-112/113)
• Outage Analysis (Senior Analysis, Subject Expert, Crisis Mgmt Team): What is impacted? What is broken / OK? What are the objectives/policies? Is this a crisis? Recommendations?
• Recovery Plan: options to recover, who to notify, approvals?

Example: the Recovery Point Objective (RPO) of 1 hour is jeopardized for SAP CRM Back Office. A GDPS Freeze wakes up recovery plan Geo-112/113; the GDPS policy is Freeze and Go:
• Can the problem be solved within 30 minutes? → Stay
• Otherwise → switch site
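A minimal sketch of the decision logic this example policy describes; the function name and inputs are mine, not part of GDPS:

```python
# Illustration of the "Freeze and Go" decision from the slide's example.
# Hypothetical helper: decide whether to stay or switch sites after a freeze,
# given the estimated time to repair the problem.
def freeze_and_go_decision(estimated_repair_minutes: float,
                           stay_threshold_minutes: float = 30.0) -> str:
    """Return 'stay' if the problem can be fixed within the threshold,
    otherwise 'switch site' (as in the Geo-112/113 example)."""
    if estimated_repair_minutes <= stay_threshold_minutes:
        return "stay"
    return "switch site"

print(freeze_and_go_decision(20))   # -> stay
print(freeze_and_go_decision(90))   # -> switch site
```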

GDPS Version 3.5 Enhancements

Disk and z/OS Support

 FlashCopy SE
 Multiple Reader
 Extended Distance FICON
(GDPS offerings chart: GDPS/PPRC, GDPS/PPRC HyperSwap Manager, RCMF/PPRC, GDPS/XRC, RCMF/XRC, GDPS/Global Mirror, GDPS Metro/Global Mirror, GDPS Metro/z/OS Global Mirror – delivered by IBM Global Services.)


GDPS Version 3.5 Enhancements

IBM FlashCopy Space Efficient
• Can significantly reduce the disk capacity needed for copies
  – Just save source data being updated
  – No longer need to match full source space
  – Best for temporary copies
• Reduced capacity requirements
  – Lower costs
  – Fewer drives
  – Less power
• Uses
  – Backing up data
  – Data mining
  – Data validity checking …
• GDPS value
  – GDPS/GM "C-disk"
  – GDPS testing
  – Supported by all GDPS offerings

GDPS Version 3.5 Enhancements

z/OS Global Mirror Multiple Reader GDPS/XRC V3.5

• Improves throughput for IBM z/OS Global Mirror and GDPS/XRC
• Can better sustain peak workloads for a given bandwidth
• Can increase data currency over long distances
• Can replicate more capacity while maintaining the same RPO
• Helps avoid potential host slowdowns
• Facilitates very large volumes
(Diagram: the System Data Mover reading from the primary subsystem and writing to the secondary subsystem; Multi-Reader support uses multiple "sub-sidefiles" and PAVs for increased parallelism – multiple readers processing I/Os from the primary writers in parallel for higher performance.)


GDPS Version 3.5 Enhancements

SDM / zIIP enabled z/OS Global Mirror

• zIIP designed to help:
  – Integrate data across the enterprise
  – Improve resource optimization
  – Lower the cost of ownership for eligible workloads
• No IBM software charges on the zIIP
  – Consistent with other specialty engines

• System Data Mover processing now enabled on the zIIP
• Available with:
  – z/OS V1.10
  – z/OS V1.8 or V1.9 with the PTF for APAR OA23174
  – IBM System Storage DS8000 or any storage controller supporting DFSMS SDM
• Up to one zIIP per general purpose CP

GDPS Version 3.5 Enhancements

SDM Offload – Potential Value

• Formula:

  10-20 MIPS per LPAR (not offloadable) + 10-20 MIPS per SDM + 4 MIPS per 100 writes

• Example customer data (approx. 15-18 SDM MIPS/TB):

  515 TB / 31,700 volumes   7,725 MIPS   15 MIPS / TB
   21 TB /  2,160 volumes     361 MIPS   17 MIPS / TB
   21 TB /  2,160 volumes     377 MIPS   18 MIPS / TB
   14 TB /  1,440 volumes     206 MIPS   15 MIPS / TB
   14 TB /  1,440 volumes     231 MIPS   17 MIPS / TB
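A minimal sketch of applying the rule-of-thumb formula above to estimate how many SDM MIPS might become zIIP-eligible; the function name and sample inputs are mine, and the per-LPAR/per-SDM constants are simply the midpoint of the stated 10-20 MIPS ranges:

```python
# Rough SDM MIPS estimate from the rule of thumb on this slide:
#   10-20 MIPS per LPAR (not offloadable) + 10-20 MIPS per SDM + 4 MIPS per 100 writes
def estimate_sdm_mips(num_sdms: int, writes_per_sec: float,
                      mips_per_lpar: float = 15.0, mips_per_sdm: float = 15.0) -> dict:
    """Return the not-offloadable LPAR portion and the SDM portion separately."""
    lpar_mips = mips_per_lpar                                   # fixed per-LPAR cost, not offloadable
    sdm_mips = num_sdms * mips_per_sdm + 4.0 * (writes_per_sec / 100.0)
    return {"lpar_mips": lpar_mips, "sdm_mips": sdm_mips}

# Hypothetical example: 4 SDMs handling 50,000 writes per second.
print(estimate_sdm_mips(num_sdms=4, writes_per_sec=50_000))
# -> {'lpar_mips': 15.0, 'sdm_mips': 2060.0}
```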


GDPS Version 3.5 Enhancements

DS8000 Extended Distance FICON Enables Protection at less Cost

• Optimized FICON pacing increases the number of commands in flight
• Enables communication over greater distances without a substantial reduction to the effective data rate
• Can significantly reduce the cost of remote mirroring over FICON for a z/OS Global Mirror (XRC) solution
• Eliminates the need for more expensive third-party protocol-specific channel extenders

Requires IBM System z10 and DS8000

(Diagram: the System Data Mover at the secondary site drives z/OS Global Mirror from the zGM primary controller to the zGM secondary controller across channel extension devices.)

GDPS Version 3.5 Enhancements

Service Support Policy

• GDPS releases with planned GA and End of Service dates

Release GA EOS

GDPS V3.3 1/25/06 March, 2009

GDPS V3.4 3/30/07 March, 2010

GDPS V3.5 3/31/08 March, 2011

GDPS V3.6 March, 2009 March, 2012

GDPS V3.7 March, 2010 March, 2013

GDPS V3.8 March, 2011 March, 2014

(1) Dates are based upon the current intentions of IBM
(2) GDPS levels beyond GDPS V3.5 represent current intentions of IBM

www.ibm.com/systems/z/gdps/getstarted/eospolicy.html


GDPS Version 3.5 Enhancements

GDPS Advantages
• Mature
  – Hundreds of implementations since 1998
  – Many customer references
• Flexibility
  – Synchronous or asynchronous remote copy
  – 2 or 3 sites
  – Easily customized automation
• Multi-vendor
  – Allows a multi-vendor policy
• Heterogeneous data management
  – Open Systems LUN management
  – Multi-System Resiliency for System z
  – GDPS/Global Mirror
  – GDPS DCM for VCS
  – GDPS DCM for SA AppMan

GDPS Version 3.5 Enhancements

What’s in a Name

New name              Old name                             GDPS Solution
Metro Mirror          PPRC (Peer to Peer Remote Copy)      GDPS/PPRC
z/OS Global Mirror    XRC (eXtended Remote Copy)           GDPS/XRC
Global Mirror         Asynchronous PPRC                    GDPS/Global Mirror

Three-site Solution                  Description                     GDPS Solution
Metro / Global Copy                  Asynchronous Cascading PPRC     n/a
Metro / Global Mirror (MGM)          Cascading: A → B → C            GDPS/MGM
z/OS Metro / Global Mirror (zMGM)    Multi-target: A → B; A → C      GDPS/MzGM

ibm.com/systems/z/gdps


APARs of Interest

APARs and Red Alerts

• II14250 Differences in IDCAMS LISTCAT Level Processing in z/OS 1.8 and later Releases
• II14297 Catalog and GRS Configurations
• OA17735 New SRM Function for blocked Workload (z/OS 1.7+)
• OA21371 (Hiper) Unable to gain Connectivity to CF after Upgrade
• OA21454 (Hiper) HFS Abend 5B8CA414
• OA21461 (Hiper) High CPU when Vary xxxx-yyyy,ONLINE is issued for SMS managed Volumes
• OA21635 Recalibrate SYNC and ASYNC Thresholds
• OA21917 (Hiper) ABEND00C RC13260001 IXCL2EVT many Alter Begin/End Events stacked
• OA21958 (Hiper) ABEND0C4 IXCC1RCD+X'66' called under the IXCS1MSI Stack
• OA22097 (Hiper) Hang occurs after DFHFC6009 received when attempting to quiesce a VSAM RLS Dataset as it is being deleted
• OA22112 (New Function) XRC secondary Volser and Device Number Filters
• OA22281 (Hiper) Hang on IGWLSH Latch due to EOV in RM Thread
• OA22504 (Hiper) HyperSwap because of IEA497I perm I/O Error in IFCC Situation together with MIH Occurrence
• OA22612 (Hiper) Log Stream may be incorrectly marked Loss of Data following Structure Rebuild
• OA22738 (Hiper) IGD17295I received after z/OS 1.8 HDZ1180 for PDS/PDSE Datasets using Dynamic Volume Count (DVC)
• OA22991 RMF III XCF Delay Report may present misleading Information
• OA23174 (New Function) zIIP Enablement for XRC


APARs of Interest

APARs and Red Alerts...

• OA23238 (Hiper) The GN_SYSACTIVE Event does not always get delivered to Member Group Notify Exits
• OA23278 (Hiper) FEOV after Read for the last Block on a Volume will cause the next Read to invalidly invoke EOV
• OA23312 (Hiper) Rebuild for a Logger Structure stopped with Message IXG101I due to Staging Dataset Error
• OA23569 (Hiper) CLOCKxx with SIMETRID 0X and ETRMODE NO was allowed to join the Sysplex with Systems on another physical CEC
• OA23725 (Hiper) Log Streams can be prematurely disconnected following a Staging Dataset I/O Error Condition
• OA23817 (Hiper) SQA Overlay may occur as a Result of XES Resource Manager Address Space EOM Processing


RMF and XCF Delay Reporting

RMF Monitor III XCF Delay Reporting

• As Sysplexes and the Workloads executed on them get larger and larger, XCF signalling Traffic increases significantly

• To monitor XCF Performance, RMF Monitor III and the RMF Postprocessor are widely used

• RMF Monitor III provides online Functions to monitor probable XCF Delays

• RMF Post Processor provides Reports on XCF signalling Activity by Transport Class, XCF Path, XCF Groups and Members

• RMF Monitor III Delay Reports provide the Capability to display Delays for a Workload caused by XCF

• With today’s Implementation, it is known that the RMF Monitor III Delay Measurement has a poor Correlation to the actual Delays that occur within XCF

• This results in Customers being alarmed with false Reports of Delays

• Indeed, the higher the XCF signalling Rate gets, the more likely RMF is to report Delay, even though everything in XCF is running perfectly well

RMF and XCF Delay Reporting

RMF Monitor III XCF Delay Reporting…

• Because of this, today XCF could have sent 750,000 Signals in a 10 Sample Window, and having just one Signal seen as pending for each of 3 Samples would be reported as XCF causing 30% Delay
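A minimal worked version of that arithmetic, under the assumption (mine) that the reported delay percentage is simply the number of samples with at least one pending signal divided by the number of samples:

```python
# Worked example of the sampling arithmetic on this slide.
# 750,000 signals were sent in a 10-sample window; a signal happened to be
# pending in 3 of the 10 samples, so RMF reports 3/10 = 30% XCF delay,
# even though only a handful of signals were ever seen as pending.
samples = 10
samples_with_pending_signal = 3
reported_delay_pct = 100.0 * samples_with_pending_signal / samples
print(f"Reported XCF delay: {reported_delay_pct:.0f}%")   # -> 30%
```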

• Currently, RMF and XCF Development are investigating how to provide a more accurate Measurement of XCF Delays as signalling Traffic increases

• In the meantime, if you frequently see RMF Monitor III reporting high Delays in XCF, verify your XCF Configuration Performance with an RMF Postprocessor REPORTS(XCF) Report

• For detailed Information on XCF Configuration and Performance Aspects, refer to the following (frequently updated) Techdocs Whitepaper:

• XCF Performance Considerations V3R1, WP100743, 21.6.2006

• In Addition, also consider RMF APAR OA22991, which addresses another area of false XCF Delays being reported


RMF and XCF Delay Reporting

RMF Monitor III XCF Delay Reporting…
• Example: RMF Monitor III shows 15% XCF Delay for Task DBO4IRLM

RMF and XCF Delay Reporting

RMF Monitor III XCF Delay Reporting…
• Example: the RMF III XCF Delay Details Panel for Task DBO4IRLM indicates 13% Delay over an XCF CF Structure and 2% Delay over an XCF CTC Device

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio

• What is uncaptured CPU Time?

• Uncaptured CPU time can be defined as the Processing within z/OS that MVS can not, or does not, associate with a specific Unit of Work for Accounting Purposes

• This Processing that occurs during uncaptured Time is considered as System Overhead caused by MVS Management of Resources that are outside of an Application's Control

• To know the Amount of uncaptured CPU Time in a System is important for several Reasons:

 Uncaptured Time is an Indicator of the general Health of the Operating System

- If the uncaptured CPU Time for a System changes drastically, there may be Tuning, Configuration, or Application Changes to be made
- An Increase in uncaptured Time could also be due to a Software Problem within MVS and would need to be fixed
- An Increase may also just be normal and what is required to run the current Work

 Uncaptured Time is Part of the overall Processor Capacity that must be available to run the Operating System

- It is CPU Time consumed over and above the Time directly required for System and User Address Space Execution, and it must be available in order for Workloads to meet their Service Goals

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

 When doing Capacity Planning, uncaptured Time as well as the Overhead of System Address Spaces is usually spread proportionally across the "real" Work in the System

- Uncaptured Time and the Overhead of System Address Spaces usually grows in Proportion to the Growth of Work in the System, and spreading it across the “real” Work allows for it to be included in Growth Projections for Applications

• There will always be Processing within MVS which will result in uncaptured Time

 Over the Years a lot of Work has been done to reduce uncaptured CPU Time, but it can never be totally eliminated

 The captured CPU Time for an Address Space is the Sum of:

- Task and SRB Execution Time
- I/O Interrupt Handler Time (IIT)
- Hiperspace Management Time (HSP)
- Region Control Task Time (RCT)

• Measurement of uncaptured CPU Time is often expressed in Terms of a Capture Ratio

 A Capture Ratio is the Percentage of total Task and SRB CPU Time compared to the total CPU busy which is captured over an Interval of Time, and may or may not include I/O Interrupt Handler, Region Control Task, or Hiperspace Management Times

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

 In this Presentation, I/O Interrupt Handler (IIT), Region Control Task (RCT), and Hiperspace Management (HSP) Times are considered as Part of captured CPU Time

• The Percentage of uncaptured CPU Time in a System during an Interval of Time can be calculated as follows:
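A minimal statement of that calculation, assuming captured CPU Time is the Sum of the Components defined above (my wording):

  Uncaptured CPU % = 100 x (Total CPU busy Time – Captured CPU Time) / Total CPU busy Time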


Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• Uncaptured CPU Time occurs during the following Processing within MVS:
  • Dispatcher Processing
  • Interrupt Handling
  • Work Suspension
  • Task Termination
• Uncaptured Time usually, but not always, occurs while MVS is running disabled for Interrupts

• Some Processing, such as Time spent in the Dispatcher looking for Work to run, cannot be associated with specific Work
• Other Processing, such as Time spent in the Dispatcher queuing Interrupt Request Blocks (IRBs), could be tracked to a specific Address Space and may be considered by some to be useful, but is not tracked

• Some Processing, such as Program Check Interrupt Handling, is intentionally not captured in an Attempt to keep the Processing Time used by an Application consistent from Run to Run no matter how many or few Events, such as Page Faults, outside of the Application's Control occur

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• What is a good Capture Ratio?
• As mentioned before, a lot of Work has been done over the Years to increase the Capture Ratio by reducing uncaptured CPU Time

• As a Rule of Thumb, the Capture Ratio of a System should be more than 90%

• If the Capture Ratio falls below 90%, this should be investigated
• Causes of high uncaptured Times

• It is impossible to list all the Causes of high uncaptured Time, but some of the common Causes of high uncaptured time are:

 High Page Fault Rates
  - This is true for any Page Fault, even if it does not require I/O to a Page Dataset
 Full Preemption
  - When Work is running with full Preemption, the Number of Times the Dispatcher is entered increases
 Suspend Lock Contention
  - High Contention for Suspend Locks causes an Increase in Work Suspension and Dispatcher Processing

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• Causes of high uncaptured Times…

 Spin Lock Contention
  - High Contention for Spin Locks in Interrupt Handlers or the Dispatcher will result in higher uncaptured Times for the Time spent handling Spin Lock Contention
 Getmain/Freemain Activity
  - Getmain and Freemain Processing within Interrupt Handlers or the Dispatcher are very expensive, and to be avoided if possible

- To avoid this Overhead it is common for Cell Pools to be used instead of Getmain/Freemain

 SRM Time-Slice Processing - SRM Time-Slice (TSDP, TSGRP, TSPRTN) Priority Adjustment can be very expensive under certain Conditions

 IRBs for a Task with a large Subtask Tree
  - Because the entire Subtask Tree must be searched before queuing an IRB, a large Subtask Tree makes this Processing very Time consuming
  - When possible, large Numbers of Subtasks should be avoided in Applications

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• Causes of high uncaptured Times…

 Unable to queue IRBs
  - If a Task is in a State which prevents the IRB from being properly processed, the Dispatcher will keep trying unsuccessfully to queue the IRB
 SLIP Processing
  - SLIP Processing which causes many PER and Space Switch Program Checks that must be analyzed increases the Time spent in the Program-Check Handler

 Long Queues being searched
  - Long Queues being searched within uncaptured Processing increase the uncaptured Time because of the Time it takes to search the Queue
  - This can occur within both System Code and authorized Application Code which uses Timer or I/O disabled Interrupt Exits (DIEs)

 Affinity Processing
  - Affinity to a specific CPU or the need for an asymmetric Resource such as a Cryptographic Processor increases the Path Length for dispatching Work

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• How to determine the Capture Ratio of a System?
• The Capture Ratio is calculated as follows:

Capture Ratio = Captured CPU Time / Total CPU Time
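A minimal sketch of both calculations (the Capture Ratio and the uncaptured Percentage), assuming captured Time is the Sum of TCB/SRB, IIT, HSP and RCT Times as defined earlier; the function and field names are mine, not RMF's:

```python
# Capture ratio and uncaptured CPU percentage, per the definitions above.
# Inputs are CPU seconds over one measurement interval (e.g. taken from RMF data).
def capture_ratio(tcb_srb: float, iit: float, hsp: float, rct: float,
                  total_cpu_busy: float) -> float:
    """Captured CPU time (TCB/SRB + IIT + HSP + RCT) divided by total CPU busy time."""
    captured = tcb_srb + iit + hsp + rct
    return captured / total_cpu_busy

# Hypothetical interval: 820s TCB/SRB, 25s IIT, 1s HSP, 4s RCT, 950s total busy.
ratio = capture_ratio(tcb_srb=820.0, iit=25.0, hsp=1.0, rct=4.0, total_cpu_busy=950.0)
print(f"Capture ratio : {ratio:6.1%}")          # ~89.5% -> just below the 90% rule of thumb
print(f"Uncaptured CPU: {1.0 - ratio:6.1%}")    # ~10.5% of the busy time is uncaptured
```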

• Instead of calculating the Capture Ratio manually, the RMF Spreadsheet Reporter can be used

• The Capture Ratio is included in the System Overview Report created with the Spreadsheet Macro RMFY9OVW  See Worksheet OneCPuUtil

• To create an RMF System Overview Spreadsheet do the following:

 Create an RMF System Overview Report Input Dataset using the RMF Post Processor

- For the required RMF Input Control Statements refer to RMF Spreadsheet Reporter Macro RMFX9MAK (Create Overview Control Statements)

 Binary download the RMF System Overview Report to your Workstation
 Use the RMF Spreadsheet Reporter to create a Working Set
 Create a System Overview Report using the RMFYOVW Spreadsheet Macro

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• RMF System Overview Report Sample

Capture Ratio

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• Other Considerations
• As Processors get faster and faster (for example the IBM System z9 and now the z10), it becomes necessary for some SRM timed Algorithms to be invoked less frequently

• The SRM Invocation Interval can be specified in the IEAOPTxx Parmlib Member

 The RMPTTOM Parameter specifies the SRM Invocation Interval in real Time

 The specified real Time Interval is adjusted by relative Processor Speed to become SRM Time in order to ensure consistent SRM Control across various Processors

 The Relationship of real Time to SRM Time for each Processor is described in the “Advanced SRM Parameter Concepts” Section of z/OS MVS Initialization and Tuning Guide

• Before the IBM System z9, the Default for RMPTTOM was 1000 Milliseconds

• With the z9, an Increase of uncaptured CPU Time was experienced because some SRM Functions were invoked more often than necessary on these fast Processors

• As a Consequence, the Default for RMPTTOM was increased to 3000 Milliseconds

 This has been addressed by APAR OA18452

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• Other Considerations…
• For most Environments with the Fix for APAR OA18452 applied, the new RMPTTOM Default of 3000 should be adequate

• With or without the Fix, Installations may choose to experiment with larger RMPTTOM Values for Production LPARs running on Processors with a per CP speed above 100 MIPS

• With the APAR Fix larger Values like 5000 may have little Influence on SRM CPU Time

• For any LPAR, but especially small LPARs (less than 150 MIPS) or non-Production LPARs, which may not need SRM Period Switch or can live with less precise Period Switches, Numbers up to 20,000 or larger may be used

• Customers must do the Analysis to ensure high Values do not impact Responsiveness and System Efficiency

• Small Changes in RMPTTOM should be tried with Analysis of the Effect of each Change

• An Installation may decide higher Values may be acceptable due to the Nature and Importance of the Workload on the LPAR

Uncaptured CPU Time and the Capture Ratio

Uncaptured CPU Time and the Capture Ratio…

• Other Considerations…
• For detailed Information on the Capture Ratio and recommended RMPTTOM Settings for different Processor Models and Environments, refer to the following WSC Flash:

 z/OS Performance: Capture Ratio Considerations for z/OS and IBM System z Processors - V2

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10526

• OA07055 Page Fault Delays for extremely large Jobs due to UIC Processing

 z/OS Performance: Performance Improvements when managing for large Real Environments

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10598

Sysplex Aggregation Verification

Parallel Sysplex Aggregation

• What is Parallel Sysplex Aggregation?

• Sysplex aggregation is a set of rules that IBM have defined that determine the basis that will be used when deciding how some z/OS®-related software should be charged

• The concept of sysplex aggregation was introduced when IBM announced Parallel Sysplex and the 9672 range of CPCs in 1994

• At its most simplistic, sysplex aggregation allows you to pay for software on two CPCs as if those CPCs were in fact a single CPC

• The reason this is attractive is that the software price per additional unit of processing power decreases as the CPC size increases

 As the number of MSUs increases, the incremental cost of each additional MSU gets smaller and smaller

• When qualified for Parallel Sysplex Aggregation, there are 2 attractive license models

 Aggregated Parallel Sysplex License Charge (PSLC)

 Aggregated Workload License Charge (WLC)


Parallel Sysplex Aggregation

Parallel Sysplex Aggregation

• Aggregation Rules and Criteria
• There are a number of rules and criteria that IBM have specified that determine whether a given CPC is eligible for sysplex aggregation
• A very important point is that your systems must always conform to these rules
   it is not sufficient that they just conform when the CPC is originally installed
• The restatement of the sysplex aggregation criteria specifically states
  - "You must also notify IBM when you believe that your systems no longer qualify for pricing under this criteria"
  - The complete restatement announcement is available on the Web at:
     http://www-306.ibm.com/common/ssi/rep_ca/8/897/ENUS204-268/ENUS204-268.PDF
• Aggregation Requirements
• The sysplex aggregation criteria break out into hardware, software, and operational requirements, as follows:
  - Sysplex aggregation hardware requirements:
     All CPCs in the Parallel Sysplex must have a common time source

o This means they must all be connected to the same Sysplex Timer® or STP network
 Because the Sysplex Timer and STP support connection of up to 24 CPCs, it is very likely that all your CPCs are already connected to the same common time source

Parallel Sysplex Aggregation

Parallel Sysplex Aggregation

• Aggregation Requirements…
   All systems in the Parallel Sysplex must be connected to at least one common CF
   You can use either shared or dedicated CPs in your z/OS LPARs
  - Sysplex aggregation software requirements:

 MVS/ESA™ 5.2.2, OS/390®, or z/OS must be executing in all LPARs that are to be considered for aggregation
 All z/OS, z/OS.e, and OS/390 images that comprise the Parallel Sysplex environment must have at least one common systems enablement function activated to use the CF across all images in the PricingPlex. Eligible systems enablement functions are:

o Application data sharing, including:
  - IMS TM: with IMS DB or DB2
  - CICS: with IMS DB, DB2, or VSAM/RLS
  - TSO and DB2 data sharing
  - An eligible Independent Software Vendor's Data Base from Group C of the PSLC Exhibit

o WebSphere MQ shared message queues

o HSM common recall queue

o Enhanced Catalog Sharing


Parallel Sysplex Aggregation

Parallel Sysplex Aggregation

• Aggregation Requirements…

- Sysplex aggregation software requirements:

 Eligible systems enablement functions (cont.)

o GRS Star

o JES2 Checkpoint in the Coupling Facility

o RACF® database caching

o SmartBatch multisystem processing

o VTAM® Generic Resources

o VTAM MultiNode Persistent Sessions

o Automated tape sharing and switching (prior to z/OS 1.2)

o System Logger SYSLOG (OPERLOG)

o System Logger LOGREC

o System Logger Resource Recovery Services (RRS)

Even though there are other CF exploiters, like XCF for example, it is only one or more items in the list above that will meet the requirement for use of a systems enablement function

Parallel Sysplex Aggregation

Parallel Sysplex Aggregation

• Aggregation Requirements…

- Sysplex aggregation operational requirements:

 The z/OS, z/OS.e, and OS/390 images participating in the above systems enablement function(s), in the same sysplex, must account for at least 50% of the total MSUs consumed by MVS-based systems on each eligible CPC.

Note: This is not 50% of the total capacity of the CPC, but 50% of the used capacity. Further, it is 50% of the capacity used by MVS-based systems (including those running under VM), so it excludes capacity used by VM itself and its other guests, Coupling Facility or Linux LPARs, as well as any capacity used on any special purpose engines (zAAPs, for example). This is a very important point that is often misunderstood.

 To determine eligibility for the 50% rule, the following calculation is applied for each CPC:

o Sum the utilization of all the systems in each sysplex on the CPC for the 8-hour prime shift, for the 5 working days in the week (a total of 40 hours)

o Sum the utilization for all MVS-based LPARs on the CPC for the same time period

o Divide the utilization of the largest sysplex on this CPC by the total MVS-based utilization on this CPC

o In order for this CPC to be eligible for aggregation, the PrimaryPlex utilization must be more than 50% of the total for every week in the month (a minimal calculation sketch follows below)
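A minimal sketch of that 50% check for one CPC, assuming the per-sysplex utilization totals over the 40 prime-shift hours have already been summed; the function and variable names are mine, not PLEXCALC's:

```python
# Sysplex aggregation 50% rule for a single CPC over one prime-shift week.
# 'usage_by_sysplex' maps each sysplex name to the utilization its MVS-based
# images consumed on this CPC during the 40-hour window (any consistent unit).
def primaryplex_qualifies(usage_by_sysplex: dict) -> bool:
    total_mvs_usage = sum(usage_by_sysplex.values())
    largest_plex_usage = max(usage_by_sysplex.values())
    # Eligible only if the largest sysplex used more than 50% of all MVS-based usage.
    return largest_plex_usage > 0.5 * total_mvs_usage

# Hypothetical week: PLEXPROD used 620 units, PLEXTEST 310, a monoplex LPAR 90.
week = {"PLEXPROD": 620.0, "PLEXTEST": 310.0, "MONOPLEX": 90.0}
print(primaryplex_qualifies(week))   # True: 620 / 1020 ~ 61% > 50%
```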


Parallel Sysplex Aggregation

Parallel Sysplex Aggregation

• Aggregation Requirements…

- Sysplex aggregation operational requirements (cont.):

 Calculation of eligibility for the 50% rule

o The PLEXCALC tool provided by IBM does these calculations for you, using SMF Type 70 records from every MVS-based LPAR (or VM guest) in the CPC

o The Sysplex Calculator tool PLEXCALC is a no-charge tool provided on an as-is basis that can be downloaded from the following Web site:

http://www.ibm.com/servers/eserver/zseries/swprice/sysplex/sysplex_calc.html

Important Note: to ensure that your installation is always in compliance with the 50% rule, it is highly recommended that you run PLEXCALC on a regular basis!

• Documentation

- Restatement of Criteria for aggregated Sysplex, ENUS204-268

- z/OS Systems Programmers Guide to Sysplex Aggregation, REDP-3967-00

Sysplex Aggregation Verification

Sysplex Aggregation Verification

• As Workloads change, Customers with eligible Environments should regularly verify their Compliance with the Aggregation Rules in Order to avoid any Inconveniences

• Therefore, it is highly recommended that those Customers frequently run the PLEXCALC Tool described previously

• Recommendation: run PLEXCALC on a weekly basis

• To run the PLEXCALC Tool, do the following Steps:

1.) Collect all RMF SMF Type 70 Records for all Systems on all Processors

2.) Run the PLEXCALC Utility using the SMF Type 70 Records collected in Step 1

3.) Download the PLEXCALC Output Dataset as Type .csv File to your Workstation

4.) At your Workstation, open the .csv File with MS Excel

5.) Save the Excel Sheet created in Step 4


VSAM RLS Lock Contention Avoidance

VSAM RLS Lock Contention Avoidance

• Background • In a Parallel Sysplex, VSAM Record Level Sharing (RLS) uses the Coupling Facility to allow VSAM Data to be shared on a Record Level

• Serialization for protecting Data Integrity is done by using Locks and Latches

• Like any other VSAM Dataset, a VSAM Dataset shared in RLS Mode can be backed up with DF/DSS (ADRDSSU)
• At the Beginning of the Backup Process, DF/DSS (ADRDSSU) triggers a Quiesce Event – either a Copy or a Backup-while-open
• This will cause the Quiesce Request to be broadcast to all Systems – and require a Response from each

• In order to process the Request, it is necessary for each SMSVSAM Address Space to look up the Spheres, then determine any Subsystems that are connected
• This results in a SHCDS LISTSUBSYS Command being issued internally to display a Subsystem, and look up the associated Spheres

VSAM RLS Lock Contention Avoidance

VSAM RLS Lock Contention Avoidance

• Possible causes of lock contention

- Displaying a subsystem and looking up spheres requires lock and latch services to be used for serialization

- In certain circumstances, depending on the VSAM RLS-related activity and the commands being processed in parallel, this may lead to latch contention

- Recently, in a customer environment this resulted in intermittent hangs of batch jobs regularly used to take DF/DSS backups of CICS VSAM RLS data sets

- Detailed problem analysis showed that these intermittent hangs were caused by the quiesce events triggered by DF/DSS being blocked due to a latch contention

- This latch contention was the result of a SHCDS LISTSUBSYS command being issued in parallel with the quiesce event triggered by ADRDSSU


VSAM RLS Lock Contention Avoidance

• Possible causes of lock contention…

- Further analysis then showed that the SHCDS LISTSUBSYS command was issued internally to provide output for a D SMS,SMSVSAM command

- At the end of the chain, the D SMS,SMSVSAM command was issued by a REXX exec that automation runs every 2 minutes to verify whether SMSVSAM is still active

- As a result, automation was changed to issue a D A,SMSVSAM command instead of a D SMS,SMSVSAM command to verify the presence of the SMSVSAM address spaces (see the sketch after this list)

- Since this change, no more hangs have been experienced while taking DF/DSS backups of VSAM RLS data sets
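
The following is a minimal REXX sketch of the kind of check the automation exec can perform: given the captured response of a D A,SMSVSAM command, it looks for the "NOT ACTIVE" text that MVS returns when the address space does not exist. How the command is issued and its response captured depends entirely on the automation product in use, so that part is deliberately left out; testing only for the "NOT ACTIVE" text keeps the check independent of the exact message identifier.

  /* REXX - minimal sketch: decide from the captured response of a   */
  /* D A,SMSVSAM command whether SMSVSAM is up.  The response text   */
  /* is simply passed in (or defaulted for a dry run) as a string.   */
  parse arg response
  if response = '' then
    response = 'SMSVSAM NOT ACTIVE'         /* sample text for a dry run */
  if pos('NOT ACTIVE', translate(response)) > 0 then do
    say 'SMSVSAM address space is not active - alert operations'
    /* ... trigger your alert or restart action here ...              */
  end
  else say 'SMSVSAM address space appears to be active'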



Static System Symbol Updates using the IEASYMUP Utility

• Normally, an IPL is required to change static system symbols defined in the IEASYMxx parmlib member

• In OS/390 and earlier z/OS releases, a program called SYMUPDTE was provided as a free download

• SYMUPDTE allowed you to dynamically change static system symbols

• SYMUPDTE is described in Redbook SG24-5451-00

• With z/OS 1.6 and later releases, a new program providing a similar function is part of the z/OS base

• The new program is called IEASYMUP and is shipped as an object deck in SYS1.SAMPLIB

• To use IEASYMUP, it has to be link-edited into an authorized library

• It is recommended that you install IEASYMUP as an SMP/E usermod

• For more information on the installation and use of IEASYMUP, refer to Appendix B of the following Redbook: z/OS Planned Outage Avoidance Checklist, SG24-7328-00

• Note: the description of IEASYMUP in SG24-7328-00 also provides a sample SMP/E usermod (a small verification sketch follows)
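
Once IEASYMUP has changed a symbol, the new value can be verified without an IPL. The following is a minimal TSO/E REXX sketch that displays the current value of a static system symbol using the MVSVAR('SYMDEF',...) built-in function; the symbol name MYSYM is purely hypothetical, and an empty result is treated here as "not defined".

  /* REXX (TSO/E) - minimal sketch: display the current value of a   */
  /* static system symbol, e.g. to verify the result of an IEASYMUP  */
  /* update.  The symbol name MYSYM is hypothetical.                 */
  sym = 'MYSYM'
  val = mvsvar('SYMDEF', sym)
  if val = '' then say 'Symbol &'sym'. is not defined (or has no value) on this system'
  else say 'Symbol &'sym'. currently resolves to:' val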




Recommended Readings

Redbooks and Redpapers

• Server Time Protocol NTP Client Support, REDP-4329-00

• z/OS V1R9 Implementation, SG24-7427-00

• HFS to zFS Migration Tool, REDP-4328-00

• Monitoring System z Cryptographic Services, REDP-4358-00

• IBM System z10 EC Technical Introduction, SG24-7515-00

• IBM System z10 EC Technical Guide, SG24-7516-00

• Getting started with Infiniband on System z10 and System z9, SG24-7539-00

• IBM System z10 Enterprise Class Capacity on Demand, SG24-7504-00

• IBM System z Connectivity Handbook, SG24-5444-08

• Server Time Protocol Planning Guide, SG24-7280-01

• Server Time Protocol Implementation Guide, SG24-7281-01

• ABCs of z/OS System Programming Volume 6, SG24-6986-00

• Multiple Subchannel Sets: An Implementation View, REDP-4387-00

• IBM System Storage DS8000 Series: IBM FlashCopy SE, REDP-4368-00




Silvio’s Corner Doc Jukebox

The following documents will also be supplied as part of the GSE z/OS Expert Forum CH handouts CD:

• z/OS Hot Topics Newsletter #18 (February 2008), GA22-7501-14

• z/OS V1R9 Parallel Sysplex Test Report December 2007, SA22-7997-06

• SMP/E 3.3 and 3.4 users require SMP/E APAR to install z/OS service, FLASH10639

• HyperPAV Analysis Case Study, PRS3119

• z/OS SMF Recording with MVS Logger, WP101130

• z/OS use of System z HiperSockets, TC000016

• Running IBM System z at high Utilization V1, WP100208

• z/OS Availability: Blocked Workload Support, FLASH10609

• Mainframe as a Green Machine and more (WinterGreen Research)

• Asynchronous DB2 Data Sharing Locks not counted V2, FLASH10632

• z/OS Basic HyperSwap and GDPS HyperSwap Overview, PRS3168

• z/OS Basic HyperSwap High Availability Options Trifold, PRS3165

• Introduction to Data on System z, WP101226

All documents listed on this foil will be provided on the GSE z/OS Expertforum CH homepage


Last, but not least…

How to get the Handouts for this Session

• The handouts for this session and the documents listed in the “Doc Jukebox” will be provided on the GSE z/OS Expert Forum CH homepage:

http://www.thebrainhouse.ch/gse/



The End...

