NUMA-Q Enabled for S/390: Technical Introduction

What is NUMA-Q?

Running VSE, VM, and OS/390 on NUMA-Q

Setup, customization, operation, and results

Bill Ogden, Erich Amrehn, Peter Cabaj, Gary Eheman, Mike Hammock, Steve Lee, David MacMillan, Mark Majhor, James Reynolds, Roger Thibault

ibm.com/redbooks

International Technical Support Organization SG24-6215-00

NUMA-Q Enabled For S/390: Technical Introduction

December 2000

Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix C, “Special notices” on page 139.

First Edition (December 2000)

This edition applies to the software current at the time of publication, as described throughout this document.

Comments may be addressed to: IBM Corporation, International Technical Support Organization Dept. HYJ Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 2000. All rights reserved. Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

Contents

Preface ...... vii
The team that wrote this redbook ...... vii
Comments welcome ...... viii

Chapter 1. Introducing S/390 on NUMA-Q ...... 1
1.1 General concepts ...... 1
1.1.1 ITSO project configuration ...... 1
1.2 Positioning ...... 2
1.3 Advantages ...... 2
1.4 Limitations and concerns ...... 3
1.5 Typical uses ...... 4
1.6 Skills and education needed ...... 5
1.6.1 Terminology ...... 5
1.7 IBM software licensing ...... 6
1.8 Product documentation ...... 6

Chapter 2. A closer look at NUMA-Q ...... 9
2.1 Hardware overview ...... 9
2.2 DYNIX/ptx ...... 11
2.3 NUMA-Q Enabled For S/390 (NUMA-Q EFS) ...... 11
2.4 Operation ...... 11
2.5 Service ...... 12
2.6 Upgradability ...... 12
2.7 Advantages ...... 12
2.8 System used for this project ...... 13

Chapter 3. FLEX-ES ...... 15
3.1 General concepts ...... 15
3.2 Processor emulation ...... 16
3.3 I/O devices (emulated and real) ...... 18
3.4 Configuration and control ...... 20
3.5 Operation ...... 21
3.5.1 Terminal Solicitor ...... 24
3.5.2 CLI facilities ...... 24
3.6 FSI Integrated Communications Adapter (ICA) ...... 25
3.7 FSI Parallel Channel Adapter (PCA) ...... 26
3.7.1 Need for Contiguous Memory ...... 27

Chapter 4. Typical customization tasks and considerations ...... 29
4.1 Planning the configuration ...... 29
4.1.1 I/O device configuration ...... 29
4.1.2 Planning the processor configuration ...... 30
4.1.3 The ITSO project configuration ...... 30
4.2 Configuring emulated disk volumes ...... 33
4.2.1 A brief introduction to SVM ...... 33
4.2.2 Defining the disk space to DYNIX/ptx ...... 35
4.2.3 Defining volumes greater than 2 GB ...... 35
4.2.4 Initializing the volumes for FLEX-ES ...... 36
4.2.5 Initializing the emulated volumes for use by S/390 ...... 37
4.2.6 Loading S/390 data on an emulated disk volume ...... 37
4.3 Adding LAN connections ...... 37

4.3.1 Adding LAN adapters ...... 37
4.3.2 Configuring LAN adapter ports to DYNIX/ptx ...... 38
4.3.3 Configuring LAN adapter ports for FLEX-ES ...... 38
4.3.4 Allocating LAN adapter ports to S/390 images ...... 38
4.4 Tape drives ...... 40
4.4.1 General SCSI tape drive considerations ...... 40
4.4.2 Installing SCSI tape drives ...... 42
4.5 Multiple S/390 CPU complexes (FLEX-ES instances) ...... 43
4.5.1 Building a multi-instance configuration ...... 43
4.5.2 Structure of multi-instance configuration file ...... 44
4.5.3 Starting and running multiple instances ...... 45
4.5.4 Multi-instance considerations and suggestions ...... 45
4.5.5 How multiple FLEX-ES instances compare to LPARs ...... 47
4.6 Virtual CTC connections ...... 48
4.6.1 Defining virtual CTC connections ...... 48
4.6.2 Associating virtual CTC with an S/390 instance ...... 48
4.6.3 Using a virtual CTCA ...... 49
4.7 Memory use ...... 49
4.7.1 S/390 expanded memory ...... 49
4.7.2 Processor Cache ...... 50
4.7.3 Disk Data Cache ...... 50
4.7.4 S/390 central storage ...... 50

Chapter 5. VSE/ESA on NUMA-Q EFS ...... 55
5.1 Planning an installation ...... 55
5.2 Basic VSE/ESA migration or installation ...... 55
5.3 Installing the AD CD-ROM ...... 56
5.3.1 Shell script ...... 56
5.3.2 Terminal Solicitor ...... 56
5.4 Multi-system setup ...... 57
5.5 Connectivity ...... 57
5.5.1 TCP/IP for VSE/ESA ...... 57
5.5.2 Integrated communications adapter ...... 58
5.5.3 Emulated channel-to-channel adapter ...... 58

Chapter 6. VM/ESA on NUMA-Q EFS ...... 61
6.1 General ...... 61
6.2 Configuration ...... 61
6.3 Installation ...... 62
6.4 VM/ESA IPL ...... 62
6.4.1 Recustomization for NUMA-Q ...... 62
6.5 Differences when running VM/ESA on the NUMA-Q ...... 63
6.5.1 Use of expanded storage ...... 63
6.5.2 I/O measurements ...... 63
6.6 Running guest operating systems ...... 64
6.7 Access to VM/ESA files from NUMA-Q ...... 64

Chapter 7. OS/390 on NUMA-Q EFS ...... 67
7.1 Planning and installation ...... 67
7.1.1 Package selection ...... 67
7.1.2 OS/390 device configuration ...... 68
7.2 Operation and use ...... 70
7.2.1 IODF requirements ...... 71
7.2.2 System performance monitors ...... 73
7.2.3 Security ...... 73
7.2.4 Parallel Channel Adapter (PCA) ...... 73
7.2.5 Multi-system setup ...... 74
7.2.6 TCP/IP for OS/390 ...... 75
7.2.7 FLEX-ES FakeTape on OS/390 ...... 75

Chapter 8. Loading CD-ROM systems ...... 77
8.1 Basic CD-ROM formats ...... 77
8.2 FLEX-ES formats ...... 79
8.3 Obtaining an UNZIP program ...... 79
8.4 ptx volume names ...... 79
8.5 Installing the OS/390 AD CD-ROM system ...... 79
8.5.1 System device layout ...... 80
8.5.2 Installation tasks ...... 80
8.6 Installing the VM/ESA AD CD-ROM system ...... 83
8.6.1 System device layout ...... 84
8.6.2 Installation tasks ...... 84
8.7 Installing the VSE/ESA AD CD-ROM system - CKD format ...... 88
8.7.1 System device layout ...... 88
8.7.2 Installation tasks ...... 88
8.8 Installing the VSE AD CD-ROM system - FBA format ...... 91
8.8.1 System device layout ...... 91
8.8.2 Installation tasks ...... 92
8.9 Automating loading of CD-ROM systems ...... 92

Chapter 9. Typical configurations ...... 97
9.1 Typical system configuration for NUMA-Q EFS ...... 97
9.2 Multiple application environment configuration ...... 99

Chapter 10. Additional topics ...... 101
10.1 External fibre channel disks ...... 101
10.2 Security ...... 101
10.3 Tape tricks ...... 102
10.4 Traces ...... 104
10.4.1 Examples of trace commands ...... 105
10.4.2 Disk space administration ...... 105
10.4.3 Using sar ...... 105
10.5 Mini-root backup ...... 105

Chapter 11. Frequently asked questions ...... 107

Appendix A. Configuration file listings ...... 115
A.1 FLEX-ES configuration for OS/390 - single instance ...... 115
A.2 FLEX-ES configuration for OS/390 - multiple instances ...... 117
A.3 FLEX-ES configuration for VM/ESA ...... 118
A.4 FLEX-ES configuration for VSE/ESA - CKD format ...... 120
A.5 FLEX-ES configuration for VSE/ESA - FBA format ...... 122
A.6 FLEX-ES configuration for combined systems ...... 124

Appendix B. The Storage Volume Manager (SVM) ...... 131
B.1 Disks, data groups, and volumes ...... 131
B.2 Modifying system SWAP volumes ...... 131
B.3 Creating disk group for S/390 ...... 133
B.3.1 Changing SVM Defaults ...... 133

B.4 Allocating S/390 volumes ...... 133
B.5 Some other useful SVM commands ...... 134
B.5.1 Failed disk recovery ...... 136
B.6 Additional configuration tasks ...... 137
11.1 Further information ...... 138

Appendix C. Special notices ...... 139

Appendix D. Related publications ...... 141
D.1 IBM Redbooks ...... 141
D.2 IBM Redbooks collections ...... 141
D.3 Other resources ...... 141
D.4 Referenced Web and Internet sites ...... 142

How to get IBM Redbooks ...... 143
IBM Redbooks fax order form ...... 144

Index ...... 145

IBM Redbooks review...... 149

Preface

NUMA-Q is a well-established IBM product in the very high-end market. Using appropriate S/390 emulation software products, a NUMA-Q system can emulate a smaller S/390, including many of the common I/O units associated with an S/390. This IBM redbook briefly introduces the NUMA-Q system, and then describes its use with OS/390, VM/ESA, and VSE/ESA under such S/390 emulation.

This IBM redbook assumes the reader is familiar with S/390 concepts and terminology and has a user-level familiarity with traditional UNIX.

The team that wrote this redbook

This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization Poughkeepsie Center.

Bill Ogden was an IBM Systems Engineer for many years, in IBM’s World Trade organization. Bill specializes in entry-level OS/390 systems (including all of OS/390’s predecessors) and has continued this work, part time, with the ITSO since his formal retirement.

Erich Amrehn is an ITSO staff member on assignment from IBM Germany. He specializes in VM, VSE, and (more recently) S/390 topics.

Peter Cabaj is a Senior IT Specialist working for IBM Global Services in Canada. He has 20 years of experience in MVS and OS/390 systems. He holds a Master’s degree in Computer Sciences.

Gary Eheman is a senior representative in IBM NUMA-Q and S/390 Enterprise Systems Technical Marketing and has a long history of working with both VM and VM/VSE systems and with entry-level S/390 systems.

Mike Hammock is a retired IBMer currently working for IntelliWare Systems, an IBM Business Partner which focuses on the small S/390 marketplace. Mike has extensive experience in entry-level S/390 systems within both IBM and IntelliWare and has worked on FLEX-ES based systems for the past year.

Steve Lee is a member of the S/390 Enterprise Systems Technical Marketing team. He specializes in VSE, VM, and networking on S/390 machines.

Dr. David M. MacMillan is with Fundamental Software, Inc. of Fremont, California, the company which produces the emulation software enabling NUMA-Q EFS. With degrees in both Computer Science and English Literature, as well as 20 years of experience in UNIX, he specializes in documentation and Web development.

Mark Majhor is a member of the S/390 Product Marketing Team who specializes in NUMA-Q and various database software running on NUMA-Q. He also has 20 years of experience working with UNIX and very large database systems.

James Reynolds is a member of the S/390 Market Development and Support team. He has many years of both technical and marketing experience in VSE, VM, and S/390 systems.

Roger Thibault is currently working as a VM support specialist with the IBM Support Centre (ISC) in Canada.

Comments welcome

Your comments are important to us!

We want our Redbooks to be as helpful as possible. Please send us your comments about this or other Redbooks in one of the following ways:
• Fax the evaluation form found in “IBM Redbooks review” on page 149 to the fax number shown on the form.
• Use the online evaluation form found at ibm.com/redbooks.
• Send your comments in an Internet note to [email protected].

Chapter 1. Introducing S/390 on NUMA-Q

This redbook uses the names of older products, such as S/390, VM/ESA, and OS/390, instead of the names of newer products, such as zSeries Server, z/VM, and z/OS. Much of this book discusses NUMA-Q solutions for existing S/390 customers, often using older hardware and software. To avoid a confusing mixture of terms, the older names are used consistently throughout this document.

This document provides a general technical introduction to using S/390 solutions on NUMA-Q platforms. The formal name, NUMA-Q Enabled For S/390, is rather long and we will refer to it as NUMA-Q EFS in this redbook.

1.1 General concepts

NUMA-Q is the name of a series of IBM servers. These servers are based on Intel Pentium processors, and use DYNIX/ptx as the native operating system. The NUMA-Q machines and operating system are described in some detail in Chapter 2. These servers are enabled for S/390 by software that emulates S/390 functions. This software, provided by Fundamental Software, Inc. of Fremont, California, is described in Chapter 3.

S/390 processor emulation can use multiple Pentium processors, emulating an S/390 multiprocessor. There is a one-to-one relation between the number of Pentium processors involved and the number of (emulated) S/390 CPUs available to the S/390 operating system.

The emulated S/390 functions include the emulation of common S/390 I/O devices such as 3390 disk drives, LAN connections, tape drives, printers, and so forth. In addition, “real” S/390 control units and I/O devices can be connected through S/390 channels. The emulated DASD functions are especially attractive and typically provide better performance than “real” CKD disks.

Connectivity for 3270 sessions is provided through TN3270 sessions, over TCP/IP, to the emulation software. These TN3270 TCP/IP sessions are transformed, by the emulation software, to appear as local, non-SNA 3270s to the S/390 operating system. These sessions can function as system consoles, VTAM terminals, and so forth. In addition, IBM 3174 control units (or other similar control units) may be channel connected and used to drive “real” coax-attached 3270 terminals.
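Before pointing a TN3270 client at the emulated system, it can help to confirm that a TCP listener is actually reachable. A minimal Python sketch of such a check — not part of the EFS tooling; the host and port are assumptions, since the port the emulation software listens on is site-configured:

```python
import socket

def tn3270_port_open(host: str, port: int = 23, timeout: float = 3.0) -> bool:
    """Return True if a TCP listener (for example, the emulator's TN3270
    service) accepts a connection on host:port within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

A client session would then be opened with any TN3270 emulator against the same host and port; the session appears to the S/390 operating system as a local, non-SNA 3270.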

No modifications to the S/390 operating systems or customer applications should be needed to run in the NUMA-Q EFS environment. During our usage at the ITSO, we found this was the case.

1.1.1 ITSO project configuration

NUMA-Q machines are available in many different configurations. In particular, there are a variety of disk drives and controllers available. For the work described in this redbook, we used a system that did not use a hardware RAID adapter. We used full mirroring of our disks, and used striping for each logical disk. The mirroring and striping were performed by the SVM component of DYNIX/ptx. This worked quite well, but the initial setup was more complex than for a system that uses hardware for RAID recovery and striping. The specific disk setup we used is described in Appendix B. We expect that many customer systems will use fibre-attached solutions that use a hardware RAID implementation.
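The capacity cost of the software mirroring described above can be estimated with simple arithmetic. This is our own back-of-envelope sketch, not a ptx/SVM calculation; it assumes equal-sized disks and full mirroring of every striped volume:

```python
def usable_capacity_gb(disks: int, disk_gb: float, mirrored: bool = True) -> float:
    """Full mirroring keeps two copies of everything, halving raw capacity.
    Striping spreads data across disks for throughput but costs no space."""
    raw = disks * disk_gb
    return raw / 2 if mirrored else raw
```

For example, eight 9 GB drives yield about 36 GB of mirrored, striped space for emulated S/390 volumes.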

1.2 Positioning

The S/390 offerings based on NUMA-Q provide the lowest-cost current S/390 platforms. In terms of the most immediate predecessors and comparisons, the recommended NUMA-Q solutions fit as shown in Figure 1.

[Figure 1 plots NUMA-Q EFS, Multiprise 3000, and zSeries on an arbitrary logarithmic performance scale, with NUMA-Q EFS at the low end.]

Figure 1. Positioning NUMA-Q Enabled For S/390 performance

The recommended NUMA-Q EFS solutions fit below the smallest Multiprise 3000 machine. Earlier machines, such as most Multiprise 2000 systems and almost any machine earlier than these, are at the lower end of the optimum NUMA-Q EFS range. A NUMA-Q system could provide a substantial upgrade for such customers, both in processor performance and in DASD capacity and performance.

1.3 Advantages

There are a number of advantages to using S/390 emulation with a NUMA-Q machine. These include:
• The power of the emulated S/390 system can be adjusted by changing the number of NUMA-Q processors used. The licensing cost of some S/390 software can be adjusted to meet the S/390 power being emulated. These effects should allow the customer to customize his hardware and software costs to better fit his needs. More traditional mainframe solutions tended to have coarser granularity of performance and more software licensing steps.
• The ability to emulate traditional “local” S/390 devices (such as disk drives and tape drives) using TCP/IP network connections to other NUMA-Q processors provides a distributed solution unlike anything typically found in traditional S/390 installations. Due to time constraints, we did not explore this capability in the ITSO project that produced this redbook. However, we recognize that considerable potential exists in this area and that many customers are interested in functions that facilitate transparent geographic distribution of traditional S/390 processing.
• The use of TN3270 sessions in place of locally-attached, non-SNA 3270 terminals provides ease of use (simple VTAM definitions for terminals) with wide area access (via TCP/IP functions external to the S/390 being emulated).
• New S/390 instructions and functions can be installed simply by installing a new release of the S/390 emulation program, with no hardware changes required.
• The use of Intel components and software emulation can provide a platform for an S/390 solution at an attractive price. The NUMA-Q solutions discussed in this document are in a performance range between a P/390 system (no longer marketed) and a Multiprise 3000.
• Modern application solutions often involve mixtures of computing platforms. The NUMA-Q solution can run traditional UNIX applications (including Linux applications), Windows NT, and S/390 in the same “box”, with high-speed interconnects. Larger NUMA-Q configurations are very powerful machines, capable of effectively running such mixed workloads.
• Multiple S/390 images can be run in the same NUMA-Q system. The effect is similar to LPARs in a “real” S/390, although the details are somewhat different.

1.4 Limitations and concerns

There are limitations and concerns for this approach. Some of these are inherent in the concepts of the system, while others are current (as of December 2000) limitations in the implementation.

The inherent concerns include the following:
• S/390 customers must learn a new operating system (DYNIX/ptx), plus a moderately complex total package (including FLEX-ES) in order to run their S/390 solutions. Both the operating system and the S/390 emulation functions have administrative requirements that are unfamiliar to traditional S/390 users. Surveys have shown that most S/390 enterprises also have UNIX skills already on staff, and reuse of that skill helps mitigate this issue.
• The use of an emulated S/390 instead of a real S/390. This concern may be one of perception rather than reality. Real S/390s implement some of their instructions (and almost all of their I/O functionality) through internal code running on an internal microprocessor. The NUMA-Q EFS machines do the same at a more visible level. Nevertheless, the perception of emulated versus real can be an issue when evaluating systems.
• The underlying hardware of the NUMA-Q processors (based on Intel Pentium III quads) does not have the degree of internal error checking present in “real” S/390 processors,1 and does not have the associated S/390 hardware recovery functions.

1 This excludes the P/390-based systems, which have little internal S/390 error checking. Also, the Multiprise 3000 systems do not fully implement all the error recovery features of the larger S/390 processors. (In particular, the Application Preservation function is not available.)

• Multiple Pentium processors can be used for S/390 emulation; these correspond to multiple S/390 CPUs in a multiprocessor configuration. A single S/390 task is limited to the performance of a single Pentium processor emulating S/390 instructions. (This effect is the same on a traditional S/390; a single task is limited to the speed of a single processing unit.)

Current implementation restrictions, subject to change, include the following:
• External parallel channels (using a PCI adapter card from Fundamental Software) can be used, but currently cannot connect to external S/390 DASD devices.
• If parallel channels are used, the largest S/390 central storage that can be defined is limited to slightly less than 512 MB.
• No S/390 ESCON or FICON connections are available. Just as our redbook project ended, FSI posted a statement of direction for ESCON on their Web site. You should check the www.funsoft.com site for more details.
• No Coupling Facility channels or functions are available.
• IEEE (“binary”) floating point instructions are not emulated. (OS/390 detects the absence of binary floating point “hardware” and provides its own emulation.)
• No cryptographic hardware assistance functions are available.
• S/390 instruction sets beyond the first level set (corresponding to OS/390 V2R10) are not available. (The 64-bit capabilities of the zSeries machines are considered to be after this level set, and are not available in the NUMA-Q S/390 emulation.)
• The OSA Express hardware functions (including the QDIO channel programming technology) are not available.
• No simple emulation interface is available for connecting a few SDLC lines to OS/390 VTAM.2 (VM/ESA and VSE/ESA can use the ICA emulation provided, but OS/390 does not support this.)
• SNA LAN connections are supported only with token ring.

Again, all of these current implementation restrictions are subject to change. If you have major concerns about these topics you should obtain the most current information from your NUMA-Q supplier. At the time of writing, the EFS design and implementation was in the process of being extended. Additional information should be available throughout 2001.

1.5 Typical uses

There are several typical uses of NUMA-Q EFS solutions. These include:
• Replacing existing, smaller S/370 and S/390 machines. The NUMA-Q base can provide a substantial performance upgrade for these, while remaining in the S/390 entry-level price ranges.
• Providing a single platform for mixed solutions. The ability to run S/390, UNIX (including Linux) applications, and Windows NT applications (with high-speed interconnections between the applications) in the same box can be very attractive. Server consolidation has become a major objective in many sites. The scalability of the NUMA-Q platform is one of its major design attributes.
• Disaster recovery, with a single NUMA-Q machine providing UNIX, Windows NT, and S/390 backup.

2 The P/390-based systems and the Multiprise 3000 provide the WAN3172 emulation function for this purpose. In general, this interface transforms up to 18 SDLC line connections so they appear as SNA LAN connections to VTAM.

1.6 Skills and education needed

Installation and use of a NUMA-Q EFS system requires a number of skills. These include:
• The normal S/390 application, administration, operation, and systems programming skills that would be needed for any S/390 running the same operating system and applications.
• FLEX-ES configuration and administration skills. These are not difficult, and the FLEX-ES documentation may be sufficient. This redbook provides a reasonably complete overview of this area.
• Understanding specific FLEX-ES hardware options. These include S/390 channel adapters and SDLC/BSC adapters.
• DYNIX/ptx SVM skills. SVM is the logical volume manager for DYNIX/ptx. SVM functions are needed to define and manage disk areas for emulated S/390 DASD devices. SVM is a moderately large topic and we only scratch the surface in this redbook. The SVM manual goes into more detail, but additional education relating SVM use to specific S/390 setup may be necessary.
• General DYNIX/ptx skills, including reasonable competence with the vi editor. A competent, traditional UNIX administrator should be able to pick up the DYNIX/ptx administration functions without a major effort.
• Understanding general NUMA-Q hardware and options.

Once an EFS system is operational and in production, this list is mostly reduced to the first item: normal S/390 skills. IBM intends to provide classes that combine FLEX-ES, DYNIX/ptx, SVM, and NUMA-Q hardware materials. The basic UNIX skills can come from many sources.

The requirement for skills other than S/390 is strongly influenced by several factors. Installation assistance by an IBM business partner, especially in a stable production environment, can substantially reduce the need for the non-S/390 skills. Conversely, a development environment that experiments with a variety of configurations and options will need all the skills listed here.

1.6.1 Terminology

A NUMA-Q EFS system involves traditional S/390 terminology and traditional UNIX terminology, and such terms are used without further definition throughout this redbook. Several new terms, used frequently in this redbook, require some explanation to prevent confusion.

The term CPU complex, in the NUMA-Q EFS context, means an emulated S/390. The emulated S/390 may have multiple S/390 CPUs emulated within that S/390. The term FLEX-ES instance means the same thing as CPU complex. Both terms are used in this book (and in the FLEX-ES documentation) and should be regarded as synonyms. An EFS system may have multiple CPU complexes (or FLEX-ES instances) active. While reading this redbook, do not interpret the term CPU complex (or S/390 CPU complex) to mean the hardware involved in a traditional S/390 mainframe installation.

The term volume has a very well-defined meaning to S/390 customers. A DYNIX/ptx system (and this document) uses the term ptx volume, or SVM volume, or ptx/SVM volume (these three forms are synonyms) for a completely different purpose, as part of a logical volume manager for DYNIX/ptx. While reading this redbook (or while reading SVM documentation), be careful, because both ptx volumes and S/390 volumes are discussed in different parts of the book and should not be confused.

1.7 IBM software licensing

Several licenses are involved for the basic system. A license is required for DYNIX/ptx; this can be a minimal license because S/390 users are not directly seen by DYNIX/ptx. A license is required for FLEX-ES, to provide the S/390 enablement functions, and is delivered with the solution. This license is unique to each NUMA-Q machine and specifies the number and speed of the Pentium processors to be used.

Licenses are required for any IBM operating systems and for any of the normal program products used. The exact details were not available at the time of writing and should be obtained through your IBM or IBM business partner representatives.

1.8 Product documentation

For functional S/390 usage, normal S/390 product documentation applies to EFS. Additional documentation applies to the NUMA-Q hardware, DYNIX/ptx, and FLEX-ES.

All the NUMA-Q and DYNIX/ptx documentation is available through a Web site. The starting point is webdocs.sequent.com (note that this URL does not begin with www). This Web page lists functional groups of NUMA-Q and DYNIX/ptx documentation and offers links to the various manuals. In general, each Web page of a document corresponds to a chapter in the document.

The URL webdocs.sequent.com/docs/ produces a directory listing of all the available documents for NUMA-Q and DYNIX/ptx. This directory listing is not very readable because it lists only the document numbers, such as dpugab00, dpa0aa02, and so forth. If you select one of the form numbers (using your Web browser), the next page contains a list of all the chapters in the document.3 You can view these on your Web browser or print them on a typical PC printer. By printing each chapter in a document, you obtain a reasonably good, complete printed document. We say “reasonably good” because there are no indexes and the tables of contents do not contain page numbers.

Using this method, we printed the following documents and found them useful for learning about DYNIX/ptx:

Form number   Title
dpugab00      DYNIX/ptx User’s Information
dpa0aa02      DYNIX/ptx System Administration Guide
dpcpad00      DYNIX/ptx System Configuration and Performance Guide
dprtac00      DYNIX/ptx System Recovery and Troubleshooting Guide
dpinaa01      ptx/ Software Installation Guide
dpidaa01      DYNIX/ptx Implementation Differences
tcpaae00      ptx/TCP/IP Administration Guide
svagad04      ptx/SVM Administration

3 These are, in effect, the same chapter “pages” you can view by starting at webdocs.sequent.com.

Some of the documents are long and we strongly suggest using a printer with duplex (print on both sides) capability.

The location and availability of FLEX-ES documentation was in transition at the time of writing. We assume this will be resolved about the time this redbook is published and will be covered in the EFS announcement materials. The FLEX-ES documentation is well-written and quite readable.

Chapter 2. A closer look at NUMA-Q

NUMA-Q is the hardware base for the EFS product. This chapter briefly describes NUMA-Q hardware and its operating system, DYNIX/ptx.

2.1 Hardware overview

The IBM NUMA-Q system is part of the xSeries family of IBM e(logo)servers. The NUMA-Q server uses CC-NUMA (Cache Coherent - Non-Uniform Memory Access) architecture, providing an advanced technique for producing larger symmetrical multi-processing (SMP) systems. The basic building block of a NUMA-Q system is four Intel processors, together with memory and I/O, known as a quad. Memory and I/O are located close to the processors to maximize overall system performance by having all memory and I/O accesses within that same quad whenever possible.

Figure 2. The NUMA-Q quad

[The figure shows one quad: four Intel Pentium III Xeon processors on a 720 MB/sec embedded system bus, with local memory, a PCI bridge and PCI/EISA bridge feeding PCI I/O slots on 264 MB/sec PCI buses, and the IQ-Link interconnect to other quads.]

Each quad consists of:
• Four Pentium III (Xeon) processors (currently running at 700 MHz). With the quad design, the NUMA-Q system ranges from the smallest (one quad - four processors) to the largest system (16 quads - 64 processors).
• 1 to 8 GB memory per quad - maximum 64 GB per system.
• 7 PCI slots for I/O per quad, with throughput up to 300 MB/sec.
• High-speed interconnection between quads via the “IQ-Link”, which connects two or more quads together at speeds up to 1 GB/sec.1 Another direct hardware connection for linking two quads is also available - but this second type of link cannot be expanded beyond the first two quads, while the IQ-Link can be expanded as each additional quad is installed in the system.
• An internal bus (720 MB/sec) linking the internal components together.
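The scaling figures in the list above can be combined into a small sanity-check calculation. This sketch is ours (the function and the treatment of the 64 GB system ceiling are assumptions based on the figures quoted, not a NUMA-Q configurator):

```python
def system_totals(quads: int, gb_per_quad: int = 8) -> tuple[int, int, int]:
    """CPUs, memory (GB), and PCI slots for a NUMA-Q of `quads` quads:
    4 CPUs, up to 8 GB of memory, and 7 PCI slots per quad; at most
    16 quads and 64 GB of memory per system."""
    if not 1 <= quads <= 16:
        raise ValueError("a NUMA-Q system has 1 to 16 quads")
    cpus = quads * 4
    memory_gb = min(quads * min(gb_per_quad, 8), 64)
    pci_slots = quads * 7
    return cpus, memory_gb, pci_slots
```

Note that at 16 quads the per-system memory ceiling, not the per-quad maximum, is the binding limit: 16 quads of 8 GB each would otherwise total 128 GB.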

NUMA-Q uses multiple fibre connections from the quads to various DASD solutions - both internal and external - including Enterprise Storage System (ESS) (also known as “Shark”). Fibre offers very high speed access at 110 MB per second.

The system can be partitioned2 on a quad boundary to run separate server operating systems in the different partitions. For example, separate instances of DYNIX/ptx might run in two partitions, while another partition might run Windows NT - separated on a quad boundary. (The S/390 emulation that is the focus of this book would run under any or all of the DYNIX/ptx partitions mentioned in this example.) Partitioning is not required. All the processors (in all the quads in a NUMA-Q machine) can run as a single SMP under a single copy of DYNIX/ptx. For the ITSO project described in this redbook, we used an unpartitioned machine.

Windows NT (or Windows 2000) in this environment currently runs in a rack-mounted model of the Netfinity server contained within the NUMA-Q’s enterprise cabinet and controlled by additional NUMACenter management functions. In the future, Windows NT will be made available to execute natively on the NUMA-Q hardware - but still on a quad boundary. Windows NT quads will not share processors or memory with the DYNIX/ptx quads. Windows NT is not a required part of a NUMA-Q solution. However, Windows NT operation can be consolidated into the same hardware frame as DYNIX/ptx, Linux applications, and S/390 operations, providing a more integrated total server solution.

A console PC is provided with the NUMA-Q for administration, management, and diagnostic services for the system. The console runs the Windows NT operating system and uses the Virtual Console Software (VCS) for management of the NUMA-Q.

The console PC is connected to the NUMA-Q system through a private Ethernet called the Management and Diagnostics Controller (MDC) network. Each quad in the system is connected to the MDC through a private 10base2 Ethernet connection.

1 Note that this is 1 gigabyte per second, not 1 gigabit per second.
2 Partitioning, in this discussion, is accomplished by removing the memory interconnect (IQ-Link) between quads.

Although the console PC is a general-purpose computer, it should be dedicated to managing the NUMA-Q system. It is not intended for running user applications or storing user data.

2.2 DYNIX/ptx

DYNIX/ptx is IBM’s implementation of a traditional UNIX for commercial SMP and NUMA-Q systems, based on UNIX System V Release 4. It contains extensions and alterations to the initial software base in order to support large numbers of processors and users. DYNIX/ptx dynamically distributes responsibility for executing processes, handling interrupts, and performing housekeeping duties among all processors in the system. Some differences from System V are present, but these are, in most cases, transparent to the user. Specific information about standards conformance and differences from System V UNIX is detailed in the NUMA-Q documentation, which can be accessed through the Web at:

http://webdocs.sequent.com

DYNIX/ptx is preinstalled on all NUMA-Q EFS systems before they are shipped to customers. It can be reinstalled from CD-ROMs included with the system.

2.3 NUMA-Q Enabled For S/390 (NUMA-Q EFS)

NUMA-Q Enabled For S/390 (EFS), the subject of this redbook, is based on a NUMA-Q system using FLEX-ES. FLEX-ES is software from Fundamental Software, Inc. (FSI) of Fremont, California. Together they provide an S/390-compatible mainframe computer environment which implements the ESA/390 mainframe architecture in software. FLEX-ES runs as an application on the DYNIX/ptx operating system and is enabled for a specific, licensed number of processors.

2.4 Operation

NUMA-Q system operation is managed through a console PC that is supplied as part of the system. Startup, booting, system changes, and diagnostics are run from this console. It is attached to the system through a 10base2 Ethernet connection, which is a private network used only for communication between the console and all quads installed in the system. No user applications and/or data are intended to be stored or processed on the console.

The console PC operates under Windows NT and the VCS application, which is unique to NUMA-Q operation. VCS (Virtual Console Software) provides a number of character-based windows and several GUI controls. The largest window (on the console PC) is, in effect, a telnet session. The operator can log into the system using the same userid and password that could be used from any telnet session. DYNIX/ptx commands, such as ps, vi, and so forth, can be used through this window. This window can be scrolled backwards for about 100 lines. While normal DYNIX/ptx administration can be done through any telnet session, we found the console window particularly useful because of the scrolling function.

A separate “subwindow” in the primary console window provides “hardware” startup messages (roughly similar to BIOS messages seen on a Multiprise 3000 system or a PC during startup). A completely different window allows communication with lower-level NUMA-Q control functions. It can be used, for example, to enable modem access to the console for remote support.

2.5 Service

Hardware service for the IBM NUMA-Q system is standard IBM maintenance - business as usual. The console PC includes an internal modem through which IBM service and support can access the NUMA-Q for diagnostics, delivery of code fixes, and so forth. A customer-supplied phone line is required for this feature.

2.6 Upgradability

NUMA-Q systems are upgraded by adding quads to the existing system. If there is available space within the cabinet, the added quad is simply installed there. DYNIX/ptx is capable of recognizing that the new quad has been installed. If there is not enough space within the primary cabinet, an additional cabinet is installed.

Quads containing processors of one speed (e.g., 550 MHz) can co-exist in the same system with quads containing processors of a different speed (e.g., 700 MHz) - which means that as technology advances, and faster Pentium processors are made available, the original NUMA-Q system is not rendered obsolete. The customer may simply purchase a quad with the latest-speed processors and install it within the existing system. Adding an additional quad also provides more memory for the complex (1 to 8 GB) and seven additional PCI slots for I/O attachment.

2.7 Advantages

NUMA-Q 2000 systems were originally designed for very high-end, business-critical UNIX applications. They have been proven in customer installations for a number of years. Important characteristics of the NUMA-Q include:
• Up to 64 Intel Pentium III (Xeon) processors, installed in 1 to 16 quad blocks. The current implementation uses 700 MHz processors.
• Up to 64 GB total system memory, with 1 to 8 GB in each quad.
• Over 200 TB of disk in a multi-path SAN configuration. This is through Enterprise Storage Server units, with 420 GB to 11.2 TB per unit. Each unit has 8 or 16 GB of cache and uses RAID 5 with 9, 18, or 36 GB disk drives. The units are connected through fibre channel to the NUMA-Q, with up to 110 MB/s bandwidth. Multiple fibre channel connections reduce or eliminate single points of failure for this DASD subsystem.
• Fibre-connected ESS units provide support for Peer-to-Peer Remote Copy (PPRC), controlled at the DYNIX/ptx level, and disk mirroring over long distances.

• Extensive RAS design includes redundant and hot-swap power and cooling modules, hot-swap disks, fault-tolerant I/O, and partitioning capabilities at the quad level.

2.8 System used for this project

We used the system shown in Figure 3 for the work described in this redbook. It is an older NUMA-Q system with more sophisticated disk functions than might be needed for typical NUMA-Q EFS usage.

[Figure 3 (block diagram): two quads, each with four Pentium III Xeon processors on a 720 MB/sec system bus, 4 GB of memory, a PCI bridge, and an IQ-Link; the quads are joined by the high-speed interlink (1 GB/s). Each quad has a PCI bus (264 MB/s) carrying PCI I/O adapters, a PCI-EISA bridge, and an MDC connection. Two fibre channel switches and two fibre/SCSI bridges attach the disks; a boot bay (36 GB disk and CD-RW, SCSI-attached) and the MDC console PC complete the configuration.]

Figure 3. Overview of the system used for this book. It is a two-quad NUMA-Q with 4 GB memory per quad.

The hardware configuration we used during the writing of this book included the following:
• Two-quad NUMA-Q (550 MHz Pentiums) with 4 GB memory per quad and IQ-Links. (Current NUMA-Q systems have faster processors.)
• Two peripheral bays for DASD, containing twenty-three 9 GB volumes. (Newer systems use larger disks, packaged in slightly different ways.)

• One boot bay containing a 36 GB hard disk, CD-RW, and a QIC tape drive.
• MDC console workstation running Windows NT.
• PCI cards installed:
  1. Four host-connect adapters (fibre connects)
  2. One S/390 channel card from FSI
  3. One integrated communications adapter (ICA) from FSI
  4. One 4-port Ethernet adapter
  5. One token-ring adapter
  6. One differential SCSI card
• Two fibre-channel switches. (These would probably not be needed on a typical NUMA-Q EFS system.)
• Two fibre/SCSI bridges. (These are necessary with this system because the older DASD bays are SCSI-connect only. Newer, more current DASD solutions are direct fibre-attached and do not have this requirement.)

The operating systems installed were:
• DYNIX/ptx 4.5.1
• FLEX-ES 5.10 (beta version)
• VM/ESA v2.4 and z/VM v3.1 (ESP level)
• OS/390 v2.9
• VSE/ESA v2.4

Not shown in the drawing are the LAN connections. We installed one token-ring adapter and one four-port Ethernet adapter in the first quad. These were used as follows:
• Token ring: this LAN interface was assigned to DYNIX/ptx and connected to the IBM internal IP network at address 9.12.0.8. We used this interface to telnet to DYNIX/ptx and for TN3270 connections to the Terminal Solicitor (as described in following chapters). By going through the Terminal Solicitor, we could connect TN3270 sessions to OS/390, VM/ESA, and VSE/ESA.
• Ethernet port 0: this was assigned to DYNIX/ptx and connected to a local private network. It was used for the same purposes as the token-ring interface.
• Ethernet port 1: this was assigned to VM/ESA and used for various IP functions through VM/ESA.
• Ethernet port 2: this was assigned to OS/390 and used for TN3270, otelnet, and ftp functions. (otelnet is the OS/390 telnet interface to UNIX System Services.)
• Ethernet port 3: this was assigned to VSE/ESA and used with TCP/IP.

With the system shown, we were able to configure and load OS/390, VM/ESA, z/VM, and VSE/ESA (three different copies) and run them all concurrently. The flexibility of a NUMA-Q with the NUMA-Q EFS solution makes it possible to manage many different operating systems at once.

Chapter 3. FLEX-ES

The creator of FLEX-ES, Fundamental Software, Incorporated, of Fremont, CA, is informally known as FSI; this abbreviation is used throughout this redbook.

Versions of FLEX-ES are available to run on other platforms. However, the S/390 functions emulated are not necessarily the same for all FLEX-ES platforms. This redbook refers only to the version that runs on DYNIX/ptx, which is known as NUMA-Q Enabled for S/390, or NUMA-Q EFS.

3.1 General concepts

FLEX-ES contains several components, including:
• The S/390 instruction emulator (which can be viewed as the main component).
• The Resource Manager, which manages the interfaces between the emulated S/390 processors, emulated I/O devices, physical PCA channels, and system memory.
• Various I/O device emulators (collectively managed by the Resource Manager).
• A main console component (not to be confused with an S/390 console) for managing FLEX-ES; it is accessed through network-attached Command Line Interface (CLI) consoles.
• The Terminal Solicitor program, which emulates local 3270 non-SNA terminals for operating system IPL and application access.
• Several utility programs (for example, to format emulated CKD disks, or to “compile” resource definitions for FLEX-ES operation).
• Two optional hardware components, available from FSI for use with FLEX-ES:
  a. A Parallel Channel Adapter (that fits on a PCI bus)
  b. An integrated communications adapter for SDLC and BSC lines (that fits on a PCI bus)

Taken together, the FLEX-ES components provide a layer of software that sits between S/390 programs (including operating systems) and an underlying DYNIX/ptx system. A conceptual diagram is shown in Figure 4 on page 16. The S/390 operating system and its applications are unaware that the environment provided by FLEX-ES is emulated. They operate just as if running on a “real” S/390 processor.

The “real” instruction set of the NUMA-Q machine (using Intel Pentium processors) is radically different from the S/390 instruction set. Both are Complex Instruction Set Computer (CISC) processors (as opposed to Reduced Instruction Set Computer (RISC) processors), and both use, and are addressed via, 8-bit bytes. This is about the extent of the similarity. The internal register architecture, elementary data formats, operation codes, and I/O operations are completely different.

The FLEX-ES program examines each S/390 instruction and then uses Pentium instructions to emulate the actions of the S/390 instruction. In some cases, this may require hundreds of Pentium instructions, while in other cases only a few Pentium instructions are needed for each emulated S/390 instruction.

FLEX-ES is highly optimized for this instruction emulation; nevertheless, the effective processing rate of S/390 instructions will be a small fraction of the processing rate of Pentium instructions. However, there is no simple formula for converting effective Pentium Mhz to S/390 MIPS.

[Figure 4 (layered diagram): VSE/ESA applications (VTAM, CICS, batch) run on an emulated S/390 VSE/ESA; FLEX-ES provides the emulated I/O and the Resource Manager, running as normal DYNIX/ptx processes alongside other applications; DYNIX/ptx runs on the NUMA-Q hardware, whose PCI adapters connect the I/O devices.]

Figure 4. Conceptual view of FLEX-ES system

3.2 Processor emulation

There is a one-to-one relationship between an Intel processor and an emulated S/390 processor. If one Intel processor is used, the S/390 operating system will see one S/390 processor; if two Intel processors are used, the S/390 operating system will see two S/390 processors, and so on.1

You can dedicate Pentium processors to FLEX-ES, although at least one Pentium processor must be non-dedicated in order to perform emulated I/O functions and other DYNIX/ptx system functions. For example, in a single quad NUMA-Q system, one Intel processor is required for DYNIX/ptx and the FLEX-ES Resource Manager. Between one and three of the remaining Intel processors could be used for S/390 system images. Only one instance of the FLEX-ES resource manager is needed on a NUMA-Q server, regardless of the number of S/390 CPU complexes that may be running.

The term CPU complex is used to denote an instance of an emulated S/390. There may be several CPU complexes running on a NUMA-Q machine, with each CPU complex having its own S/390 operating system and applications. Emulated S/390 DASD can be shared (reserve/release). Other emulated and real I/O device interfaces cannot be shared, but can be reassigned to other CPU complexes. (A parallel channel control unit with multiple channel interfaces could be shared by multiple parallel channels connected to several S/390 instances.)

1 FLEX-ES system definitions can cause a particular S/390 image to see fewer processors than the number of Intel processors enabled for S/390 emulation.

The NUMA-Q EFS is multi-image-capable and provides an environment similar to logical partitioning (LPAR) on traditional S/390 machines.2 Each system image can use one or more processors. Processors can be shared, or they can be dedicated to system images. The sharing and dedication of processors is defined by the system administrator or system programmer.

In general, you should not define more S/390 CPUs in a CPU complex than there are Pentium processors defined for FLEX-ES emulation. For example, if there is a single Pentium processor defined for FLEX-ES emulation, it does not make sense to define more than one S/390 CPU in this emulated CPU complex.
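As an illustration of the point, a two-CPU complex might be defined as sketched below, in the resource-language style shown later in 3.4, “Configuration and control”. This fragment is hypothetical: the example in that section defines only cpu(0), and the repetition of the cpu() statement for a second CPU is our assumption - the exact syntax is in the FSIMM310: Resource Language Reference.

```
# Hypothetical fragment only - the multiple-cpu() syntax is our assumption
system vmtwo:
   memsize(262144)     # K of core memory
   instset(esa)
   cpu(0)              # first emulated S/390 CPU
   cpu(1)              # second emulated S/390 CPU - sensible only if a
                       # second Pentium is enabled for FLEX-ES emulation
   ...
end vmtwo
```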

Figure 5 is highly conceptual but can be used to make several points:
• On each DYNIX/ptx system, a single FLEX-ES Resource Manager runs, regardless of the number of S/390 CPU complexes in use.
• On each DYNIX/ptx system, multiple “instances” of FLEX-ES can run, each providing an independent emulated mainframe. Each of these instances contains one or more emulated S/390 processors. These “instances” are sometimes known as FLEX-ES S/390 CPU complexes.
• Multiple S/390 CPU complexes can be useful, in the same way LPARs are used. For example, the CPU complex on the left in the figure might be a VM/ESA system and the complex on the right might be a VSE/ESA system.
• In principle, the number of S/390 CPUs defined in a complex, the number of Pentium processors used by FLEX-ES for S/390 CPU emulation, and the number of Pentium processors in the NUMA-Q system are three separate numbers.

[Figure 5 (diagram): two FLEX-ES CPU complexes - a VM/ESA complex with 2 S/390 CPUs defined and a VSE/ESA complex with 1 S/390 CPU defined - plus a single FLEX-ES Resource Manager, all on DYNIX/ptx running on a NUMA-Q with 8 Pentium III Xeon processors (2 "quads").]

Figure 5. Highly conceptual view with multiple CPU complexes

Each emulated S/390 processor can have one of three possible instruction sets:

2 See “Multiple S/390 CPU complexes (FLEX-ES instances)” on page 43 for important considerations.

• ESA - Uses the S/390 ESA instruction set and I/O architecture. With this choice, there is an option to make the CPU complex respond as if it were an LPAR.
• 370 - Uses a more limited instruction set, is limited to two emulated S/390 CPUs, and uses the 370 (“old”) I/O architecture.
• VSE - Uses the ECPS:VSE functions in addition to normal 370 functions.

However, there is no current support from IBM for the 370 and VSE instruction sets. In our ITSO projects we used the ESA mode. More information is available in the FSIMM020: Concepts document from FSI.

3.3 I/O devices (emulated and real)

FLEX-ES emulates a large number of S/370 and S/390 I/O devices. Most of these are summarized in the following table. A more extensive list is available in the FSI documentation.

Category  Type  Models                               CU                Characteristics
--------  ----  -----------------------------------  ----------------  ---------------------------------
CKD       3390  3390-1, 3390-2, 3390-3, 3390-9       3990
          3380  3380-A, 3380-D, 3380-J,              3990, 3880
                3380-E, 3380-K
          9345  9345-1, 9345-2                       9341, 9343
          3375  3375                                 3880
          3350  3350                                 3880
          3340  3340-35, 3340-70                     3880
          3330  3330, 3330-11                        3880
          2314  2314                                 2314
          2305  2305-2                               2835
FBA       9336  9336-10, 9336-20                     6310
          9335  9335                                 6310, 6010
          9332  9332-200, 9332-400, 9332-600         6310, 6010
          3370  3370, 3370-1, 3370-2                 3880 FBA
          3310  3310                                 3310
TAPE      3420  3420, 3422, 3430                     3803, 3422, 3430  FakeTape, SCSI 9-trk, DLT, DAT
          3480  3480                                 3480              FakeTape, SCSI 3480, DLT, DAT
          3490  3490, 3490E                          3490              FakeTape, SCSI 3480, DLT, DAT
3270      many  3178, 3179, 3180, 3278, 3279, 3290   3174              Uses TN3270 sessions
          many  3178, 3179, 3280, 3277, 3278, 3279,  3274              Uses TN3270 sessions
                3286-1, 3286-2, 3290
LAN       3172  3172                                 3172              Ethernet or Token Ring
Console   3215  3215                                 3215              Several versions (some X-based)
Printers  1403  1403, 1403-N1                        2821              Writes to DYNIX files or printers
          3203  3203-1, 3203-2, 3203-4, 3203-5       3203              Writes to DYNIX files or printers
          3211  3211                                 3211              Writes to DYNIX files or printers
          4245  4245, 4245-1, 4245-12, 4245-20       4245              Writes to DYNIX files or printers
          4248  4248                                 4248              Writes to DYNIX files or printers
Cards     2540  2540R, 2540P, 2501                   2821, 2501
CTC       CTC   CTC                                  CTC
ICA       ----  rbsca, rsdlc                         9221, 9373        Uses FSI adapter
CETI      6034  6034                                                   SNA support over Token-Ring only

The emulated DASD functions are especially attractive and provide better performance than “real” CKD disks, due to the very high speeds of the NUMA-Q disks and I/O buses.

In addition to these emulated devices, a number of “real” control units (and their devices) may be attached through the FSI parallel channel adapter. These include 3705, 3725, 3x74, 5088, 3803, 3480, 3800, and 4245 devices. See the FSI documentation for a more complete list.

The FSI documentation contains considerable detail about the emulation functions and should be consulted for more details. We mention a few points here that may be of special interest.

FakeTape[TM] emulates a tape drive by using a DYNIX file. The internal format of the file is unique to FakeTape. However, the FakeTape program will recognize tapes written in AWSTAPE format. (AWSTAPE is a P/390-related program that emulates 3420 tape drives by using an OS/2 file.) This compatibility provides a method of exchanging files between P/390-based systems and FLEX-ES systems. FakeTape also provides read-only tape emulation for the OMA/2 format.

3174 (and 3274) emulation uses TCP/IP TN3270 (or TN3270E) sessions to emulate local, channel-attached, non-SNA 3270 terminals. These are suitable for Operating System consoles, VTAM terminals, and other users of local 3270 terminals. TN3270 is used to encapsulate 3270 data streams over TCP/IP. Clients

Chapter 3. FLEX-ES 19 can be anywhere in a connected TCP/IP network. The S/390 applications treat each client session as a local 3270.

3.4 Configuration and control

The FSI document FSIMM310: Resource Language Reference describes the control files needed to manage FLEX-ES functions. This and other FSI documentation should be consulted for details. The following example may serve to provide the general flavor of the configuration files used by FLEX-ES.

This example defines a very basic S/390 system that could be used with a typical VSE/ESA system to support a few CICS users. The users would connect via TN3270 sessions that appear as local 3270s to VSE/ESA.

# A small VSE/ESA AD CD-ROM system
system vsetcp:
   memsize(225280)    # K of core memory (220M memory minus 32M for pca card)
   cachesize(1024)
   instset(esa)
   tracesize(256)
   cpu(0)
   channel(1) local
   cu devad(0x00C,3) path(1) resource(vset2821)   # card rdr,pun and prt
   cu devad(0x01F,1) path(1) resource(vsetcons)   # VSE system console
   cu devad(0x150,2) path(1) resource(vsetdasd)   # ckd dasd for vse
   cu devad(0x200,7) path(1) resource(vset3274)   # nSNA terminals
end vsetcp
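The memsize() value can be sanity-checked against its own comment: 252 MB of total central storage, minus the 32 MB reserved for the PCA card, leaves 220 MB of core memory, which memsize() expresses in KB.

```shell
# Verify memsize(225280): (252 MB total - 32 MB for the PCA card) in KB.
total_mb=252
pca_mb=32
echo $(( (total_mb - pca_mb) * 1024 ))    # prints 225280
```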

resources vsetres:
   vsetcp:
      memory 252               # 252 M total central storage
   end vsetcp

   # Emulated 2821 (unit record) control unit
   vset2821: cu 2821
      interface local(1)
      device(00) 2540R OFFLINE   #reader
      device(01) 2540P OFFLINE   #punch
      device(02) 1403 OFFLINE    #printer
   end vset2821

   vsetcons: cu 3274
      interface local(1)         # system console at 01F
      device(00) 3278 OFFLINE
   end vsetcons

   # 3390-3 DASD
   vsetdasd: cu 3990
      interface local(1)
      device(00) 3390-2 /dev/vx/rdsk/s390dg/4333902s1   # dosres
      device(01) 3390-2 /dev/vx/rdsk/s390dg/4433902s1   # syswk1
   end vsetdasd

   vset3274: cu 3274
      interface local(1)
      device(00) 3278 OFFLINE    #vset200
      device(01) 3278 OFFLINE    #vset201
      device(02) 3278 OFFLINE    #vset202
      device(03) 3278 OFFLINE    #vset203
      device(04) 3278 OFFLINE    #vset204
      device(05) 3278 OFFLINE    #vset205
      device(06) 3278 OFFLINE    #vset206
   end vset3274
end vsetres

The two sections of the configuration file shown here, the system section and the resources section, work together.

The system section (vsetcp) defines the system name and other system parameters such as:
• Number of CPUs and whether they are dedicated
• Number of channels and their usage
• Control units for all your system devices
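The devad() operands are not explained here; our reading - an inference from the example, not an FSI statement - is devad(base,count): a base device address plus the number of consecutive addresses behind that control unit. Under that assumption, devad(0x200,7) covers addresses 200 through 206, which matches the seven 3278 devices (vset200 through vset206) defined in the vset3274 resource.

```shell
# Assuming devad(base,count) means 'count' consecutive device addresses
# starting at 'base', compute the range covered by devad(0x200,7).
base=0x200
count=7
printf '%03X-%03X\n' "$(( base ))" "$(( base + count - 1 ))"    # prints 200-206
```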

The resources section (vsetres) defines a set of resources for a single system, such as:
• Interfaces for all the control units defined in the system section
• Devices for all the control units defined in the system section

The system and resource definitions, in the source form shown here, cannot be used to start an instance of FLEX-ES. They must first be compiled. The FLEX-ES command cfcomp reads definition files (as shown here) and writes a compiled version of the file. There is a well-defined naming convention used to convert the resource names used in the source file to the names used for the compiled files. The cfcomp program performs syntax checking and verifies the named relationships among the various components of the resource definitions. Therefore, whenever we say we will use a resource definition file, there is always an implication that we compile it first.
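As an illustration only, the compile step might look something like the transcript below. The invocation shown is hypothetical (the exact cfcomp arguments are in FSIMM310), and the file suffixes are our inference from the names used later in this chapter: .rescf for compiled resources (used with resadm -x) and .syscf for compiled system definitions (used with the flexes command).

```
/usr/flexes/rundir $ ../bin/cfcomp vsetres       (hypothetical invocation)
/usr/flexes/rundir $ ls
vsetres            (source definitions, as shown above)
vsetres.rescf      (compiled resources - assumed name)
vsetcp.syscf       (compiled system definition - assumed name)
```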

3.5 Operation

There are several FLEX-ES stand-alone utility programs which run under DYNIX/ptx. These utilities provide services such as:
• Low-level formatting of emulated DASD
• Backup of emulated CKD DASD
• Compilation of FLEX-ES resource and system definitions

As distributed, the FLEX-ES command processors and utilities are in the /usr/flexes/bin directory. By convention, resource definitions (both in source form and compiled form) are stored in /usr/flexes/rundir. These two directory names appear in most of the examples in this document.

The resource definition file is used by the resadm command to activate the resources. (The resadm command is the interface to the FLEX-ES resource manager.) The resource manager can display active resources, terminate active resources, and refresh new resources, using the resadm command:

# resadm -
Usage: resadm <-h nodename> -r     list the active resources
       resadm <-h nodename> -n
       resadm -s
       resadm -k                   kill the resource manager
       resadm -x                   refresh the indicated resources
       resadm -t                   terminate the named resource
       resadm -T                   terminate all active resources

You must be in superuser mode to use any of the change functions of the resource manager (but not for the query functions). The resadm command is typically not in the DYNIX/ptx PATH environment; it is normally used by moving to the appropriate directory and using the ./ notation to access an executable in the current directory.

The following examples illustrate how a system owner could switch from one defined S/390 to another. One definition might be for a VSE/ESA using CKD disks, and another definition might be for a VSE/ESA using only FBA disks. The process is shown here:

$ su
Password:
# cd /usr/flexes/rundir
# cd ../bin
# ./resadm -r

Assuming VSE/ESA system and resource definitions similar to those shown in “Configuration and control” on page 20, but with a few more devices included, the resadm -r command returns the following:

Resource: CPU           Flags: READY  Type: CPU     Port: 4745  Pid: 10193
Resource: MEMORY        Flags: READY  Type: MEM     Port: 4747  Pid: 10195
Resource: vset2821      Flags: READY  Type: CU      Port: 4749  Pid: 10196
Resource: vsetdasd      Flags: READY  Type: CU      Port: 4751  Pid: 10197
Resource: vsetcons      Flags: READY  Type: CU      Port: 4753  Pid: 10198
Resource: vset3274      Flags: READY  Type: CU      Port: 4755  Pid: 10199
Resource: vset3172en    Flags: READY  Type: CU      Port: 4757  Pid: 10200
Resource: vsetctc       Flags: READY  Type: CU      Port: 4759  Pid: 10201
Resource: vset3480ft    Flags: READY  Type: CU      Port: 4761  Pid: 10202
Resource: vset3490scsi  Flags: READY  Type: CU      Port: 4763  Pid: 10203
Resource: NETCU         Flags: READY  Type: NETCU   Port: 4765  Pid: 10204
Resource: TS3270        Flags: READY  Type: TS3270  Port: 4767  Pid: 10205

We can use the following command to terminate all the active resources:

# ./resadm -T

We can then verify that all resources are gone by using the command:

# ./resadm -r
No resources defined to resource manager

The following command will refresh (start, in this case) the resources for a different system. (The definition statements for this second VSE/ESA system are not listed here.)

# ./resadm -x /usr/flexes/rundir/vsecres.rescf

Verify that the new resources were started successfully:

# ./resadm -r
Resource: CPU           Flags: READY  Type: CPU     Port: 4938  Pid: 10468
Resource: MEMORY        Flags: READY  Type: MEM     Port: 4940  Pid: 10469
Resource: vsec2821      Flags: READY  Type: CU      Port: 4975  Pid: 10488
Resource: vsecdasd      Flags: READY  Type: CU      Port: 4977  Pid: 10489
Resource: vseccons      Flags: READY  Type: CU      Port: 4979  Pid: 10490
Resource: vsec3274      Flags: READY  Type: CU      Port: 4981  Pid: 10491
Resource: vsec3172tr    Flags: READY  Type: CU      Port: 4983  Pid: 10492
Resource: vsecctc       Flags: READY  Type: CU      Port: 4985  Pid: 10493
Resource: vsec3480ft    Flags: READY  Type: CU      Port: 4987  Pid: 10494
Resource: vsec3490scsi  Flags: READY  Type: CU      Port: 4989  Pid: 10495
Resource: NETCU         Flags: READY  Type: NETCU   Port: 5007  Pid: 10504
Resource: TS3270        Flags: READY  Type: TS3270  Port: 5009  Pid: 10505

Leave the superuser mode:

# exit

Once all resources have been started, you can use a shell script to IPL your system. The shell script creates a FLEX-ES CPU complex (or “instance”) through the invocation of the flexes command. The shell script also identifies the system resources and prepares devices to be mounted that were defined as OFFLINE in the resource configuration file. It can also be used to issue the IPL statement to IPL your S/390 system. A sample shell script is shown here:

PATH=/usr/flexes/bin:$PATH; export PATH
flexes vseckd.syscf
echo 'mount 01f vseccons' | flexescli localhost vseckd
echo 'mount 200 vsec200' | flexescli localhost vseckd
echo 'mount 201 vsec201' | flexescli localhost vseckd
flexescli localhost vseckd

The format and purpose of the mount commands in the shell script may not be obvious. Note that the mount command is issued to the flexescli command processor. It is not a UNIX mount command. Assuming a local 3270 (OFFLINE) is defined at address 200, the command mount 200 vsec200 causes FLEX-ES to activate (make ONLINE) an emulated 3270 terminal at this address and to place the name (vsec200) in the Terminal Solicitor list. In the shell script, the mount command is piped to the flexescli command. It is the flexescli command that actually alters a running instance of FLEX-ES. The localhost parameter means this system (as opposed to a remote system connected via TCP/IP), and the vseckd (in this example) is the name of an instance of FLEX-ES that is running. (The TCP/IP loopback address 127.0.0.1 is equated to localhost in the /etc/hosts file.)

In practice, a shell script like this could have many more mount commands for 3270 devices. Once the VSE/ESA system in this example is running, a user can connect to the Terminal Solicitor, select one of the 3270 device names, and connect to it.
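Rather than writing each mount by hand, the repetitive commands could be generated. This loop is our own sketch, not an FSI example; it produces the same mount text shown above for the seven emulated 3270s at addresses 200 through 206.

```shell
# Generate 'mount' commands for the emulated 3270s at addresses 200-206.
# On a live system the output would be piped to flexescli, for example:
#   ... | flexescli localhost vseckd
for addr in 200 201 202 203 204 205 206; do
  printf 'mount %s vsec%s\n' "$addr" "$addr"
done
```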

Notice that the shell script ends with a command to execute flexescli. That is, flexescli is left running when the shell script completes. This allows the operator (or whoever is starting VSE/ESA in this case) to immediately enter FLEX-ES commands after the shell script completes. The operator would probably want to enter an IPL command at this point.

Assuming our shell script is in /usr/flexes/rundir/shvsec, we can execute it and then IPL the system as follows:

$ pwd
/usr/flexes/rundir
$ sh shvsec
FLEX-ES: Copyright (C) Fundamental Software, Inc., 1991-2000
This FLEX-ES module is licensed to
International Business Machines Corporation
Poughkeepsie, New York

flexes> ipl 150
flexes>

3.5.1 Terminal Solicitor

We have already mentioned the Terminal Solicitor several times. It is a TN3270 server function, provided with FLEX-ES, that listens on TCP/IP port 24 on the DYNIX/ptx system. That is, a user wishing to connect a TN3270 session to a system running under FLEX-ES would start a TN3270 session on the client machine and connect to port 24 on DYNIX/ptx. In our ITSO work, we had both token-ring and Ethernet connections to DYNIX, as described in “System used for this project” on page 13, and were able to use either one for connections to the Terminal Solicitor.

The Terminal Solicitor typically produces a display on the client TN3270 session similar to this:

Welcome to the FLEX-ES Terminal Solicitor (node: q390dyn)

Please select (X) the desired service and press enter (PA1 to exit; CLEAR to refresh)

_ osterm4     _ osterm3     _ osterm7     _ osterm9
_ osterm900   _ osterm901   _ osterm902   _ osterm903
_ osterm904   _ osterm905   _ osterm906   _ osterm907
_ vsec200     _ vsec201

The Terminal Solicitor lists all the local emulated 3270 devices that are ONLINE and are not being used. If several instances of FLEX-ES are running, then 3270 devices from all the instances are shown. The names shown are taken from the flexescli mount command. This screen example contains vsec200 and vsec201 from the shell script described in “Operation” on page 21.

When a client selects and uses a 3270 device session, that device disappears from the Terminal Solicitor list. When the TN3270 session with the device is dropped (by the client), the device will again appear in the Terminal Solicitor list for use by someone else.

Use of the Terminal Solicitor is not required. The alternative is to specify the IP address of the TN3270 client that is associated with a 3270 device. This can be done in the resource definitions or in the mount commands.

3.5.2 CLI facilities

The Command Line Interface (CLI) is essentially the interface to the hardware console for the CPU complex. You can use the CLI console to communicate with the CPU complex using the flexescli command.

One or more CLI consoles may be started for any CPU complex (FLEX-ES instance). One or more of these CLI consoles may be attached at any given time to a single CPU complex main console.

The CLI console reads CLI commands from standard input and executes them. A null parameter list causes the command line program to initiate an interactive session where you can directly enter CPU complex control commands. The flexes> prompt is a clear indication that the flexescli program is active and waiting for a command. The shvsec shell script we just described leaves flexescli running in interactive mode when the shell script exits.

Examples of use are:

$ pwd
/usr/flexes/rundir
$ sh shvsec                          (run the shell script)
FLEX-ES: Copyright (C) Fundamental Software, Inc., 1991-2000
This FLEX-ES module is licensed to
International Business Machines Corporation Poughkeepsie, New York
flexes> ipl 150                      (issue an IPL command)
flexes>

To mount a terminal and bypass the Terminal Solicitor:

flexes> d mount 205                  (display MOUNT status)
OFFLINE
flexes> mount 205 @9.12.2.119        (issue new mount command)
flexes> d mount 205
@9.12.2.119

To change 205 back to using the Terminal Solicitor:

flexes> d mount 205
@9.12.2.103
flexes> mount 205 vsef205            (name for Terminal Solicitor)
flexes> d mount 205
vsef205

When the S/390 system has been manually stopped and shut down, you can shut down the FLEX-ES processor complex from the CLI:

flexes> shutdown                     (stop this instance)
/usr/flexes/rundir $

3.6 FSI Integrated Communications Adapter (ICA)

The FSI Integrated Communications Adapter (ICA) is a six-port RS-232 PCI-bus-based card. It provides six Binary Synchronous Communication (BSC, BiSync) lines or Synchronous Data Link Control (SDLC) lines in any combination. Each line is defined by a separate device in the FLEX-ES control unit definition.

The card supports local and network interfaces. A local interface is used by a CPU complex on the same server where the control unit is defined. A network interface is used by a CPU complex on a different server than the defined control unit.

The SDLC lines are IBM ICA type, not NCP. The devices are attached to emulated 2701d, 9221ICAd, or 9373ICAd control units. The 9221ICAd and 9373ICAd control units return sense ID data, while 2701d control units do not. These devices only apply to the VM and VSE environments. OS/390 has no ICA support.

The first device must be device 0, and subsequent ones should be numbered sequentially after it. Do not omit device 0 or skip a device number. A sample definition is shown here (the single quotes around the option parameters are required):

# FSI ICA
vset2701d: cu 2701d
  options '/dev/fsiica/ica0p0:/usr/flexes/bin/pica960.img'
  interface local(1)                    # SDLC dialup line (default), nrz (default)
  device(00) rsdlc /dev/fsiica/ica0p0 devopt 'leased,nrzi'
  device(01) rsdlc /dev/fsiica/ica0p1 devopt 'leased,nrzi'
  device(02) rsdlc /dev/fsiica/ica0p2 devopt 'leased,nrzi'
  device(03) rsdlc /dev/fsiica/ica0p3 devopt 'leased,nrzi'
  device(04) rsdlc /dev/fsiica/ica0p4   # device name filled in by mount command
  device(05) rsdlc /dev/fsiica/ica0p5   # device name filled in by mount command
end vset2701d

Each line can operate at up to 56 Kbps, and all six can run simultaneously at that speed.

3.7 FSI Parallel Channel Adapter (PCA)

The FSI Parallel Channel Adapter (PCA) is available in two formats:
1. One channel attachment per card, with the ability to connect one set of bus and tag cables
2. Three channel attachments per card, with the ability to connect three sets of bus and tag cables

A FLEX-ES configuration typically uses fewer physical channels than a traditional S/390 mainframe because many devices are better provided as emulated devices on emulated channels. In addition, the remaining physical devices can be combined on fewer PCA channels: devices that were on different channels on a traditional mainframe can be connected to a single FSI PCA channel, with no changes to the I/O configuration, as long as their unit addresses are different.
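The combination rule can be checked mechanically when planning. The following is our own illustrative sketch (not a FLEX-ES tool); the unit addresses are invented examples.

```python
# Sketch of the constraint above: devices moved from several mainframe
# channels onto one FSI PCA channel must all have distinct unit addresses.
from collections import Counter

def clashes(unit_addresses):
    """Return the unit addresses that appear more than once."""
    return sorted(ua for ua, n in Counter(unit_addresses).items() if n > 1)

print(clashes([0x70, 0x71, 0x80, 0x81]))   # [] -> these can share one PCA channel
print(clashes([0x70, 0x71, 0x70]))         # 0x70 clashes -> keep on separate channels
```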

In its current implementation, using a PCA adapter requires special care in the FLEX-ES system definitions. The total defined S/390 memory, to be used by all instances of FLEX-ES that use PCAs, cannot exceed approximately 512 MB. Care is needed in matching the name of the system definition with the name of the memory resource definition. An example is:

system vm:
  memsize(98304)     # K of core memory (memory minus 32M for PCA card)
  essize(512)        # M of expanded storage
  cachesize(1024)
  instset(esa)
  tracesize(256)

  cpu(0)             # 2 undedicated cpus
  cpu(1)

  channel(0) localbyte
  channel(1) local
  channel(2) blockmux vmchpbt0    # pca card to tape drives

  cu devad(0x170,2) path(2) unitadd(0x70) streaming(45)   # real tape drives

end vm

resources vmesa:

  vm: memory 128     # (VM=128, MVS=256, VSEC=64, VSEF=64)
  end vm

  vmchpbt0: blockmux /dev/chpbt/ch0
  end vmchpbt0

end vmesa

See “Parallel Channel Adapter (PCA)” on page 73 for a usage example.

3.7.1 Need for Contiguous Memory

Whenever an S/390 instance includes a Parallel Channel Adapter, the memory for that instance must be allocated from a preallocated, contiguous area of memory. This requires that the contiguous memory be allocated at DYNIX boot time, and that there be a memory clause in the resource definition with the same name as the system name of this instance. For more information on memory allocations within the S/390 environment, refer to "Memory use" on page 49.

Chapter 4. Typical customization tasks and considerations

Most NUMA-Q Enabled for S/390 systems will require a standard set of configuration and setup tasks. This chapter presents a series of these common tasks, with information about how they can be accomplished. The tasks described below are the ones we completed as part of the residency; other sets of tasks may be required, depending on the target hardware and software configuration. The common configuration tasks are:
• Planning the configuration
• Configuring the disk subsystem for the S/390 environment
• Defining initial FLEX-ES configuration files
• Loading initial S/390 data onto emulated S/390 disks
• Connecting Local Area Networks to the NUMA-Q and S/390 system
• Using SCSI-connected tape drives
• Defining and exploiting multiple S/390 system images in one NUMA-Q EFS
• Defining and using emulated channel-to-channel connections between system images
• Optimizing memory allocation and utilization

4.1 Planning the configuration

As is always the case with complex systems, proper and complete planning is critical to a successful configuration and setup process. This section will point out some of the key planning factors, but it is not intended to be a complete planning guide or a substitute for a full Systems Assurance review. This section will only address the use of the NUMA-Q EFS within the S/390 environment; it will not consider any configuration and setup process that may be required for running applications outside of the S/390 systems.

4.1.1 I/O device configuration

The I/O device configuration is usually the most complex part of the planning process. For the purposes of our planning and implementation sections, we will consider the following components:
• Emulated disk volumes
• Network devices (LAN and WAN)
• 3270 terminal access
• SCSI-attached tape drives
• Parallel (bus and tag) channel-connected devices
• Connecting multiple S/390 CPU complexes (FLEX-ES instances) with emulated channel-to-channel devices

The first step in the planning process should be to determine and accurately document the desired I/O configuration. In most cases, the target configuration will be based on some existing systems, with additional capacity or capabilities added to accommodate workload growth. It may be helpful to break the configuration into the types of I/O devices listed above. If multiple FLEX-ES instances are to be implemented, two views of the I/O configuration should be developed:
• A consolidated view of all devices for all instances. A FLEX-ES resource file defining all the devices will allow you to start multiple instances of S/390s at the same time. For example, you might want to run an OS/390 system and a VSE system. By defining all the I/O devices needed for both systems in the same FLEX-ES resource definition file, you can run two S/390 instances (one for OS/390 and one for VSE) at the same time.
• Separate views of the configuration for each instance. Using the same example, you would define separate resource definitions for your VSE and your OS/390 systems. Using these separate definitions, you could run one or the other at a given time. Separate resource definitions tend to be simpler and easier to configure.

Developing these two views of the configuration will simplify the development of the FLEX-ES configuration file.

Each planned I/O device should be identified as to:
• Device type and model
• Device address (physical device address, if appropriate, and logical device number)
• Which FLEX-ES instances will use it
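The two-view idea can be made concrete with a small sketch. This is a hypothetical planning aid, not part of FLEX-ES; the device entries are invented for illustration.

```python
# Hypothetical sketch of the two planning views described above: a
# consolidated view of every device, and a separate per-instance view.
from collections import defaultdict

planned = [
    # (device type, device number, FLEX-ES instances that use it) - invented
    ("3390-3", 0x100, ["vm"]),
    ("3390-3", 0xA80, ["os390"]),
    ("3172",   0x300, ["vsec"]),
    ("CTC",    0x600, ["vm", "vsec"]),   # emulated CTC shared by two instances
]

# Consolidated view: everything in one list, as for a single resource file
consolidated = [(dtype, devno) for dtype, devno, _ in planned]

# Separate views: one device list per instance, as for separate definitions
per_instance = defaultdict(list)
for dtype, devno, users in planned:
    for inst in users:
        per_instance[inst].append((dtype, devno))

print(sorted(per_instance))   # ['os390', 'vm', 'vsec']
```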

4.1.2 Planning the processor configuration

The primary considerations for planning the processor configuration are:
• Allocating available S/390 memory between multiple FLEX-ES instances
• Allocating enabled S/390 processors between multiple instances

There are only a few simple rules to follow:
• The total S/390 memory in use at any one time cannot exceed the amount of memory allocated to the FLEX-ES memory manager. If parallel channel adapters (PCAs) are used, only a limited amount of memory (approximately 512 MB) can be defined as S/390 memory.1 All concurrent FLEX-ES instances (that use PCA definitions) must work from this memory pool.
• No single S/390 instance can have more processors than have been enabled for S/390 processing2 on the NUMA-Q EFS system.
• If dedicated processors3 are used, the total number of dedicated processors in use at one time cannot exceed the number of enabled S/390 processors.
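These rules lend themselves to a simple mechanical check. The sketch below is our own illustration; the license limit and the exact memory pool size are assumptions for the example (check your FLEX-ES license terms for real values).

```python
# Sketch of the planning rules above. The enabled-CPU count and the memory
# pool size are assumed example values, not from a real license.
PCA_MEMORY_POOL_MB = 512   # approximate limit when PCAs are defined
ENABLED_CPUS = 4           # processors enabled for S/390 (assumed)

def plan_errors(instances):
    """instances: dicts with name, memory_mb, cpus, dedicated, uses_pca."""
    errs = []
    pca_mem = sum(i["memory_mb"] for i in instances if i["uses_pca"])
    if pca_mem > PCA_MEMORY_POOL_MB:
        errs.append("PCA instances exceed the ~512 MB memory pool")
    for i in instances:
        if i["cpus"] > ENABLED_CPUS:
            errs.append(f"{i['name']}: more CPUs than are enabled for S/390")
    if sum(i["cpus"] for i in instances if i["dedicated"]) > ENABLED_CPUS:
        errs.append("dedicated CPUs exceed the enabled S/390 processors")
    return errs

plan = [
    {"name": "vm",    "memory_mb": 128, "cpus": 2, "dedicated": False, "uses_pca": True},
    {"name": "os390", "memory_mb": 256, "cpus": 2, "dedicated": False, "uses_pca": True},
]
print(plan_errors(plan))   # [] -> this plan fits the rules
```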

4.1.3 The ITSO project configuration

For the purposes of this redbook project, we wanted to demonstrate both the general capabilities of the system and the ability to define and run multiple instances of different S/390 operating systems. To accomplish this, we obtained copies of three different operating systems:

1 See the comments in "Limitations and concerns" on page 3 concerning possible changes to various limitations.
2 The term "enabled for S/390 processing" is a key term that relates to the FLEX-ES license conditions on your machine. The license (and the associated FLEX-ES code that is delivered) specifies the maximum number of Pentium processors that can be used for S/390 processing.
3 You can optionally specify (in your S/390 system definition files) that a number (limited by your license) of Pentium processors be dedicated solely to S/390 work. This improves performance by reducing cache invalidations and so forth.

• VM/ESA V2.4
• VSE/ESA V2.4
• OS/390 V2.9

The tables below present a summary of the configuration for each of these systems, followed by the total combined configuration. It should be noted that in a true "production" environment, it might be more efficient and practical to run the VSE/ESA and OS/390 systems as guests of VM/ESA. Although we did run VSE/ESA and OS/390 as guests of VM/ESA, our primary objective was to investigate running multiple S/390 instances.

Table 1: VM/ESA I/O configuration

Count  Device Type     Device Number  Used for      Comments
5      3390-3          100 - 104      Disks         ECKD DASD
1      2540R           00C            Card reader   Emulated
1      2540P           00D            Card punch    Emulated
1      1403            00E            Printer       Emulated
8      3270            020 - 027      Terminals     Terminal Solicitor
1      3480            181            Tape          FakeTape
2      CTC             600 and 620    Interconnect  Emulated CTC
2      3172            800 - 801      LAN access    TCP/IP access

Table 2: OS/390 configuration

Count  Device Type     Device Number  Used for      Comments
1      2540R           00C            Card reader   Emulated
1      2540P           00D            Card punch    Emulated
1      1403            00E            Printer       Emulated
2      3480            560 - 561      Tape          FakeTape
3      3390-3          A80 - A82      Disks         System DASD
1      3390-3          A87            Disk          System DASD
3      3390-1          A8A - A8C      Disks         Work packs
1      3172            E20 - E21      LAN access    TCP/IP access
1      CTC             E22            Interconnect  Emulated CTC
16     3270            700 - 71F      Terminals     Non-SNA
1      Parallel Chan.  900 - 90F      Chan. attach  PCA channel
32     3270            800 - 81F      Terminals     Channel attached

Table 3: VSE/ESA configuration

Count  Device Type  Device Number  Used for      Comments
1      3270         01F            Console       Operator
1      2540R        00C            Card reader   Emulated
1      2540P        00D            Card punch    Emulated
1      1403         00E            Printer       Emulated
6      ICA          038 - 03D      SDLC          FSI ICA adapter
3      9336-20      140 - 142      VSEF system   FBA: not striped
2      3390-3       150 - 151      VSEC system   CKD: striped
8      3270         200 - 207      Terminals     Non-SNA
2      3172         300 - 301      LAN access    TCP/IP access
1      CTC          400            Interconnect  Emulated CTC
1      3480         500            Tape          FakeTape
3      3490         590 - 592      Tape          SCSI tape

Table 4: Combined I/O configuration

Count  Device Type     Device Number  Resource Name  Comments
3      Unit record     00C, D, E      vm2821         Emulated rdr, pun, prt
16     3278            020 - 02F      vm3274         3270 sessions
2      3172            800 - 801      vm3172         Ethernet LAN, IP
1      3480            181            vm3480         FakeTape
1      CTC             600            vmctca1        Virtual CTC to VSE
1      CTC             620            vmctca2        Virtual CTC to OS/390
5      3390            100 - 104      vmdasd         2.4 base + zVM
5      3390            120 - 124      vmdasd2        2.4 base AD CD sys
1      3390            140            vmdasd3        3390-2
1      3390            160            vmdasd4        Test 3390-1
1      Parallel Chan.  multiple       oschpbt0       PCA channel
3      Unit record     00C, D, E      os2821         Emulated rdr, pun, prt
3      Unit record     00C, D, E      os12821        Another set
2      3480            560 - 561      os3480         SCSI or FakeTape
32     3278            700 - 71F      os3274         Emulated terminals
16     3278            700 - 70F      os13274        3270s for 2nd OS
16     3270            900 - 90F      N/A            Channel attached
16     3390            A80 - A8F      os3390         Emulated DASD
1      3380            120            os3380         3380 for 2nd OS
2      3172            E20 - E21      os3172         LAN: /dev/pe2
1      CTC             E22            osctc          Virtual CTC
3      Unit record     00C, D, E      vsec2821       Emulated rdr, pun, prt
6      SDLC - ICA      038 - 03D      vsec2701d      ICA - SDLC
2      3390            150 - 151      vsecdasd       VSE CKD system
3      9336-20         140 - 142      vsefdasd       VSE FBA system
1      3278            01F            vseccons       VSE console
8      3278            200 - 207      vsec3274       VSE terminals
2      3172            300 - 301      vsec3172tr     SNA over Token Ring
1      CTC             400            vsecctc        CTC to FBA sys
1      3480            500            vsec3480ft     FakeTape
2      3490            590 - 591      vsec3490scsi   SCSI 3490

Once the overall system configuration has been determined, the next steps would normally be:
• Allocate the available NUMA-Q disk drives into one or more SVM disk groups.
• Allocate ptx/SVM volumes within the disk groups, specifying desired striping and mirroring characteristics, if appropriate.4

4 The disks and disk controllers we used did not provide hardware augmentation such as mirroring, RAID-5, or striping. We needed to implement these within SVM. Later models of EFS machines are expected to include disk subsystems that provide these functions; in this case, it would not be necessary to specify these details at the SVM level.

• Format the ptx/SVM volumes into emulated S/390 volumes using the ckdfmt and fbafmt commands.
• Initialize the S/390 volumes via standard S/390 utilities such as ICKDSF.
• Load the desired S/390 volumes onto the emulated volumes.

Under some circumstances, the ckdfmt or fbafmt step, and the ICKDSF step, may not be required. This will occur when data is being restored with other utilities which will create the needed format information, such as the ckdconvaws and ckdrestore commands. Examples of the use of these commands are provided in Chapter 8, “Loading CD-ROM systems” on page 77.

4.2 Configuring emulated disk volumes

Much of the disk configuration process is actually done via the DYNIX/ptx Storage Volume Manager (SVM). The details of using SVM to configure the emulated disk volumes are covered in Appendix B.4, "Allocating S/390 volumes" on page 133. This section will cover the process at a summary level, primarily to discuss the configuration decisions we made for the system used in the project, and the reasons behind those decisions. The DYNIX/ptx documentation uses the terminology "ptx/SVM objects" to refer to disks and volumes, so we will use the term "ptx/SVM" when referring to objects at the DYNIX level, and "S/390" when referring to objects as seen from the S/390 environment.

4.2.1 A brief introduction to SVM

SVM manages the disk space in the NUMA-Q disk subsystem. At a conceptual level, SVM combines multiple physical disk drives into disk groups, which are then logically subdivided into ptx/SVM volumes for use by the S/390 systems. Depending on the type of S/390 disk device to be emulated, one or more ptx/SVM volumes will be required to emulate each S/390 volume.

In normal DYNIX/ptx usage, a traditional UNIX file system is built on a ptx/SVM volume. For EFS, the ptx/SVM volumes are used as raw disks. FLEX-ES emulates S/390 DASD using DYNIX/ptx raw disks. This is done for performance. The disadvantage is that many of the concepts and commands for traditional UNIX files do not apply to the contents of raw disks.

Via SVM, each ptx/SVM volume can be created as a simple disk, striped across multiple physical disks, mirrored for redundancy, or some combination of these attributes. Probably the most common allocation will be striped across multiple disks and mirrored. As part of the project, we defined several different types of disks in order to document the different processes and to see the different results.

In our NUMA-Q, we had access to a total of 23 physical disks, spread across two physical disk bays, 11 in one bay and 12 in the other.5 We determined that we had a total of four uses for these disks:
• DYNIX/ptx system use (swapping, temporary storage, etc.)
• Emulated volumes for the OS/390 system
• Emulated volumes for the VM/ESA system
• Emulated volumes for the VSE/ESA system

5 This count does not include the 36 GB disk drive in the boot bay, where DYNIX/ptx resides. This drive is not managed by SVM. We elected not to use it for S/390 volumes. Spare space on it could be used, but the disk management techniques involved would be different than for the drives totally managed by SVM.

We decided to map these requirements against the available devices in the following manner:
• Two physical drives were allocated to a disk group (named sysdg), not mirrored, for use as DYNIX/ptx system swap areas and other temporary files. (Lack of redundancy was accepted because no critical data would reside on these drives. For enhanced availability, these disks should be mirrored so the system would continue running even if one of these drives failed.) This disk group was created by using the command vxdiskadm, which invokes a text-based menu system.
• A total of 21 drives were configured into a disk group we named s390dg. They were then allocated as:
  • Four drives, mirrored onto another four drives, for use by OS/390. The ptx/SVM volumes would be striped across the four drives. (The mirrored drives would reflect the same striping arrangement, of course.)
  • Four drives, mirrored onto another four drives, for use by VM/ESA. The ptx/SVM volumes would be striped across the four drives.
  • Two drives, mirrored onto another two drives, for use by VSE/ESA. For comparison purposes, we striped CKD volumes across these two drives, but did not stripe FBA volumes.

This configuration provides about 35 GB for VM/ESA, 18 GB for VSE/ESA, and 35 GB for OS/390. There was one other drive, which could be used for temporary data or as a spare in case one of the other drives failed. This disk group was also created via the vxdiskadm menus.
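The capacity arithmetic behind these totals is simple. The per-drive size used below (~9 GB) is an assumption inferred from the stated totals, not a figure from the text; mirroring halves the usable space because every byte is written twice.

```python
# Rough capacity arithmetic for the layout above. DRIVE_GB is an assumed
# per-drive size inferred from the stated totals (about 35 GB from 8 drives).
DRIVE_GB = 9

def usable_gb(total_drives, mirrored=True):
    """Usable space when half the drives mirror the other half."""
    return total_drives * DRIVE_GB // (2 if mirrored else 1)

print(usable_gb(8))    # VM/ESA: 4 drives mirrored onto 4  -> ~36 (about 35 GB)
print(usable_gb(8))    # OS/390: same layout
print(usable_gb(4))    # VSE/ESA: 2 drives mirrored onto 2 -> 18
```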

This total configuration is illustrated in Figure 6.

[The figure shows the DYNIX/ptx SVM-controlled disks: the s390dg disk group contains three mirrored sets of drives, used for VM/ESA, OS/390, and VSE/ESA respectively, plus a spare drive; the sysdg disk group contains the two drives used for swap.]

Figure 6. Physical and logical disk configuration

There is no requirement to separate the emulated disks of the various FLEX-ES instances as we did. It was done in this manner purely to simplify the administration and measurement process.

4.2.2 Defining the disk space to DYNIX/ptx

SVM was used to allocate the required ptx/SVM volumes within the s390dg disk group that we created. A ptx/SVM volume naming convention was adopted in order to more easily identify the use of any specific ptx/SVM volume. The naming convention consisted of a ptx/SVM volume name of vvttttmsn, where:

vv    Relative S/390 volume number in the disk array. (We assigned ranges of volume numbers to each S/390 operating system; thus, VM/ESA had volumes in the range 01 to 19, OS/390 had volumes in the range 21 to 39, and VSE/ESA had volumes in the range 41 to 59.)
tttt  Emulated disk type.
m     Emulated disk model.
sn    The lowercase letter s, followed by a single digit that is the ptx/SVM volume number within the S/390 volume. When multiple ptx/SVM volumes are combined to form a single S/390 volume, this final digit must start at 1 and increment by one for each additional ptx/SVM volume included in the S/390 volume.
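The vvttttmsn convention can be parsed mechanically. The following is our own illustrative parser, not part of FLEX-ES; for example, the name 2133903s1 decodes as volume 21 (the OS/390 range), a 3390 model 3, first ptx/SVM slice.

```python
# Sketch of the vvttttmsn naming convention described above (our own parser,
# not a FLEX-ES utility). Assumes a 4-digit device type, as in our examples.
import re

NAME_RE = re.compile(r"^(\d\d)(\d{4})(\d)s(\d)$")

def parse_name(name):
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"not a vvttttmsn name: {name}")
    vv, tttt, model, slice_no = m.groups()
    return {"volume": int(vv), "type": tttt,
            "model": int(model), "slice": int(slice_no)}

print(parse_name("2133903s1"))
# {'volume': 21, 'type': '3390', 'model': 3, 'slice': 1}
```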

Because the commands to create and mirror the volumes are rather long, we created shell scripts with the needed commands and executed the scripts. Samples of the vxassist commands to create and mirror the volumes are in Appendix B, “The Storage Volume Manager (SVM)” on page 131.

4.2.3 Defining volumes greater than 2 GB

The early release of FLEX-ES that we used could only access ptx/SVM volumes which are 2 GB or smaller in size. Since some S/390 volumes (such as 3390-3s) are larger than 2 GB, multiple ptx/SVM volumes are combined to provide one S/390 volume. To provide maximum flexibility, we allocated all S/390 CKD volumes using one or more ptx/SVM volumes, each of which was defined as the size of a 3390 model 1. Using this approach, each 3390-1 required one ptx/SVM volume, each 3390-2 required two ptx/SVM volumes, and each 3390-3 required three ptx/SVM volumes. The emulated FBA devices were less than 2 GB, so they were allocated as single ptx/SVM volumes of the appropriate size.

A later release of FLEX-ES, available after we finished our projects, can handle SVM volumes larger than 2 GB, so using multiple SVM volumes for a single 3390 volume is no longer necessary. This redbook relates our actual use of the system and describes the procedures we used at the time, when we were limited to smaller SVM volumes.

After the ptx/SVM volumes have been defined, three volumes the size of a 3390-1 can be combined into one volume to create a 3390-3. This is done automatically when the first of the three volumes is specified for initialization by the FLEX-ES formatting program (ckdfmt).6 The ckdfmt program recognizes the disk allocations involved and logically combines the three ptx/SVM volumes into one S/390 volume. In all further references to the S/390 volume, only the first of the related ptx/SVM volumes is referenced.

6 This works only if a correct naming convention is used for the ptx volumes.

4.2.4 Initializing the volumes for FLEX-ES

FLEX-ES provides two programs for initializing the ptx/SVM volumes which are being used for S/390 volumes:

ckdfmt  Formats volumes to emulate (E)CKD type disk devices
fbafmt  Formats volumes to emulate FBA type disk devices

Both programs prepare the ptx/SVM volumes for use by the S/390 environment. These utilities recognize when multiple ptx/SVM volumes must be combined to form a single S/390 volume of the target size, such as a 3390-3.

The ptx/SVM volumes, which are used as raw disks and are created using the commands shown in "The Storage Volume Manager (SVM)" on page 131, are represented as objects in the /dev/vx/rdsk/s390dg directory. The last directory level, s390dg, is the name of the SVM disk group we created. The file names in this directory, such as 2133903s1, 2133903s2, and so forth, reflect the naming conventions we devised for the ptx/SVM volumes.

The following illustrates the command to combine three ptx/SVM volumes named /dev/vx/rdsk/s390dg/4133903s1, s2, and s3 into a single 3390-3 volume and format the volume for use by FLEX-ES:

# ckdfmt /dev/vx/rdsk/s390dg/4133903s1 3390-3
The following slices will be formatted to create one CKD disk:
        /dev/vx/rdsk/s390dg/4133903s1 (cylinders 0 - 1117)
        /dev/vx/rdsk/s390dg/4133903s2 (cylinders 1118 - 2235)
        /dev/vx/rdsk/s390dg/4133903s3 (cylinders 2236 - 3342)
Do you wish to continue (default: n) [y,n,?] y
Max = 14, cyl = 3343, blks = 57

Note that the command named only the first ptx/SVM volume. If more space is needed, the ckdfmt command automatically looks for more ptx/SVM volumes with the same name up to the “sn” that forms the last two characters of the name. If such a volume is found, and if the “n” is one greater than the preceding ptx/SVM volume, the new ptx/SVM volume will be incorporated into the S/390 volume being formatted.
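The slice-discovery behavior just described can be sketched as follows. This is our own illustration of the rule, not FSI's actual code; the slices-per-model table simply restates the 3390-1-sized allocation strategy described earlier.

```python
# Sketch (not FSI's actual code) of the slice-discovery behavior described
# above: starting from the "s1" name, ckdfmt looks for s2, s3, ... until it
# has enough 3390-1-sized slices for the requested device model.
SLICES_NEEDED = {"3390-1": 1, "3390-2": 2, "3390-3": 3}

def slices_for(first_name, model):
    assert first_name.endswith("s1"), "formatting always names the first slice"
    stem = first_name[:-1]                     # drop the trailing slice digit
    return [f"{stem}{n}" for n in range(1, SLICES_NEEDED[model] + 1)]

print(slices_for("4133903s1", "3390-3"))
# ['4133903s1', '4133903s2', '4133903s3']
```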

If a disk volume will be loaded with S/390 data via some other method that provides the S/390 formatting, such as restoring an S/390 volume from an appropriate CD-ROM, formatting with ckdfmt is not required. For further information on using the preconfigured CD-ROM systems, see Chapter 8, "Loading CD-ROM systems" on page 77. Likewise, if an S/390 volume is being restored (using the ckdrestore command) from a file created with the ckdbackup program, preformatting with ckdfmt is not required. If in doubt, use the formatting utilities; they can do no damage, and at worst only take some time. (Formatting a 3390-3 took 12 to 15 minutes.)

As a test of our strategy of creating all ptx/SVM volumes as 3390-1 size, we later reused one of the 3390-3 volumes as one 3390-2 and one 3390-1. This was done by reformatting the appropriate ptx/SVM volumes as 3390-2 and 3390-1 and updating the corresponding FLEX-ES resource configuration. The format commands were:

ckdfmt /dev/vx/rdsk/s390dg/0933903s1 3390-2
ckdfmt /dev/vx/rdsk/s390dg/0933903s3 3390-1

In this case, the first ckdfmt command did not use the "s3" volume, because it had enough space for the 3390-2 in the "s1" and "s2" volumes.

4.2.5 Initializing the emulated volumes for use by S/390

After the volumes have been created and initialized by the FLEX-ES programs, they will usually need to be initialized by standard S/390 utilities before they can be used as S/390 volumes. Most frequently, the normal ICKDSF utility will be used to prepare the S/390 volumes. Only the basic INIT function need be specified, along with any other parameters needed for ICKDSF to prepare the volume for use by the planned S/390 operating system. For example, to initialize a 3390 for VSE/ESA:

INIT SYSNAME(SYS000) NVFY VSEVTOC(0,1,1) VOLID(AA3390)

Emulated FBA disk devices must also be initialized by ICKDSF prior to use.

4.2.6 Loading S/390 data on an emulated disk volume

Once an S/390 volume is initialized with ICKDSF, any applicable S/390 utility may be used to transfer data to the new volume. VM systems might use DDR, VSE systems would use FASTCOPY, and OS/390 could use DFSMSdss. Equivalent utilities from other software suppliers can also be used. At this point, the emulated S/390 volumes can be treated just as any normal S/390 disk device.

4.3 Adding LAN connections

All NUMA-Q EFS systems will require at least one LAN adapter, usually either a four-port Ethernet adapter or a token ring adapter. Additional Ethernet or token ring adapters can be installed. Most S/390 systems will need one or more of these Local Area Network connections to support TCP/IP and/or SNA protocols. These connections are accomplished by using LAN adapters in the NUMA-Q, which FLEX-ES then utilizes to emulate channel-attached 3172 devices. The S/390 operating system can then use these emulated 3172 units to connect to the LAN. This section describes how to add and configure these LAN connections. The primary steps are:
• Adding LAN adapter(s), if required
• Configuring the LAN adapter port(s) to the DYNIX/ptx system
• Configuring the LAN adapter port(s) to the FLEX-ES environment
• Allocating a LAN adapter port to a specific FLEX-ES system image

Within FLEX-ES emulation, LAN adapters can be used for multiple protocols if each protocol uses a different Service Access Point (SAP). For example, a token ring adapter can be used for CETI SNA emulation and CETI TCP/IP emulations since SNA uses SAP 0x4 (by default) and TCP/IP uses 0xAA. Multiple TCP/IP stacks cannot run on the same adapter. An adapter cannot be shared between DYNIX/ptx TCP/IP and FLEX-ES 3172 emulation.
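The adapter-sharing rule reduces to a check that no two protocols on one adapter use the same SAP. The sketch below is our own illustration; the SAP values are the defaults given in the text.

```python
# Sketch of the adapter-sharing rule above: protocols can share one LAN
# adapter only if each uses a different Service Access Point (SAP).
SNA_SAP = 0x04     # SNA default SAP, per the text
TCPIP_SAP = 0xAA   # TCP/IP SAP, per the text

def can_share(saps):
    """True if every protocol on the adapter has a distinct SAP."""
    return len(saps) == len(set(saps))

print(can_share([SNA_SAP, TCPIP_SAP]))    # True: SNA + TCP/IP on one token ring
print(can_share([TCPIP_SAP, TCPIP_SAP]))  # False: two TCP/IP stacks, one adapter
```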

4.3.1 Adding LAN adapters

As with all components internal to the NUMA-Q system, any additional LAN adapters will be installed by IBM NUMA-Q service specialists. Customers and users should not attempt to add such adapters themselves.

4.3.2 Configuring LAN adapter ports to DYNIX/ptx

DYNIX/ptx will detect new LAN adapters at boot time and do the basic configuration functions automatically. The first Ethernet LAN port, typically owned by DYNIX/ptx, will generally be named /dev/net/pe0 and is physically the lowest of the four connectors on the four-port Ethernet adapter. The other Ethernet LAN ports will be named /dev/net/pe1, /dev/net/pe2, and /dev/net/pe3 and will be available for FLEX-ES use. Token ring adapters will be configured as /dev/net/tr0, /dev/net/tr1, and so forth. The LAN adapter port that DYNIX/ptx itself will use must be configured to DYNIX/ptx. We used the text-based menu system by invoking menu while in su mode and in the root directory:

menu --> Network Administration --> TCP/IP Management --> TCP/IP Administration
     --> Basic Interface Configuration --> Add Interface

The necessary parameters can be entered in this menu panel. (The Interface Name is /dev/net/pe0 for the first Ethernet port.)

4.3.3 Configuring LAN adapter ports for FLEX-ES

The boot-time autodetect process for new adapters will not cause any protocol stack to be loaded for the new LAN ports. Since they have no protocol stack and no DYNIX/ptx use, they will be available to FLEX-ES with no further actions. We generally think of making devices available to FLEX-ES by coding resources in the resources section of the FLEX-ES configuration file. The resources are then made available to the S/390 system by coding device statements in the system section of the configuration file. Since the LAN adapters can be used in several ways, to avoid confusion we will present paired examples of resources and system definitions in the following examples.

One parameter commonly used in the TCP/IP or VTAM configuration files on S/390 systems is the “adapter number”. A unique aspect of the FLEX-ES emulation of 3172s is that the adapter for any instance appears as “adapter 0”, no matter which adapter it is physically on the NUMA-Q system.7

4.3.4 Allocating LAN adapter ports to S/390 images

S/390 systems, through FLEX-ES, can use LAN adapters for TCP/IP or for SNA. This section briefly illustrates the FLEX-ES definitions required.

4.3.4.1 Ethernet port for TCP/IP access

Most S/390 systems in this environment will require at least one Ethernet port to access the local TCP/IP network. This access is provided by FLEX-ES emulating a 3172 with an Ethernet adapter. A typical FLEX-ES resource definition for this would look like:

cu1900: cu 3172                     # IP 3172 ports at 900
  interface local(1)
  device(00) 3172 /dev/net/pe1      # DYNIX device name
  device(01) 3172 OFFLINE           # odd part of even/odd pair
end cu1900

LCS interfaces for S/390 appear as two addresses. One is used for receiving and one for sending. In the S/390 operating system, both interfaces must be online in

7 You can override this default action and assign a value other than 0 to any LAN adapter. There is normally no requirement to do this.

38 NUMA-Q and S/390 Emulation order to operate. However, at the FLEX-ES level, the second interface must be defined offline. The matching system configuration statement would be: cu devad(0x900,2) path(3) resource(cu1900) # IP 3172 on Ethernet

This would appear to the S/390 system as a 3172 at device number 900 (with an even/odd pair) and would use the Ethernet port identified by the DYNIX device name /dev/net/pe1.
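Since every LCS definition follows the same even/odd pattern, the paired stanzas can be generated mechanically. The following sketch is our own illustration (the helper name and layout are invented, not part of FLEX-ES); it reproduces the cu1900 example above from a device number and a DYNIX device name:

```python
def lcs_stanzas(name, devnum, dynix_dev, path):
    """Build a FLEX-ES 3172 (LCS) resource stanza and its matching
    system statement. An LCS device occupies an even/odd address pair;
    the odd member is defined OFFLINE at the FLEX-ES level."""
    resource = "\n".join([
        f"{name}: cu 3172                  # IP 3172 ports at {devnum:x}",
        "  interface local(1)",
        f"  device(00) 3172 {dynix_dev}    # DYNIX device name",
        "  device(01) 3172 OFFLINE         # odd part of even/odd pair",
        f"end {name}",
    ])
    system = f"cu devad(0x{devnum:x},2) path({path}) resource({name})"
    return resource, system

res, sys_stmt = lcs_stanzas("cu1900", 0x900, "/dev/net/pe1", 3)
print(sys_stmt)  # cu devad(0x900,2) path(3) resource(cu1900)
```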

4.3.4.2 Token ring port for TCP/IP access
Token Ring adapters and ports can also be defined as 3172 devices and used to access a Token Ring-based TCP/IP network. The definitions would be identical to the Ethernet definition above, but with a device name of /dev/net/tr0 (or something similar).

4.3.4.3 Token ring port for SNA access
Token Ring adapters can also be used to provide access to an SNA-based LAN environment. To do this, FLEX-ES emulates a Continuously Executing Transfer Interface (CETI) device. A resource definition for this could be:

  cu1800: cu 6034                      # SNA 3172 port at 800
    interface local(1)
    device(00) ceti8025 /dev/net/tr0   # Assume adapter 0
    device(01 - 03) ceti8025 OFFLINE
  end cu1800

The matching system device definition would be:

  cu devad(0x800,4) path(3) resource(cu1800)   # SNA CETI on Token Ring

A CETI definition requires four addresses, with the last three defined as OFFLINE, as shown here.

4.3.4.4 Ethernet port for SNA access
At the time of writing, SNA across Ethernet is not supported by the FLEX-ES environment. There is currently no way to provide SNA connections via an Ethernet adapter on the NUMA-Q EFS. If this function is required, some other method of attaching an Ethernet adapter to the system will be required, such as a real 3172 or an equivalent controller.

4.3.4.5 Intersystem TCP/IP connections
There are several ways to use TCP/IP to provide connections between multiple NUMA-Q EFS systems. Such system interconnections can be done either via network channels or by emulating a CTC connection between systems.

Network channel
A system definition on one system can refer to a resource on another system by prefixing the target resource name with the hostname of the target system. For example:

  cu devad(0x160,16) path(2) resource(dasdfarm:cu1160)   # 3990/3390s at 160-16f

This would allow access to the resource cu1160 on a system known as dasdfarm as though it were on the same system. Any I/O to this resource would actually be done via TCP/IP across the network connection, but the S/390 system accessing the disks on dasdfarm would not be aware of the remote nature of the emulated disks. This is referred to as a network channel device, because the channel referenced by the path(2) part of the definition would be defined as:

  channel(2) network

Chapter 4. Typical customization tasks and considerations 39

This indicates that devices defined on that channel are actually on a remote system.
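The hostname:resource convention is simple to model. As a small illustration (our own sketch, not FLEX-ES code), a resource reference either carries a remote hostname prefix or is local:

```python
def parse_resource_ref(ref):
    """Split a FLEX-ES resource reference into (host, name).
    'dasdfarm:cu1160' -> ('dasdfarm', 'cu1160'); a plain name is
    local, returned with host None."""
    host, sep, name = ref.partition(":")
    return (host, name) if sep else (None, ref)

print(parse_resource_ref("dasdfarm:cu1160"))  # ('dasdfarm', 'cu1160')
print(parse_resource_ref("cu1160"))           # (None, 'cu1160')
```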

Emulated channel-to-channel
A TCP/IP connection can also be used to emulate a CTC connection between systems running on two different physical machines (nodes). To accomplish this, a CTC resource with two interfaces is defined on one of the nodes, say, NODE1:

  ctc2: cu ctc
    interface local(1)
    interface network(1)
    device(00) ctc
  end ctc2

On the same (local) node, a CTC device is defined to use this resource as a CTC at address 200 as part of the system definition for system SYS1:

  channel(0) local
  cu devad(0x200,1) path(0) resource(ctc2)

On the other system (SYS2 running on NODE2), another CTC is defined at address 100 referencing this same resource, but indicating that it is to be accessed on a system named NODE1 via the network interface:

  channel(0) local
  cu devad(0x100,1) path(0) resource(NODE1:ctc2)

When both nodes and systems are initialized and running:
• NODE1:SYS1 will have a CTC at 200 which is connected to NODE2:SYS2.
• NODE2:SYS2 will have a CTC at 100 which is connected to NODE1:SYS1.

Note that emulated channels can also be defined between systems on the same node, but this will be covered in “Virtual CTC connections” on page 48.

4.4 Tape drives
In addition to traditional channel-connected tape drives, SCSI-connected tape drives can also be used by S/390 systems running on an EFS system. Most SCSI tape drives will be ones that use compatible media, allowing data interchange with other systems that use traditional tape drives. Examples would be SCSI versions of 3420, 3480, and 3490 tape drives. In some systems, it may be desirable to use other media types, most commonly Digital Linear Tape (DLT), for lower cost and greater capacity. This section discusses some of the considerations of using SCSI-connected tapes and how to install and configure such tapes for use by the S/390 systems.

4.4.1 General SCSI tape drive considerations
There are a number of factors to consider when selecting tape drives in general and SCSI tape drives specifically:
• Cost (both initial and maintenance)
• Performance (data transfer rates and capacity)
• Media compatibility

We will first examine these factors when deciding whether to use SCSI-attached tape drives or parallel channel-attached drives.

4.4.1.1 Cost
Most S/390 installations running traditional mainframe type systems will already have some form of channel-attached tape drives. In this case, initial acquisition costs are very low, but maintenance costs may be high. A cost comparison over a three-year period should probably be done to determine the most economical solution. New SCSI tape drives will generally cost more than equivalent used channel-attached tape drives, but will have significantly lower maintenance costs.
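The suggested three-year comparison is straightforward arithmetic. The figures below are placeholders chosen for illustration, not real prices; the point is that a higher purchase price can be offset by lower maintenance costs:

```python
def three_year_cost(purchase, annual_maintenance, years=3):
    """Total cost of ownership over the comparison period."""
    return purchase + years * annual_maintenance

# Hypothetical figures, for illustration only
used_channel_drive = three_year_cost(purchase=2000, annual_maintenance=6000)
new_scsi_drive = three_year_cost(purchase=9000, annual_maintenance=1500)
print(used_channel_drive, new_scsi_drive)  # 20000 13500
```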

4.4.1.2 Performance
Most of the newer media-compatible SCSI tape drives can perform as well as or better than current channel tape drives. For media-compatible tape drives (3480, 3490), capacity is not an issue, since it must be the same as the “original” drive.

4.4.1.3 Media compatibility
By definition, 34x0 tape drives must be media-compatible, whether channel- or SCSI-attached.

4.4.1.4 IBM software distribution
Within the installation, there must be a tape drive that can read a type of media which is supported by IBM (and other software vendors) for software distribution. All S/390 software is available on 3480 cartridges, and most software is offered on 4mm DAT cartridges. Some software (mostly VM/ESA) is available on CD-ROM, but IBM does not offer any software on DLT media. Most installations will require some form of 3480 media-compatible tape unit for software installation and support.

4.4.1.5 Other considerations
Some other things to consider are:
• If some job streams call for multiple tapes at one time, the cost of new SCSI tape drives may become prohibitive (although the use of FLEX-ES’s “FakeTape” may reduce this problem).
• The price of the channel adapters on the NUMA-Q EFS should be considered when comparing the cost of SCSI and channel-attached tape drives. SCSI adapters are much less expensive than channel adapters.
• If an installation currently has channel-attached tape drives, it may make sense to initially continue using them, then acquire and migrate to SCSI units over time.
• A parallel channel and any tape drives on it will be dedicated to the S/390 instance for which it is defined and cannot be made available to a different S/390 instance without having another channel adapter available. SCSI tape drives, on the other hand, can be dynamically moved from one image to another via FLEX-ES Command Line Interface mount commands.

4.4.1.6 SCSI tape drive selection
There are a number of considerations for selecting SCSI tape drives. Probably the most important decision is whether to use compatible-media (34x0) tape drives or move to more current, but non-compatible, technology.

Compatible media
Compatible media is generally considered to mean compatible with 3420, 3480, or 3490 channel-attached tape drives. If data must be exchanged via tape with other installations, then at least one media-compatible tape drive must be available. Most installations today have a 3480-compatible tape drive available, so that would be considered the “most compatible” of the various media types. If a 3490 is to be used, and data must be exchanged with other sites, a 3490 tape drive that can also write in 3480 mode should be selected.

Non-compatible media
If interchangeable media is not important, or if a media-compatible tape drive is already available, it may be appropriate to obtain one or more non-compatible tape drives. The most commonly used non-compatible tape drive today is probably DLT. It offers very good performance, high capacity (up to 80 GB per tape cartridge when compressed), and good reliability. These DLT tape drives can be utilized either at the DYNIX/ptx level, using DYNIX/ptx tape commands, or at the S/390 level, with FLEX-ES providing the emulation layer to make the DLT tape drive look like a 3480 or 3490 to the S/390 system. SCSI tape drives can be moved between S/390 instances, or given to the DYNIX/ptx system for use.

Differential SCSI
There are three classes of SCSI devices:
• Single Ended SCSI
• Low Voltage Differential SCSI
• Differential SCSI

A complete description of these different SCSI interfaces is beyond the scope of this redbook, but a very basic understanding can be helpful. Single Ended (SE) SCSI and Low Voltage Differential (LVD) SCSI devices and controllers can generally be intermixed, although with some performance implications. Differential devices and controllers cannot be intermixed with anything else.

Differential SCSI tape drives are normally recommended for large, high-performance systems such as the NUMA-Q. This provides a more robust interface and more flexible cabling options. The only SCSI adapter available for and supported on the NUMA-Q systems is a differential SCSI adapter. Make sure that any tape drives selected for use on a NUMA-Q EFS have a differential SCSI interface.

4.4.2 Installing SCSI tape drives
A NUMA-Q SCSI adapter must be installed by trained IBM service personnel. Once it is installed, the DYNIX/ptx system will auto-detect it at boot time and configure it into the DYNIX/ptx configuration. If a tape drive is connected to the SCSI adapter, it will also be detected and configured by DYNIX/ptx.

4.4.2.1 Defining SCSI tape drive(s) to FLEX-ES
SCSI tape drives are configured to FLEX-ES and assigned to the S/390 operating system via the resource definitions and system definitions. A sample resource definition for a SCSI tape drive at SCSI address 5 would look like this:

  cu1383: cu 3490        # 3490 Tape Control, DLT tape drive
    interface local(1)   # one local interface
    device(00) 3490 /dev/scsibus/scsibus0c devopt 'scsitarget=5'
    device(01) 3490 OFFLINE
  end cu1383

We have arbitrarily included a second device in the definition; it is not connected to a physical drive.

4.4.2.2 S/390 access to tape drive
A matching system definition for the above resource would be:

  cu devad(0x380,2) path(2) resource(cu1383)   # DLTs on SCSI adapter

The first tape device (380) will be initially defined to the SCSI tape device on SCSI address 5 and the second tape device (381) will initially be offline. A physical tape drive can be made available at address 381 via a mount command. Note that the two tape drives need not be the same type of drive; one could be a DLT and the other a 3490. Both would be presented to the S/390 system as 3490s.

4.5 Multiple S/390 CPU complexes (FLEX-ES instances)
One of the very useful features of FLEX-ES is the ability to define and use multiple S/390 CPU complexes running on a single NUMA-Q EFS. This is also referred to as running multiple FLEX-ES instances; each FLEX-ES instance provides an S/390 image. This multiple instance ability is sometimes referred to as being “LPAR-like”, but this is not an accurate description because there are important differences between PR/SM LPAR capabilities and the FLEX-ES multiple instance capabilities. This section will describe how to define and configure multiple S/390 instances and will also point out some of the differences from normal LPAR capabilities.

4.5.1 Building a multi-instance configuration
Once the structure of the system definition files is understood, it is very easy to define a multiple instance implementation. It is important to keep in mind that the FLEX-ES resource definitions and system definitions are really very separate and distinct. A defined system utilizes resources defined by a resource definition, but there is no hard link between the two; most currently operational resources (that is, defined in the resources file that was activated with the resadm command) can be used by any system instance that is subsequently started.

It is generally easiest to start with a single system and then add more systems or combine multiple resource files into one resource file. This was the approach we took for our project. We first defined and tested the three operating systems (VM/ESA, VSE/ESA and OS/390) as completely separate entities. Each operating system environment was defined by a configuration file which contained both the system definition and the resource definitions. (Refer to the sample configuration files in Appendix A, “Configuration file listings” on page 115 for specific details.)

To change from one operating system to another, we had to terminate all the running resources (resadm -T) and then activate the new set of resources (resadm -x resource-file-name). Once the new resources were started, the new system could be started via the flexes command.

After each system was tested individually, we combined them into a multi-instance configuration by:
• Ensuring that all resources for each system had unique resource names
• Separating the resource definitions from the system definitions
• Combining all of the resource definitions into one file and compiling it (see the combined source file in “FLEX-ES configuration for combined systems” on page 124)
• Compiling each system configuration individually

At this point we had one large resource definition file (combined.rescf) and three different system configuration files. We then started the combined resource file (resadm -x /usr/flexes/rundir/combined.rescf ) and verified that all desired resource names were present by doing a resadm -r command. Next we started the individual systems using the same startup scripts that had been developed for each system individually.
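The critical manual step in combining resource files is keeping resource names unique. A crude check along these lines can catch duplicates before compilation; this is our own sketch (the FLEX-ES tools themselves would report the error), matching stanza headers of the form "name: cu ...":

```python
import re
from collections import Counter

def duplicate_resource_names(config_text):
    """Return stanza names defined more than once in a combined
    resource file, based on 'name: cu ...' header lines."""
    names = re.findall(r"^\s*(\w+):\s*cu\b", config_text, re.MULTILINE)
    return [name for name, count in Counter(names).items() if count > 1]

combined = """
vmdasd: cu 3990
end vmdasd
cu1900: cu 3172
end cu1900
vmdasd: cu 3990
end vmdasd
"""
print(duplicate_resource_names(combined))  # ['vmdasd']
```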

In practice, it took several iterations through this process to get a fully debugged and operational set of system definition files, startup scripts, and resource file. Once this was done, this became our standard resource definition file and was used without significant changes for all the other exercises we performed. In a sense, the resource definition file is like an IOCDS: somewhat tedious to set up correctly, but, once done, it is stable for long periods.

4.5.2 Structure of multi-instance configuration file
Although we separated the system definitions from the resource definitions, this was done primarily as an operational convenience, since different people were working on each of the systems. It may be easier to visualize the necessary definitions as a single configuration file structured like the following:

  System VSEsystem:
    parameters for the VSE system
  end VSEsystem

  System VMsystem:
    parameters for the VM system
  end VMsystem

  System OS390system:
    parameters for the OS390 system
  end OS390system

  Resource combined:
    all resources for all systems
  end combined

The resources may be defined in any order. However, for human usability, we recommend one of two types of organization:
• By system: All resources for each system grouped together, probably in device number or device type sequence within each system.
• By device type: Some users may prefer to see all resources of the same type grouped together: all disks in one section, all tapes in another section, and all network devices in a third.

Because we combined multiple resource files to form one, we essentially implemented the “by system” organization.

4.5.3 Starting and running multiple instances
In our projects, we were primarily running DYNIX/ptx and FLEX-ES in a telnet environment rather than using some form of graphical interface such as X Windows. In this mode, it seemed to work best to start each S/390 system instance from a separate telnet session, each logged into the flexes userid. In a graphical environment, it may be better to initiate multiple terminal windows on the desktop and start each system instance in a different window. This can lead to a very busy desktop, however, especially if the same desktop is also used to host multiple 3270 sessions.

Once the combined resources were running, each system was started exactly as it had been when running as a single instance. We used the same sequence:
1. Start the system via the flexes command and leave a Command Line Interface prompt available.
2. Connect a 3270 session to the system console resource.
3. Issue the IPL command from the Command Line Interface prompt.

4.5.4 Multi-instance considerations and suggestions
During this process we found a number of unexpected (although understandable) restrictions and requirements. We also developed some of our own guidelines to simplify the creation and future maintenance of a multi-instance environment.
• The resource names for each operating system (instance) were prefixed with a unique string (VSE, VM, OS) so that there were no duplicate resource names.
• If a system definition is to use a Parallel Channel Adapter, then a memory resource must be defined in the resource definition file, and the name of this resource must match the name of the system with the PCA. This memory resource specifies the amount of the reserved contiguous memory that the system is to occupy.

4.5.4.1 Memory size considerations
The memory statement (resource definition) describes the amount of memory available for the named S/390 system when a parallel channel adapter is installed. When a system instance is started, it uses memory specified by this resource. Each system also specifies how much memory it uses in its memsize statement. FLEX-ES itself requires some of this memory, typically about 3% of the total. If the memory statement indicates that 512 MB is available, then a maximum of about 496 MB (0.97 * 512) will be available for any running systems.8

• If only one system instance is active, then its memsize must be 496 MB or less.
• If multiple instances with PCAs are active, the total of their memsize values must be 496 MB or less.
• Any system that does not include a PCA does not require a corresponding memory statement and will not occupy memory in the reserved contiguous memory area. These systems will be allocated memory from normal NUMA-Q DYNIX/ptx memory resources.
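The budget rules above reduce to simple arithmetic, sketched below. The 3% overhead figure is the approximation given in this section; the exact usable amount may differ slightly (a group cited in the footnote measured 492 MB rather than 496 MB):

```python
def pca_memsize_budget(contiguous_mb, overhead_fraction=0.03):
    """Usable S/390 central storage within the reserved contiguous
    memory, after the approximately 3% FLEX-ES overhead."""
    return int(contiguous_mb * (1 - overhead_fraction))

def memsizes_fit(contiguous_mb, memsizes_mb):
    """True if the memsize values of all PCA instances fit."""
    return sum(memsizes_mb) <= pca_memsize_budget(contiguous_mb)

print(pca_memsize_budget(512))        # 496
print(memsizes_fit(512, [256, 240]))  # True
print(memsizes_fit(512, [256, 256]))  # False
```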

Emulated expanded storage does not come out of this memory resource. We were able to specify and use Expanded Storage for each instance by specifying the essize parameter in the System Definition.

4.5.4.2 Moving devices between instances
Some devices can be dynamically moved from one instance to another. SCSI tape drives, for example, can be moved from one instance to another by a series of mount commands, such as:

  VM Instance                    VSE Instance                   Comments
  mount 181 SCSI-tape-address                                   Give tape to VM
  mount 181 OFFLINE                                             Take tape from VM
                                 mount 590 SCSI-tape-address    Give tape to VSE
                                 mount 590 OFFLINE              Take tape from VSE
  mount 181 SCSI-tape-address                                   Give tape to VM

The appropriate operating system commands should also be used when adding or removing a device. Most emulated devices can be dynamically reallocated in this manner. Devices on a Parallel Channel Adapter channel, however, cannot be moved from one instance to another: the PCA is essentially dedicated to an instance when that instance is started, and the channel and all devices on it remain dedicated to that system as long as it is active.

4.5.4.3 Exploiting multiple instances
In addition to the basic multi-instance capability, there are other FLEX-ES capabilities which help exploit the basic function:
• Virtual channel-to-channel connections can be defined between instances, providing very fast and inexpensive connectivity.
• Emulated S/390 DASD volumes can be shared between S/390 images (with the normal shared DASD cautions) to allow multi-system access to data. We used this ability briefly and it appeared to function properly in an OS/390 shared DASD environment.
• Many devices can be dynamically moved between instances, allowing more efficient use of relatively expensive devices, such as SCSI tape drives.

There are several ways to use the multiple image capabilities, ranging from the obvious to the rather obscure. Keep in mind that in some cases, it may be more appropriate to utilize VM/ESA to provide the multi-image capability rather than the FLEX-ES facilities. The examples below are intended as possible solutions only, not necessarily the best ones.

8 A different group, using OS/390 under EFS, found this number to be 492 MB. We did not attempt to find the exact maximum size that could be used.

Production and development
Many information technology environments would like to separate the production system from the development system to minimize the possible negative impact on production schedules.

System test and migration
Another form of this is a system test facility, in which new versions of the operating system are generated and tested in a different image prior to being put into production use. In the case of a migration from one operating system type or version to another, the two systems can actually operate at one time, sharing some devices if appropriate, while the workload is migrated from the old system to the new in a controlled and relatively relaxed manner.

Isolate Internet-accessible components
As S/390 systems are more frequently connected directly to the Internet to allow customer/user access, many installations are concerned about exposing their critical business data to possible unwanted intrusion. Some customers have implemented a second S/390 image that is used exclusively as the Internet server facility. All Internet access comes into this image, which has connectivity to only a limited subset of the system data. Data to be served by the Internet server can be placed on shared DASD by the production system, but disks containing key business-critical files are not even defined to the Internet server configuration. This provides effective isolation of the two systems while still allowing easily controlled data transfer between them.

4.5.5 How multiple FLEX-ES instances compare to LPARs
Although the FLEX-ES multiple instance capability is sometimes referred to as “LPAR-like”, there are significant differences. The preferred terminology is “multiple instances” or even “multiple S/390 images”. Some of the differences between PR/SM LPAR and multiple instances are:
• Lack of processor capping controls: LPARs provide LPAR capping so an installation can set a ceiling on the processor utilization of one LPAR. This can prevent a “test LPAR” from consuming more than 20% of the processor, for example. FLEX-ES multiple instances do not provide this capping function, and there is nothing to limit the processor resource consumed by an instance.
• Lack of relative weighting: LPARs within a processor complex can be assigned different relative weights, effectively providing a form of assigning priorities. With multiple FLEX-ES instances, all instances run with the same priority.
• No SIE Assist: In a system with PR/SM, VM/ESA can utilize the PR/SM support to provide some enhancements, such as SIE I/O assist and running multiple V=F guests. Since FLEX-ES systems do not have PR/SM, VM/ESA cannot exploit these capabilities.
• Each logical partition can be assigned a combination of dedicated and/or shared processors, and this can be varied dynamically. With FLEX-ES, all processors for all instances should be shared, or all should be dedicated.
• LPAR environments equipped with ESCON channels and EMIF capabilities can dynamically move many devices between LPARs. At the time of writing, FLEX-ES does not support ESCON channels, and devices attached to its parallel channel adapter cannot be moved between instances.

While the FLEX-ES multiple instance capability offers some very useful and powerful functions, users should understand its limitations. Customers who require some of these capabilities, but want to use a NUMA-Q EFS, should consider using VM/ESA. It provides many of the partitioning and resource controls that are available in LPAR environments, along with additional capabilities.

4.6 Virtual CTC connections
FLEX-ES can provide what appears to be a channel-to-channel connection between two S/390 images running on a single NUMA-Q EFS. This CTCA connection is very similar to the virtual channel-to-channel connection facility provided by VM/ESA, but it is not identical. The FLEX-ES CTCA connection emulates a standard channel-to-channel adapter, not a 3088, which is what the VM/ESA virtual CTC emulates. This section will describe how to define these emulated CTCA connections and how they might be used by the S/390 systems.

For the purposes of this redbook, emulated CTCAs between two S/390 instances on different nodes via a LAN connection will be referred to as emulated CTCAs. Emulated CTCAs between two S/390 instances on the same node, with no physical connection between them, will be termed a virtual CTCA.

4.6.1 Defining virtual CTC connections
A FLEX-ES virtual CTCA is defined by a combination of resource definitions and system configuration definitions. A resource is defined with a control unit type of CTC and two local interfaces. Then a device is added to each system configuration referencing that same CTC control unit. The relevant parts of a sample resource configuration file would look like this:

  resources node1:
    ctc1: cu ctc
      interface local(2)
      device(00) ctc
    end ctc1
  end node1

4.6.2 Associating virtual CTC with an S/390 instance
A possible set of corresponding system definition statements is shown below. These statements would result in system1 and system2 both having a CTCA at address 100 which is logically connected to the other system. Note that the addresses need not be the same; one system could see the CTCA at 200 and the other system at 330.

The relevant parts of two system configuration definitions might look like this:

  system system1:
    channel(0) local
    cu devad(0x100,1) path(0) resource(ctc1)
  end system1

  system system2:
    channel(0) local
    cu devad(0x100,1) path(0) resource(ctc1)
  end system2

4.6.3 Using a virtual CTCA
The most common way to use a virtual CTCA would be for VTAM or TCP/IP connectivity between two systems. One system could be a network “front-end” and would route communications to the other system via the virtual CTCA.

VM/ESA systems will detect the virtual CTCA automatically at IPL and no additional definition is necessary. For VSE/ESA systems, the appropriate ADD statement(s) must be added to the IPL procedure; for example:

  ADD 400,CTCA,EML

OS/390 systems will need to define the CTCA in the IODF.

Once these definitions have been done and the systems restarted, the CTCAs should be available for use by TCP/IP or VTAM.

For a complete example of implementing an SNA connection via a virtual CTCA between a VSE/ESA and a VM/ESA system, see “Connectivity” on page 57.

4.7 Memory use
There are numerous ways that memory within a NUMA-Q EFS system can be used. This section will explain these different uses and when and how they can be exploited.

Within the configuration files (system and resource) there are a number of parameters for allocating memory for different purposes. The possible uses and corresponding parameters are:
• S/390 expanded memory: specified in the system essize parameter.
• Processor cache memory: specified in the system cachesize parameter.
• Disk cache memory: specified via the resource trackcachesize parameter.
• S/390 central storage/memory: specified in the system memsize parameter, but may also require a matching resource memory statement.

Memory for these purposes can come from two possible sources:
• General server memory: memory within the NUMA-Q EFS and controlled by DYNIX/ptx that can be used for whatever task or process requires memory. It may be paged out to swap areas on disk if required. We will refer to this as server memory.
• Contiguous memory: memory within a block of storage which was reserved at DYNIX/ptx boot time by the FLEX-ES Memory Manager. This memory is contiguous, dedicated to FLEX-ES use, and non-pageable. This contiguous memory is required for instances which include use of a PCA.

4.7.1 S/390 expanded memory
Expanded memory for an S/390 instance is specified in the essize statement within the system configuration definition for a system. The expanded storage size is specified in megabytes and is allocated out of server memory. The statement essize(512) would allocate 512 megabytes of server memory to the instance in which the parameter is defined. The maximum expanded memory size is typically in the range of 2 to 2.5 GB.

4.7.2 Processor Cache
FLEX-ES utilizes a number of advanced techniques to achieve high performance of S/390 instruction emulation. One of these techniques is to do a form of pseudo-compilation of S/390 instructions into an intermediate format and then store (cache) this intermediate result for later reuse. The size of this cache can be specified via the cachesize parameter in the system configuration definition. The amount specified is per processor, in units of K (kilobytes). A normal allocation would be in the range of 1024 to 4096, and this memory is allocated from server memory. In a system defined with three S/390 processors, the statement cachesize(2048) would result in a total of six megabytes of server memory being allocated for processor cache.
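Because cachesize is specified per processor, the total server memory consumed scales with the number of defined S/390 processors. A one-line check of the example above (our own arithmetic sketch):

```python
def total_processor_cache_mb(cachesize_kb, n_processors):
    """cachesize is given in KB per emulated S/390 processor."""
    return cachesize_kb * n_processors / 1024

print(total_processor_cache_mb(2048, 3))  # 6.0
```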

4.7.3 Disk Data Cache
It is well recognized that a disk cache can provide dramatic performance improvements. FLEX-ES does some disk caching by default, providing one cylinder of cache per device at the device level. Without specifying any additional cache, there will be one cylinder's worth of cache per disk device. Additional cache can be specified at the device or control unit level. The trackcachesize parameter can be specified on either the resource device statement or the resource control unit statement. The cache size is specified in terms of tracks. A 3390 cylinder has 15 tracks, so the default (minimum) cache size is 15 tracks (one cylinder in size, but specified in terms of tracks).

Consider the following resource definition:

  vmdasd: cu 3990
    interface local(1)
    options 'trackcachesize=1500'   # floating disk cache
    # Override default trackcachesize (1 cyl) on 240RES, 240W01 and 240W02
    device(00) 3390-3 /dev/vx/rdsk/s390dg/0133903s1 devopt 'trackcachesize=150'
    device(01) 3390-3 /dev/vx/rdsk/s390dg/0233903s1 devopt 'trackcachesize=300'
    device(02) 3390-3 /dev/vx/rdsk/s390dg/0333903s1 devopt 'trackcachesize=150'
    device(03) 3390-3 /dev/vx/rdsk/s390dg/0433903s1
  end vmdasd

This would allocate 150 tracks (10 cylinders) for device(00), 300 tracks for device(01), 150 tracks for device(02), and the default 15 tracks for device(03). The trackcachesize=1500 at the control unit level exceeds the sum of the device cache sizes by 885 (1500 - (150+300+150+15)), so an amount of memory equal to 885 tracks is allocated to a floating cache. This floating cache is available for any device on that control unit.
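The floating-cache arithmetic for the vmdasd example can be verified directly. This is our own sketch of the calculation described above; 15 tracks (one 3390 cylinder) is the per-device default:

```python
DEFAULT_TRACKS = 15  # one 3390 cylinder

def floating_cache_tracks(cu_trackcachesize, device_overrides, n_devices):
    """Tracks left at the control unit level after each device takes
    its trackcachesize override (or the 15-track default)."""
    per_device = sum(device_overrides) \
        + DEFAULT_TRACKS * (n_devices - len(device_overrides))
    return cu_trackcachesize - per_device

# vmdasd: overrides of 150, 300, 150; one device on the default
print(floating_cache_tracks(1500, [150, 300, 150], 4))  # 885
```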

4.7.4 S/390 central storage
The specification of S/390 central storage (or central memory) is the most complex, because of the possible use of a Parallel Channel Adapter and the corresponding required use of contiguous memory. The simplest case is the definition of a single system with no PCA. The most complex case is the definition of two or more systems (instances), one or more with a PCA and one or more without a PCA. When a PCA is used by an S/390 instance, the resource configuration must include a memory resource statement, the name of which must match the name of the system configuration.

We will consider several examples with differing memory configurations:

Single system, no PCA
This system (Figure 7 on page 51) has a single S/390 instance and no Parallel Channel Adapter. It does include some expanded storage for the S/390 system.

Figure 7. Single S/390 Instance, no PCA

Single system using a PCA
This system (Figure 8) uses a Parallel Channel Adapter, so a preallocated contiguous memory area must be used and the S/390 storage is allocated from this contiguous memory. Memory for the expanded storage (ES), processor cache, and disk cache can still be allocated from normal server memory.

Figure 8. Single System with PCA

Multiple instances with PCA
The next example (Figure 9 on page 52) shows a configuration with a PCA and multiple S/390 instances. In the example shown, there are two active instances (VM1 and OS/390) which are using PCA adapters and one other instance (VM2) which is not using a PCA. The two instances using PCAs have their central memory allocated out of the contiguous memory, and the other instance has its memory allocated out of normal server memory. The VM1 and OS/390 instances have S/390 expanded storage allocated out of server memory; VM2 does not have any S/390 expanded storage defined.

Figure 9. Multiple Instances with PCA

4.7.4.1 Considerations with a PCA
When there is no Parallel Channel Adapter in a configuration, memory allocation and usage are relatively simple: all memory for all uses is allocated from the normal server memory. The primary concern in this case is that the server memory not be over-allocated, causing DYNIX to swap memory to and from its disk swap areas and slowing the S/390 system(s).

When a PCA is to be used, additional requirements are introduced because all I/O data transfers must be done to and from a block of contiguous memory. This block of memory is allocated at DYNIX boot time by the FLEX-ES memory manager. The normal FLEX-ES install script sets a maximum allocation of 512 MB contiguous memory. Note that the maximum contiguous memory may vary by system configuration.

For any system (instance) that uses a channel adapter, there must be a memory resource defined in the resource configuration file. This memory clause defines the amount of memory, within the contiguous memory area, that is to be allocated to the instance with the same name as the memory clause. When the system is started, the specified amount of memory is allocated out of the contiguous area for this instance. Multiple instances can share the contiguous memory area at one time, as long as the total of the memory resource statements of the active instances does not exceed approximately 97% of the size of the contiguous area. (The other 3% is required for FLEX-ES internal purposes.)

For example, here is a system with 512 MB in the reserved contiguous memory area and the following statements in the resource definition file:

vm1:    memory 128
end vm1
vm2:    memory 96
end vm2
vm3:    memory 256
end vm3
os390a: memory 256
end os390a
os390b: memory 240
end os390b

Instances vm1, vm2, and vm3 could all run at the same time, since the total memory resource (480 MB) is less than 97% of the contiguous area (0.97 x 512 = 496). On the other hand, instances vm3 and os390a could not run at the same time because they would exceed the “97% rule”. System os390b could run at the same time as either os390a or vm3, since either combination would be within the 496 MB limitation.
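The "97% rule" combinations above can be checked mechanically. A small shell sketch, using the instance sizes from the example:

```shell
# Active instances must fit within 97% of the reserved contiguous area;
# FLEX-ES keeps the remaining ~3% for internal purposes.
contig=512                        # MB reserved at DYNIX boot time
limit=$((contig * 97 / 100))      # 496 MB usable
vm1=128; vm2=96; vm3=256; os390a=256; os390b=240

check() {                         # check whether three instances fit
  total=$(($1 + $2 + $3))
  [ "$total" -le "$limit" ] && echo "$4: fits ($total <= $limit)" \
                            || echo "$4: too big ($total > $limit)"
}
check "$vm1" "$vm2" "$vm3"    "vm1+vm2+vm3"     # fits (480)
check "$vm3" "$os390a" 0      "vm3+os390a"      # too big (512)
check "$os390b" "$os390a" 0   "os390b+os390a"   # fits (496)
```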

4.7.4.2 Memory recommendations
Based on the above information and requirements and on our observations, we can make the following very general suggestions:
• Use expanded storage
  Most systems with a Parallel Channel Adapter will be limited as to the amount of S/390 Central Storage that can be allocated; this is normally about 496 MB. For a system with several enabled S/390 processors, this will probably result in a memory-constrained system. In configurations such as this, use expanded storage whenever possible. Both VM/ESA and OS/390 can make very effective use of expanded storage. (VSE/ESA does not utilize ES directly, but some vendor software products can do so.)
• Exploit disk cache
  Although NUMA-Q provides a very fast disk I/O subsystem, the old guideline of “the best I/O is no I/O” is still very true. Specifying a large disk cache at the control unit level is very simple and can offer significant performance improvements. The disk cache comes out of server memory, not the contiguous memory area, and in most NUMA-Q EFS systems this will not be a constrained resource.
• Only use contiguous memory when necessary
  If an S/390 instance does not use a Parallel Channel Adapter, do not define the corresponding memory statement in the resource definition file. Without the memory statement, the instance will have its central storage allocated from server memory, which is normally a relatively unconstrained resource.

There is, however, one apparent disadvantage to making extensive use of server memory. When we defined a VM system with 1 GB of central storage and 512 MB of expanded storage, all allocated out of server memory, there was a very significant delay when starting the instance. On our test system this took about two or three minutes, while starting an instance within the contiguous area took only a few seconds. FLEX-ES or DYNIX/ptx seems to require significant time to allocate and prepare the server memory for use by the S/390 system.9

9 This may have been due to a bug in our prerelease code. We suggest that this not be considered a long-term problem.

Chapter 5. VSE/ESA on NUMA-Q EFS

We expect that VSE/ESA customers may be especially interested in a NUMA-Q EFS system. It can provide a substantial performance boost for many VSE/ESA installations without encountering the hardware and software price levels of a Multiprise 3000 system.

This chapter describes our installation and use of a small VSE/ESA system on our NUMA-Q EFS machine.

5.1 Planning an installation
Before you install the VSE/ESA system on NUMA-Q EFS you should decide on a general process. You can:
• Migrate your current system, using tape
• Install a new system from tape
• Install a system from CD-ROM, if this option is available to you

These all require the same general steps:
• Create emulated DASD and disk space
• Prepare a resource configuration file
• Low-level disk format using CKDFMT
• Initialize DASD using ICKDSF
• Restore DASD volumes
• Load resources into the resource manager
• Create an IPL script
• IPL the system

DASD emulation performed under FLEX-ES provides the disk storage environment for the installation. This is described briefly in “Configuring emulated disk volumes” on page 33, and in more detail in Appendix B.4, “Allocating S/390 volumes” on page 133. The DYNIX/ptx Storage Volume Manager (SVM) should be used to create the disk space for the S/390 emulated DASD devices.

All NUMA-Q EFS S/390 volumes (system and user) must be initialized using ICKDSF before you can restore any data or system volumes. When all restores are complete, prepare a resource configuration file and compile it. This will create a system configuration file and a resource configuration file. The resource configuration file is used to activate resources in the Resource Manager. A shell script is then used to IPL the system.

5.2 Basic VSE/ESA migration or installation
You can migrate your existing VSE/ESA system to NUMA-Q EFS using backup tapes. You would simply restore the tapes on the NUMA-Q EFS system. (This assumes you have a compatible tape drive on your NUMA-Q EFS system. Tape drives are discussed in “Tape drives” on page 40.)

Before doing this, you need to create the emulated disk volumes on the NUMA-Q EFS system and initialize them with the FLEX-ES utilities, as described in the previous chapter. ICKDSF is then used to perform the initial stand-alone restore of your system volumes. Once that is complete, you can restore user volumes.

Normal operational procedures should be followed to restore the rest of the system.

Once your VSE/ESA system has been restored, you can upgrade to the latest VSE/ESA release. This can be a new installation or a Fast Service Upgrade (FSU).

5.3 Installing the AD CD-ROM
We elected to install a VSE/ESA system from CD-ROM. The steps we used are detailed in “Installing the VSE/ESA AD CD-ROM system - CKD format” on page 88 and will not be repeated here.

Briefly, our configuration included:
• Memory: 220 MB
• 2540 card reader at address 00C
• 2540 card punch at address 00D
• 1403 printer at address 00E
• 3390-3 DASD at addresses 120-124
• 3480 tape at address 181 (FLEX-ES FakeTape)
• CTCA at addresses 600 and 620
• 3172 pair at addresses 800-801 (Ethernet)

The FLEX-ES configuration definitions we used are listed in Appendix A, “Configuration file listings” on page 115.

5.3.1 Shell script
The detailed installation steps (in “Installing the VSE/ESA AD CD-ROM system - CKD format” on page 88) include the creation of a DYNIX/ptx shell script. A few comments on this shell script, which we named shvsec, may be helpful. We placed it in the /usr/flexes/rundir directory.

The script first modifies the DYNIX/ptx PATH environment to include /usr/flexes/bin, which is where most of the FLEX-ES executables are found. It then mounts two terminals, causing them to appear (by name) in the Terminal Solicitor listing. The read command causes the shell script to pause and wait for terminal input. The input is ignored; it is the pause that is useful. After the user enters something (such as an Enter key) at the terminal, an IPL command is generated by the shell script. The last line of the shell script starts the FLEX-ES CLI interface in interactive mode and leaves it running on the terminal.

The purpose of the pause (produced with the read command) is to allow the operator to access (activate) the 3270 terminal(s) that should be active before the IPL starts. This is done using the Terminal Solicitor.

It is not necessary to include the read and ipl commands in the shell script. The ipl command could be entered at the flexes> prompt that is presented by the flexescli command that is left running when the shell script ends.
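Pulled together, such a script might look like the following sketch. The mount and ipl operands, and the console names, are illustrative assumptions based on the description above, not the actual listing; see Appendix A for the real shvsec:

```sh
#!/bin/sh
# Sketch of a shvsec-style start-up script. Operands shown as "..."
# are assumptions; consult the FLEX-ES documentation for real syntax.
PATH=/usr/flexes/bin:$PATH     # pick up the FLEX-ES executables
export PATH

# Mount the 3270 terminals so they appear, by name, in the
# Terminal Solicitor listing (vseccons is the VSE master console).
flexescli mount ... vseccons
flexescli mount ... vseterm1

# Pause until the operator has connected the 3270 sessions.
echo "Connect consoles via the Terminal Solicitor, then press Enter"
read reply

flexescli ipl ...              # IPL the VSE/ESA system
flexescli                      # leave the CLI running interactively
```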

5.3.2 Terminal Solicitor
Before you can IPL the VSE/ESA system, you must connect the 3270 device that will be the VSE/ESA system console. This is done by establishing a 3270 connection with the FLEX-ES Terminal Solicitor. In our shell script, the master console (at address 01F) is mounted with the name vseccons. The operator should access the Terminal Solicitor (by opening a TN3270 session to port 24 on DYNIX/ptx) and then select the device named vseccons. The TN3270 session is then switched to the 3270 device at address 01F of the VSE/ESA system. If this step is not performed, the IPL will hang because there is no master console available.

5.4 Multi-system setup
There are several issues to keep in mind when setting up more than one VSE/ESA CPU complex:
• Resource configuration file parameters
  • System section:
    • The system name in the system section of the resource configuration file must be unique.
    • The memsize parameter must match the virtual machine size of the VSE/ESA system.
    • The number of CPUs assigned to this system, and whether any should be dedicated, should be reviewed.
    • Channels available to each system must be reviewed.
    • Control units defined to each system must be reviewed and have unique names.
  • Resources section:
    • The resource name in the resources section must match the unique name in the system section.
    • Memory size of the total system must not exceed the fixed memory size available if parallel channel adapters are used.
    • Devices defined to this system should match your naming conventions.
• Shell script parameters:
  • A different script and system name is used for each instance
  • New devices need unique names

For an example, see “Installing the VSE/ESA AD CD-ROM system - CKD format” on page 88.

5.5 Connectivity
There are many ways to connect a VSE/ESA system to a network. A few are discussed here.

5.5.1 TCP/IP for VSE/ESA
TCP/IP for VSE/ESA can use the LAN cards (token ring or Ethernet) as an emulated LAN device (3172). We used TN3270 sessions and FTP over an Ethernet LAN and they worked as expected.

Changes were made to the VSE IPINIT01.L as follows:

SET IPADDR = 9.12.17.193

This is the IP address used on the LAN adapter for our VSE/ESA system.1

SET MASK = 255.255.255.0
DEFINE LINK,ID=NUMA_3172,TYPE=3172,DEV=300,MTU=1500
DEFINE ADAPTER,LINKID=NUMA_3172,NUMBER=0,TYPE=ETHERNET
DEFINE ROUTE,ID=LOCAL_NET,LINKID=NUMA_3172,IPADDR=0.0.0.0

The following DEFINE statements are for IP addresses of other S/390 CPU complexes running on our NUMA-Q EFS system:

DEFINE NAME,NAME=VMESA,IPADDR=9.12.17.191
DEFINE NAME,NAME=DYNIX,IPADDR=9.12.17.190
DEFINE NAME,NAME=OS390,IPADDR=9.12.17.192

5.5.2 Integrated communications adapter
The FSI Integrated Communications Adapter (ICA) was used to connect one of our VSE/ESA systems to a remote 3174-61R using limited distance modems.

The FSI ICA definition can be found in “FSI Integrated Communications Adapter (ICA)” on page 25. The VSE/ESA definitions are included here:
1. An ADD statement in the IPL procedure for the ICA at address 038, defined as a 3705 device type model 10. The model 10 is required for ICAs. See VSE/ESA System Control Statements for details:

   01F,$$A$SUPX,VSIZE=250M,VIO=512K,VPOOL=64K,LOG
   ADD 00E,1403
   ADD 01F,3277
   ADD 038,3705,10
   ADD 050,2703,EML
   ADD 150:151,ECKD

2. The channel-attached VTAM definition for the ICA. The nonswitched SDLC line defines a physical unit (PU) with multiple logical units (LUs). The MAXDATA parameter of 265 is required for 3174s:

   VSEICA   VBUILD TYPE=CA
   ICAGRP   GROUP  LNCTL=SDLC,DIAL=NO
   *
   * ICA TO REMOTE 3174
   ICALINE  LINE   ADDRESS=038,ISTATUS=ACTIVE
   *
   ICAPU    PU     ADDR=C4,PUTYPE=2,MAXOUT=7,MAXDATA=265,ISTATUS=ACTIVE
   ICALU1   LU     LOCADDR=2,MODETAB=IESINCLM,USSTAB=VTMUSSTR,        *
                   DLOGMOD=SP32703S,ISTATUS=ACTIVE
   ICALU2   LU     LOCADDR=3,MODETAB=IESINCLM,USSTAB=VTMUSSTR,        *
                   DLOGMOD=SP32703S,ISTATUS=ACTIVE

5.5.3 Emulated channel-to-channel adapter
An emulated channel-to-channel adapter (CTCA) was used between a VSE/ESA system and a VM/ESA system. The VSE/ESA system ran natively on FLEX-ES, not as a VM/ESA guest. The FLEX-ES CTC definition can be found in “FLEX-ES configuration for VM/ESA” on page 118 and “FLEX-ES configuration for VSE/ESA - CKD format” on page 120.

The VSE/ESA definitions for the CTC at address 400 include:

1 We used a private Ethernet hub. None of the 9.12.17.x addresses we used are assigned to us, so do not try to connect to these addresses. We should probably have used 10.x.x.x addresses.

1. An ADD statement in the IPL procedure for the CTC at address 400, defined as a CTCA device type. The EML operand causes IPL to ignore device sensing and add the device as the type specified in the ADD command. See VSE/ESA System Control Statements for details:

   01F,$$A$SUPX,VSIZE=250M,VIO=512K,VPOOL=64K,LOG
   ADD 00E,1403
   ADD 01F,3277
   ADD 038:03D,3705,10
   ADD 050,2703,EML
   ADD 150:151,ECKD
   ADD 200:203,3277
   ADD 300:301,CTCA,EML
   ADD 400,CTCA,EML
   ADD 570:571,3490
   ADD 580,3422
   ADD 590,3490

2. The VTAM start options list (ATCSTRxx) contains parameters that should match ones in other VSE/ESA definitions and in VM/ESA. The SYS2CDRM SSCPNAME must match the VSE/ESA CDRM name. The USIBMPC NETID must match the netid in VM/ESA. The HOSTSA=2 subarea number must be defined in the VSE/ESA PATH table and in VM/ESA as the destination subarea:

   SSCPID=1,                            *
   SSCPNAME=SYS2CDRM,                   *
   NETID=USIBMPC,                       *
   HOSTSA=2,                            *
   HOSTPU=PU3006,                       *
   MAXSUBA=255,                         *
   CONFIG=00,                           *
   NOPROMPT,                            *
   IOINT=0,                             *
   SGALIMIT=0,                          *
   BSBUF=(28,,,,1),                     *
   CRPLBUF=(60,,,,1),                   *
   LFBUF=(70,,,,11),                    *
   IOBUF=(70,288,,,11),                 *
   LPBUF=(12,,,,6),                     *
   SFBUF=(20,,,,20),                    *
   SPBUF=(210,,,,32),                   *
   XDBUF=(6,,,,1)

3. The VTAM cross-domain resource manager (CDRM) definition should include the SSCPNAME for VSE/ESA, the SSCPNAME for VM/ESA, as well as their corresponding subarea numbers:

   CDRM3006 VBUILD TYPE=CDRM
   SYS2CDRM CDRM   SUBAREA=2,CDRDYN=,CDRSC=OPT
   SYS1CDRM CDRM   SUBAREA=1,CDRDYN=YES,CDRSC=OPT

4. The VTAM cross-domain resource (CDRSC) definition includes the resource available locally to VSE/ESA and to VM/ESA:

   CDRS3006 VBUILD TYPE=CDRSC
            NETWORK NETID=USIBMPC
   CICS3006 CDRSC

5. The VTAM PATH table defines two explicit routes to subarea one (VM/ESA) over transmission group four:

   * FROM SA 2 TO 1
   *
   TOVM24   PATH   DESTSA=1,            *
                   ER1=(1,4),ER2=(1,4),VR1=1,VR2=2

6. The VTAM CTCA definition to VM/ESA. PUTYPE=4 is required for CTCAs and transmission group four (TG4) is used for all communications:

   VMCTCA1  VBUILD TYPE=CA
   OVN400G  GROUP  LNCTL=CTCA,          *
                   DELAY=.000,          X
                   PUTYPE=4,            X
                   REPLYTO=3.0
   *
   * CTCA TO VM240
   OVN400L  LINE   ADDRESS=400,         *
                   ISTATUS=ACTIVE
   *
   OVN400PU PU     ISTATUS=ACTIVE,TGN=4

The VM/ESA definitions for the CTC at address 600 include:
1. The VTAM CTCA definition to VSE/ESA. PUTYPE=4 is required for CTCAs and transmission group four (TG4) is used for all communications:

   OVN600   VBUILD TYPE=CA
   *
   OVN600G  GROUP  LNCTL=CTCA,          *
                   DELAY=.000,
                   PUTYPE=4,            X
                   REPLYTO=3.0
   * CTCA TO VSETCP
   OVN600L  LINE   ADDRESS=600,         *
                   ISTATUS=ACTIVE
   *
   OVN600PU PU     ISTATUS=ACTIVE,TGN=4

2. The VTAM cross domain resource (CDRSC) definition includes the resource available to VM/ESA on the VSE/ESA system:

            VBUILD TYPE=CDRSC
   *
            NETWORK NETID=USIBMPC
   *
   CICS3006 CDRSC  CDRM=SYS2CDRM

3. The VTAM PATH table defines two explicit routes to subarea two (VSE/ESA) over transmission group four:

   * TGN4 = CTC TO VSETCP
   *
            PATH   DESTSA=02,           -
                   ER1=(02,4),VR1=1,    -
                   ER2=(02,4),VR2=2

Chapter 6. VM/ESA on NUMA-Q EFS

This chapter discusses how VM/ESA can be installed and used on a NUMA-Q EFS system. It also illustrates the advantages of adding VM/ESA in this environment, to provide greater flexibility in running other operating systems directly under VM/ESA’s control, rather than under separate instances of FLEX-ES.

6.1 General
Because of VM/ESA’s architecture, there is a special consideration when running under FLEX-ES. When VM/ESA is running in basic mode and has no work to dispatch on a processor, it performs a loop (running in storage key 3) looking for work in the other processors’ work queues. This is known as an active wait and is used even in uniprocessor configurations. The effect of an active wait is that the S/390 processors that VM/ESA manages always run 100% busy. On NUMA-Q EFS, this means that S/390 processors (and thus Pentium processors) assigned to VM/ESA will be 100% busy. This is a potentially undesirable effect, especially when sharing Pentium processors between multiple FLEX-ES instances. Additionally, the associated processor activity lights on the NUMA-Q cabinet’s face will not accurately reflect true system activity.

VM/ESA will not use active wait if it detects that it is running in an LPAR or in a virtual machine as a guest of a higher level VM/ESA. To prevent VM/ESA from using active wait on NUMA-Q EFS, FLEX-ES can be told to turn on a bit in the system SCPINFO indicating to VM/ESA that it is running in an LPAR. The parameter feature lpar can be placed in the FLEX-ES configuration file in the system definition for the VM/ESA system to accomplish this. Setting this SCPINFO bit is the only effect of the feature lpar parameter. Aside from eliminating the active wait, this means that guest operating systems running under VM/ESA (when it believes it is running in an LPAR) inherit the same restrictions as if running under VM/ESA in a real LPAR. For further details on the types of restrictions, consult ‘VM/ESA: Running Guest Operating Systems’, SC24-5755.
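As an illustration, the parameter simply appears in the FLEX-ES system definition for the VM/ESA instance. This is only a sketch: the instance name and the surrounding stanza syntax are assumptions, not taken from our configuration listings (see Appendix A for those):

```
vm1:  system
      ...                  # CPU, storage, and channel definitions as usual
      feature lpar         # sets the SCPINFO bit so that VM/ESA believes
                           # it is in an LPAR and avoids the active wait
end vm1
```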

If you are not already running VM/ESA, then you might consider it for controlling your other S/390 operating systems, since VM/ESA has:
• Facilities for setting usage parameters for each guest, like giving a production guest more CPU than test/development guests.
• The capability of sharing devices, real or emulated, among all its guests. This allows you to easily give access to tape drives to any one guest, without system interruptions.
• The flexibility of making dynamic changes to any guest, without having to modify the FLEX-ES configuration or affecting any of the other guests.

6.2 Configuration
On our NUMA-Q system, we used the following configuration for VM/ESA:
• Memory: 128 MB
• Expanded storage: 2 GB
• 2540 card reader at address 00C
• 2540 card punch at address 00D
• 1403 printer at address 00E
• 3270 terminals at addresses 020-02F
• 3390-3 DASD at addresses 120-124
• 3480 tape at address 181 (FLEX-ES FakeTape)
• CTCA at addresses 600 and 620
• 3172 pair at addresses 800-801 (Ethernet)

The FLEX-ES configuration necessary to satisfy this is shown in “FLEX-ES configuration for VM/ESA” on page 118.

6.3 Installation
In our case, after the five 3390-3 DASD had been defined through the DYNIX/ptx volume manager (SVM), we unzipped the DASD images from the VM/ESA preconfigured CD-ROM and fed the output to FLEX-ES’ conversion program (ckdconvaws), which restored the DASD to our emulated 3390-3 volumes. All the steps necessary to get to this point (defining the VM/ESA system and resources, up to the IPL point) are covered in “Installing the VM/ESA AD CD-ROM system” on page 83.
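The unzip-and-convert step might look like the following sketch; the archive and image names and the ckdconvaws operands are illustrative assumptions, so consult the FLEX-ES documentation for the real invocation:

```sh
# For each DASD image on the CD-ROM: decompress the zipped image and
# feed it to ckdconvaws, which writes the emulated 3390-3 volume.
# (File names and operands here are illustrative assumptions.)
unzip -p /cdrom/vmesa240.zip 240RES | \
    ckdconvaws ... /dev/vx/rdsk/s390dg/0133903s1
```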

6.4 VM/ESA IPL
Running the shell script from “Installing the OS/390 AD CD-ROM system” on page 79 will lead you to the normal stand-alone program loader (SAPL) panel. From there you can perform the standard VM/ESA IPL, as you would on any VM/ESA-supported processor.

6.4.1 Recustomization for NUMA-Q
Since the preconfigured CD-ROM is targeted for a P/390, the VM/ESA system has been specifically customized for that environment. For example, the first thing that happens when userid OPERATOR is autologged and IPLed is that it tries to ”talk” to OS2, to get the directory source (USER.DIR). This, of course, is no longer applicable. Here are some of the changes we made to make the P/390-preconfigured VM/ESA more “NUMA-Q-friendly”:
• Copy OPERATOR’s PROFILE EXEC to PROFILE P390, and remove the P/390-only sections.
• Repeat this step for userid MAINT.
• Rename USER DIRECT on MAINT’s 2C2 minidisk, and replace it with the current source directory, DIRECT P390, from OPERATOR’s 191 minidisk.
• Comment out the following statements for MAINT’s entry in the USER DIRECT copied above, as they only pertain to P/390 emulated devices:

  MDISK 19F FB-512 0 END YDISK MR ALL
  MDISK 292 FB-512 16 2048 FLOPPY MR ALL

• Change the System_Identifier_Default in the SYSTEM CONFIG, from P/390 to VMNUMA-Q, on MAINT’s CF1 disk (you need write access first).
• Change the LOCAL LOGO to replace the P390 block letters.

• Change the SYSTEM NETID file on MAINT’s 490 and 190 to specify VMNUMA-Q and the actual CPU ID, instead of P/390. Resave CMS, which is required whenever changes are made to 190 or 19E:

  access 193 c
  sampnss cms
  ipl 190 clear parm savesys cms

6.5 Differences when running VM/ESA on the NUMA-Q
Operating VM/ESA on the NUMA-Q under FLEX-ES is quite transparent. In this section, we point out the use of expanded storage (XSTORE), and how the FLEX-ES emulation affects I/O measurements.

6.5.1 Use of expanded storage
If using the Parallel Channel Adapter (PCA), the current FLEX-ES implementation is restricted to a maximum of about 512 MB of S/390 central storage at resource definition time. Since we had 8 GB of storage on our NUMA-Q machine, this was the time to take advantage of the available storage and define it as expanded storage. In our configuration, we used the essize(2048) parameter to get 2 GB of expanded storage addressable by VM/ESA. This showed up as follows on our system:

q xstore
XSTORE= 2048M online= 2048M
XSTORE= 2048M userid= SYSTEM usage= 0% retained= 0M pending= 0M
XSTORE MDC min=0M, max=2048M, usage=0%
XSTORE= 2048M userid= (none) max. attach= 2048M
Ready; T=0.01/0.01 14:56:00

There may be limited benefit in allowing minidisk caching, since the underlying I/O is done on the NUMA-Q and is already being cached there. If you wish to turn it off, use the CP RETAIN and SET MDC SYSTEM OFF commands to do so.

6.5.2 I/O measurements
Because the emulated I/O is not seen by VM/ESA, there are no counters to query: the sub-channel measurement blocks are not filled in by the FLEX-ES emulation. For instance, if you look at the VM/ESA Real Time Monitor (RTMESA) main output display, you might see something like this:

|VM/ESA   CPU nnnn  SERIAL 0D70D9  120M  DATE 11/08/00 START 14:42:18 END 14:42:48|
|*         %CPU %CP %EM ISEC PAG WSS  RES  UR PGES SHARE VMSIZE TYP,CHR,STAT      |
| RTMESA    .25 .17 .07  .53 .00 479   500 .5     0    3%A    6M VUS,QDS,SIMW     |
| MAINT     .25 .11 .13  .50 .00 246   246 .0     0    100   64M VUS,IAB,IDLE     |
| SYSTEM    .06 .06 .00  .00 .00   0  1476 .0   521  .....    2G SYS,             |
|                                                                                 |
|<--- DEVICE --->  <----- DEVICE RDEV DATA ------>  <-- MEASUREMENT FACILITY ->   |
|*  DEV TYPE VOLSER IOREQST SEC %Q %ER R %LK LNK PA %UT ACC FPT DCT CN %CN        |
|                                                                                 |
|<------ CPU STATISTICS ------>  <-- VECTOR --->                                  |
| NC %CPU %US %EM %WT %SY %SP XSI %SC NV %VT %OT RSTR %ST PSEC %XS XSEC  TTM      |
|->  2 2.1 1.0 .82  198 .23 .00  42  87  0 .00 .00    0   3    0   0  0 0.051     |
|<-.. 1.7  .89 .56  198 .20 .00  34  90 .. .00 .00    0   7    0   0  0 0.080     |
The middle section, which usually reports I/Os per device, is empty because there is nothing for RTMESA to report. However, the CP MONITOR can compute I/O times more accurately; this data can be extracted using the IBM Performance Reporting Facility (VMPRF) or other vendor products that work with monitor data.

6.6 Running guest operating systems
There are no technical changes when running guests under VM/ESA on the NUMA-Q machine, as opposed to the P/390, R/390, or other VM/ESA-supported hardware. The only operational difference is that if you use the feature lpar statement in the FLEX-ES configuration to prevent VM/ESA from doing active waits, then VM/ESA will inherit all the characteristics of running in a real LPAR: support for only one V=R guest, and no guests able to use the fixed virtual (OPTION V=F) CP directory definition, just as you would expect on standard S/390 hardware.

6.7 Access to VM/ESA files from NUMA-Q
Using the network file system (VMNFS), which is a feature of the TCP/IP support supplied in the VM/ESA 240 base, allows DYNIX/ptx to access files on the VM/ESA host as if they were part of its own file system. However, the DYNIX/ptx mount command requires inclusion of the password for the target userid, and subsequent mount or df commands will display this password. This is known traditional UNIX behavior. For example, using MAINT 191 as a target and IP address 9.12.17.191 for the VM/ESA host, we do the mount, and then display what is mounted via the mount command again, without parameters:

# mount -F nfs -o soft,ro 9.12.17.191:maint.191,ro,lines=, \
  tran=yes,u=maint,p=maint /mnt
# mount
(... skip system mounts ...)
/mnt on 9.12.17.191:maint.191,ro,lines=nl,tran=yes,u=maint,p=maint
     read only on Thu Nov 9 14:56:20 2000 type nfs (soft/noquota)

Note that both the user and password are displayed. They are also present using the df command. To alleviate this, the TCP/IP component of VM/ESA supplies the source for a sample C program called mountpw. You need to copy this source program to DYNIX/ptx and run make against it. The source CMS file is available on TCPMAINT’s 592 as MOUNTPW C. It includes a long, detailed prolog that you should review. After compiling it, invoke it just before you execute the mount command:

# ./mountpw 9.12.17.191:maint.191,p=maint,u=maint

(Now execute the previous mount command *WITHOUT* the user and password:)

# mount -F nfs -o soft,ro 9.12.17.191:maint.191,ro,lines=nl,tran=yes

# mount
(... skip system mounts ...)
/mnt on 9.12.17.191:maint.191,ro,lines=nl,tran=yes
     read only on Thu Nov 9 15:32:11 2000 type nfs (soft/noquota)

Now neither the user nor the password is shown. Note that, as per the information in the prolog of mountpw, you have about 5 minutes to do the mount after mountpw has been executed. You now have read access to MAINT’s 191; by prefixing your commands with the mount point, for example ls -l /mnt/pr*, you can list all the files starting with pr in the form filename.filetype (lowercase).

Chapter 7. OS/390 on NUMA-Q EFS

This chapter describes our experiences running a conventional OS/390 system on a NUMA-Q EFS platform.

7.1 Planning and installation
The process of installing an OS/390 system on a NUMA-Q server running DYNIX/ptx and FLEX-ES emulation requires some planning. General steps include the following:
• Plan and define emulated DASD devices using DYNIX/ptx SVM.
• Install the OS/390 volumes. This might be from backup tapes or from a CD-ROM distribution. Backup tapes could be installed with stand-alone ICKDSF. CD-ROMs (assuming the P/390 versions) are installed using an unzip utility and the FLEX-ES conversion program ckdconvaws.
• Prepare a resource configuration file by defining real (if applicable) and emulated hardware devices, and a system definition file.
• Compile the system and resource definition files using the FLEX-ES resource compiler cfcomp.
• Activate the resource definition using the FLEX-ES resource manager resadm.
• Create an IPL script.

The specific commands and files we used for these steps are described in “Installing the OS/390 AD CD-ROM system” on page 79.

7.1.1 Package selection
We chose an OS/390 AD CD-ROM system (OS/390 release 2.9) for the ITSO projects used to produce this redbook. Not all readers are familiar with these “AD” systems; we briefly explain them here.

An AD system (a shortened form of Application Development system) is a prepackaged OS/390, with a number of priced features and additional program products included.1 Considerable customization has already been done, making the system immediately usable for many functions. The AD systems are available only to members of IBM’s PartnerWorld for Developers (formerly known as Partners in Development, or PID) who obtain systems through the PID program. They are not available to general IBM customers.

Why did we use it for our NUMA-Q EFS projects? We used it primarily because it provides a very easy way to install a useful OS/390 system. We could have built an OS/390 system starting with a ServerPac, in the same way most OS/390 customers build their systems. However, this requires considerably more time and effort and would have detracted from the time spent working with NUMA-Q EFS elements.

In general, an OS/390 AD system is a rather straightforward implementation of OS/390 and contains no magic components or “clever” setups. The experience and results of using it on NUMA-Q EFS should be about the same as using any other straightforward OS/390 implementation.

1 There are AD systems available for VM/ESA and VSE/ESA also. The discussion in this chapter is about OS/390, so we limit this discussion to the OS/390 AD systems.

The AD CD-ROM systems, as the name implies, are distributed on CD-ROMs. This aspect is not common to other OS/390 packaging, but does not affect the characteristics of the system once it is installed. The CD-ROMs are not seen by the S/390; they are processed2 by a server (usually with OS/2) that routinely handles CD-ROM drives.

7.1.2 OS/390 device configuration

The OS/390 AD CD-ROM release 2.9 can provide basic, useful operation using the following devices:

Address  Device   VOLSER  DYNIX or FLEX-ES    Comments
00C      2540R                                Emulated card reader
00D      2540P                                Emulated card punch
00E      1403-N1                              Emulated printer
A80      3390-3   OS39R9  2133903s1           IPL volume; OS/390 libraries
A81      3390-3   OS3R9A  2233903s1           More OS/390 basic libraries
A82      3390-3   OS39M1  2333903s1           Paging, parmlib, catalogs, etc.
A87      3390-2   OS39H9  2433903s1           System HFS data sets
A8A      3390-1   WORK01  2633901s1           (Not part of AD) STORAGE vol
A8B      3390-1   WORK02  2733901s1           (Not part of AD) STORAGE vol
A8C      3390-1   WORK03  2833901s1           (Not part of AD) STORAGE vol
560      3480             SCSI drive          Emulating 3490
700-701  3270             osmstcon, osaltcon  NIP & OS/390 master console
702-71F  3270             osterm1-30          Local non-SNA VTAM terminals
900-90F  3270                                 Local non-SNA VTAM terminals
E20,E21  CTC              /dev/net/pe2        Used for LAN TCP/IP interface
E22      CTC                                  General CTC definition

The AD system can use a much larger set of addresses and devices than shown here.3 These represent a basic useful system. Some of the addresses are arbitrarily chosen from the larger set provided with this AD system. The three WORKxx volumes shown in the table are not distributed with the AD system. We added these later, to provide scratch volumes and local storage space. The AD volumes containing the DLIBs, DB2, CICS, and IMS are not included in this list; they are not necessary for basic operation and we decided to not install them.

We defined an emulated card reader, punch, and printer although we did not use them during our projects. They are shown in the table above, but could be omitted.

2 This is true when installing on a P/390, Integrated Server, or Multiprise 3000. The processing consists of UNZIPing PC files. Each file contains an emulated 3390 disk volume. 3 This means that the IODF distributed with the AD system contains a large number of defined devices and addresses. We selected a subset of these already-defined devices and addresses.

68 NUMA-Q and S/390 Emulation At the system level, we defined a S/390 system with 492 MB central storage and 1 GB expanded storage. The 1 GB expanded storage was somewhat arbitrary. We had ample NUMA-Q server storage (8 GB). Our early NUMA-Q EFS code had a problem with expanded storage definitions larger than about 2 GB, so we arbitrarily selected 1 GB expanded storage for OS/390.

We initially did not use a Parallel Channel Adapter with OS/390. Nevertheless, we observed the storage limitation that applies when parallel channel adapters are used.4 The limitation is 512 MB defined S/390 central storage, minus a small amount of overhead storage for FLEX-ES. This overhead amounted to 20 MB, leaving 492 MB as the largest central storage that could be defined when parallel channel adapters are used.

Initially, we installed our OS/390 AD 2.9 system as a single FLEX-ES instance; our OS/390 was the only defined and active system. Once that testing was completed, we proceeded to test a multiple FLEX-ES instances setup, with one or more OS/390 systems active concurrently with VM/ESA and VSE/ESA systems.

The FLEX-ES resource configuration file consists of two sections:
1. The system section
2. The resources section

These can be defined in two files or in separate sections of a single file. For working with a single S/390 instance, we thought that using a single file was easier. (Later, when running multiple S/390 instances, separate files were easier to manage.)

The system definition section defines the system name and other system resources such as:
• Central memory size available to this system
• Extended memory size
• Number and type of CPUs
• Number and usage of channels
• Control units for all system devices

The resources section defines a set of resources for a single system or multiple systems such as:
• Central memory size available to the whole complex
• Interfaces for all the control units defined in the system section
• Devices for all the control units defined in the system section
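As an illustration, the two sections can be sketched as follows. This skeleton is pieced together from the configuration fragments shown later in this chapter; it is not a complete, compilable file, and the ellipses stand for statements omitted here (see Appendix A, “Configuration file listings” on page 115, for full working listings):

```
system os39029:                 # system definition section
   ...                          # memory, CPU, and channel statements
   cu devad(0x900,16) path(2) unitadd(0x00) interlocked
   ...
end os39029

resources combined:             # resources section
   os39029: memory 512          # label matches the system name above
   end os39029
   ...
end combined
```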

The FLEX-ES configuration files (resources and systems) must be compiled, using the command cfcomp, as shown here:

$ cfcomp sysconf.s390
Start FLEX-ES Configuration Utility
Configuration processing *SUCCEEDED* with no errors
Data Space Manager Terminated

Compiling the file sysconf.s390 (see “FLEX-ES configuration for OS/390 - single instance” on page 115 for a full listing) produces two resource configuration files

4 We later intended to run OS/390 concurrently with VM and VSE, and VSE used a Parallel Channel Adapter.

named os39029.syscf and s390.rescf. This corresponds to a standard FLEX-ES naming convention whereby compiled (ready to use) system definitions have the file name suffix syscf and resource definitions have the file name suffix rescf. These two files are produced whether the source definitions are in one file or two files.

Once compiled, the resources are activated with the resadm command:

# cd /usr/flexes/bin
# ./resadm -x /usr/flexes/rundir/s390.rescf

You can list all active resources with the same command:

/usr/flexes/rundir $ resadm -r
Resource: CPU      Flags: READY  Type: CPU     Port: 9365  Pid: 17483
Resource: MEMORY   Flags: READY  Type: MEM     Port: 9366  Pid: 17484
Resource: CHANNEL  Flags: READY  Type: CHAN    Port: 9369  Pid: 17485
Resource: os2821   Flags: READY  Type: CU      Port: 9368  Pid: 17486
Resource: os3480   Flags: READY  Type: CU      Port: 9370  Pid: 17487
Resource: os3274   Flags: READY  Type: CU      Port: 9374  Pid: 17488
Resource: osdasd   Flags: READY  Type: CU      Port: 9375  Pid: 17489
Resource: os3172   Flags: READY  Type: CU      Port: 9377  Pid: 17490
Resource: osctc    Flags: READY  Type: CU      Port: 9376  Pid: 17491
Resource: NETCU    Flags: READY  Type: NETCU   Port: 9384  Pid: 17492
Resource: TS3270   Flags: READY  Type: TS3270  Port: 9381  Pid: 17493
/usr/flexes/rundir $

Once the resources are active, the system is ready to be IPLed.

Appendix A, “Configuration file listings” on page 115, contains more examples of configuration files we used to define our systems in both single and multiple instance scenarios.

7.2 Operation and use

The OS/390 IPL starts when you execute an ipl command. The ipl command can be entered from the flexes console or in a startup script. See 8.5, “Installing the OS/390 AD CD-ROM system” on page 79, for details of the script and IPL steps we used.

Once our OS/390 was operational, we tried a few typical OS/390 commands, as shown below (the operator commands are shown in bold type):

- 11.06.30 d m=stor
  11.06.32 IEE174I 11.06.30 DISPLAY M 454
  REAL STORAGE STATUS
  ONLINE-NOT RECONFIGURABLE
      0M-492M
  ONLINE-RECONFIGURABLE
      NONE
  PENDING OFFLINE
      NONE

- 11.35.17 d m=estor
  11.35.17 IEE174I 11.35.17 DISPLAY M 653
  EXTENDED STORAGE STATUS
  ONLINE-NOT RECONFIGURABLE
      0M-1024M

- 11.42.47 d m=cpu
  11.42.47 IEE174I 11.42.47 DISPLAY M 662
  PROCESSOR STATUS
  ID  CPU  SERIAL
  0   +    0D70D91245
  1   +    1D70D91245
  3   +    3D70D91245

  + ONLINE  - OFFLINE  . DOES NOT EXIST

- 11.43.44 d a,l
  11.43.44 IEE114I 11.43.44 2000.314 ACTIVITY 665
  JOBS     M/S      TS USERS  SYSAS  INITS  ACTIVE/MAX VTAM  OAS
  00002    00011    00001     00027  00012  00001/00010      00007
  JES2     JES2     IEFPROC   NSW  S
  VTAM     VTAM     VTAM      NSW  S
  DLF      DLF      DLF       NSW  S
  RACF     RACF     RACF      NSW  S
  VLF      VLF      VLF       NSW  S
  INETD4   STEP1    BPXOINIT  OWT  AO
  LLA      LLA      LLA       NSW  S
  TSO      TSO      STEP1     OWT  S
  SDSF     SDSF     SDSF      NSW  S
  TCPIP    TCPIP    TCPIP     NSW  SO
  PORTMAP  PORTMAP  PMAP      OWT  SO
  FTPD1    STEP1    FTPD      OWT  AO
  NFSS     NFSS     GFSAMAIN  NSW  SO
  P390                        OWT

- 11.41.55 d u,,,a80,16
  11.41.55 IEE457I 11.41.55 UNIT STATUS 659
  UNIT  TYPE  STATUS  VOLSER  VOLSTATE
  0A80  3390  S       OS39R9  PRIV/RSDNT
  0A81  3390  A       OS3R9A  PRIV/RSDNT
  0A82  3390  A       OS39M1  PRIV/RSDNT
  0A83  3390  F-NRD           /RSDNT
  0A84  3390  F-NRD           /RSDNT
  0A85  3390  F-NRD           /RSDNT
  0A86  3390  F-NRD           /RSDNT
  0A87  3390  A       OS39H9  PRIV/RSDNT
  0A88  3390  F-NRD           /RSDNT
  0A89  3390  F-NRD           /RSDNT
  0A8A  3390  A       WORK01  STRG/RSDNT
  0A8B  3390  A       WORK02  STRG/RSDNT
  0A8C  3390  A       WORK03  STRG/RSDNT
  0A8D  3390  F-NRD           /RSDNT
  0A8E  3390  F-NRD           /RSDNT
  0A8F  3390  F-NRD           /RSDNT

The processor type (in the d m=cpu response) is 1245. This is the processor type for a NUMA-Q EFS system. Otherwise, the displayed responses are exactly the same (except, perhaps, for storage sizes) as would be found when running on any S/390 hardware.

7.2.1 IODF requirements

OS/390 requires an IODF data set that defines the I/O configuration seen by the software. This normally matches the IOCDS defined for the S/390 I/O hardware configuration. The NUMA-Q EFS platform does not have an IOCDS. All resources are defined in FLEX-ES system and resource files, compiled with the

FLEX-ES resource compiler cfcomp, and then activated by the FLEX-ES resource manager resadm command.

An IODF is still required within an OS/390 system, but the HCD input to generate it does not need to define control unit details. That is, a simple device definition (device number, type, optional features) is all that is required. OS/390 dynamic I/O redefinition capability is not available.

In response to a d ios,config(all) command, we received the following:

- 11.38.06 d ios,config(all)
  11.38.06 IOS506I 11.38.06 I/O CONFIG DATA 656
  ACTIVE IODF DATA SET = SYS1.IODF01
  CONFIGURATION ID = CBIPO       EDT ID = 00
  HARDWARE SYSTEM AREA DATA COULD NOT BE OBTAINED
  ELIGIBLE DEVICE TABLE LATCH COUNTS
  0 OUTSTANDING BINDS ON PRIMARY EDT

You can, however, perform software dynamic configuration changes via OS/390 Hardware Configuration Definition (HCD) dialogs or the OS/390 activate command, provided the affected devices are included in the FLEX-ES definition files.

As we mentioned before, in addition to the OS/390 2.9 system installation, we also installed a very small OS/390 1.1 system. In order to exercise FLEX-ES shared DASD support, we needed to add more 3390 devices to our OS/390 1.1 configuration.5 We defined the required DASD devices in a new IODF, which we activated dynamically with the OS/390 activate command.

Before the configuration changes, the d ios,config(all) command on the OS/390 1.1 system produced:

- 12.49.33 d ios,config(all)
  12.49.33 IOS506I 12.49.33 I/O CONFIG DATA 782
  ACTIVE IODF DATA SET = SYS1.IODF00
  CONFIGURATION ID = CBIPO       EDT ID = 00
  HARDWARE SYSTEM AREA DATA COULD NOT BE OBTAINED

The activate command was (IODF01 is the new configuration):

- 12.53.24 activate iodf=01,soft
  12.53.36 IOS501I ACTIVATE CLEANUP COMPLETE
  12.53.36 IOS500I ACTIVATE RESULTS 801
  ACTIVATE COMPLETED SUCCESSFULLY
  NOTE = A843,NO HARDWARE CHANGES ALLOWED. HARDWARE DOES NOT
         SUPPORT THE DYNAMIC RECONFIGURATION CAPABILITY.
         COMPID=SC1XL
  NOTE = 0100,SOFTWARE-ONLY CHANGE
         COMPID=SC1C3

A d ios,config(all) command entered after the activate command produced:

- 12.58.06 d ios,config(all)
  12.58.06 IOS506I 12.58.06 I/O CONFIG DATA 806
  ACTIVE IODF DATA SET = SYS1.IODF01
  CONFIGURATION ID = CBIPO       EDT ID = 00

5 The older OS/390 1.1 IODF, as distributed on CD-ROM, did not have any 3390 addresses in common with the OS/390 2.9 system. We added a string of sixteen 3390s, starting at address (device number) A80.

  HARDWARE SYSTEM AREA DATA COULD NOT BE OBTAINED

We ran a number of jobs that used scratch disks heavily in shared DASD mode, without problems. We used the basic RESERVE/RELEASE controls built into OS/390, which depend on the emulated RESERVE and RELEASE functions. We did not set up or use GRS.

7.2.2 System performance monitors

Because I/O sub-channel blocks are not maintained by the FLEX-ES software emulation, the OS/390 Resource Measurement Facility (RMF) is not fully supported. You can run RMF, or other system performance monitors, but some of the reporting (especially I/O activity reporting) will not be complete.

When we started RMF, it reported the absence of an IOCDS in this environment and automatically terminated I/O queuing activity reporting:

- 11.22.34 s rmf
- 11.22.35 STC00439 $HASP373 RMF STARTED
  11.22.36 STC00439 ERB100I RMF: ACTIVE
  11.22.36 STC00439 ERB265I RMF: IOCDS INFORMATION UNAVAILABLE TO RMF.
                    RESPONSE CODE 01F0
  11.22.37 STC00439 ERB260I ZZ : I/O QUEUING ACTIVITY RMF REPORT TERMINATED
  11.22.38 STC00439 ERB100I ZZ : ACTIVE

SYS1.LOGREC may not contain hardware error information. Consequently, any report produced by the Environmental Record Editing and Printing program (EREP) will have limited value for hardware-detected errors. Due to time constraints, we did not investigate this area in more detail.

7.2.3 Security

As we explained in detail in “General concepts” on page 15, FLEX-ES is a layer of software that resides and operates between an OS/390 system and an underlying DYNIX/ptx system. All the security features and functions that come with an OS/390 system work as on any other S/390 platform. However, it is possible for a DYNIX/ptx user with sufficient privilege to gain access to the contents of an emulated DASD volume, the central storage associated with an emulated CPU, and so forth.

A NUMA-Q EFS owner must plan and manage traditional UNIX security functions for the underlying DYNIX/ptx system, as well as traditional S/390 security management. If the NUMA-Q EFS platform is used only for S/390 operation, this can be simple. If, as expected, the platform is used for other workloads in addition to S/390, this can become more complex. We are not aware of any surprises when managing DYNIX/ptx security, but it is an area that cannot be overlooked.

7.2.4 Parallel Channel Adapter (PCA)

We later added a PCA to our configuration and connected an IBM 3174 communication controller. We used this with the local 3270 terminals defined at addresses (device numbers) 900-90F. We changed our system definition as follows:

system os39029:
   ...

   channel(2) blockmux oschpbt0                          # pca card
   ...
   cu devad(0x900,16) path(2) unitadd(0x00) interlocked  # real terminals
   ...
end os39029

And we changed our resource definitions as follows:

resources combined:
   ...
   os39029: memory 512    # (VM=128M,MVS=256M,VSEC=64M,VSEF=64M)
   end os39029
   ...
   oschpbt0: blockmux /dev/chpbt/ch0
   end oschpbt0
   ...
end combined

Note that a label for the memory statement in the resource definition file must match a name on a system statement in the system definition file.

The 3270 devices at addresses 900-90F were already defined in the OS/390 IODF data set, and the appropriate VTAM definitions existed as well. We used one of these real terminals as a TSO screen and another as an OS/390 system console.

If it can be managed, we suggest using a display attached to a “real” 3174 control unit as the OS/390 master console. This avoids the need to work with the Terminal Solicitor (to obtain a TN3270 session with the 3270 device that will become the master console) before IPLing. This is not a requirement, but it is a convenience. It also closes a potential security exposure whereby a general user (via the Terminal Solicitor) might occupy the device session that would become the master console after IPL.

7.2.5 Multi-system setup

We first installed a single OS/390 2.9 system and defined all of the required resources to FLEX-ES. This configuration (as we explained earlier) is often referred to as a FLEX-ES single instance. Once this setup was tested, we installed OS/390 1.1 (a “one-pack” system) and ran it concurrently with OS/390 2.9, VM/ESA, and VSE systems. That is often called a FLEX-ES multiple instances setup. The system configuration files that we used are listed in Appendix A, “Configuration file listings” on page 115.

All defined and available 3380/3390 DASD devices were varied online to both OS/390 systems at the same time. We ran a few jobs utilizing the shared DASD devices and they all completed with no reported errors.

The multiple instances (multi-system) setup should not be considered fully equivalent to LPAR use. It offers an ability to run multiple systems concurrently, and all the systems can share some of the resources. However, it has neither the tuning nor dynamic reconfiguration capabilities of Processor Resource/Systems Manager (PR/SM) and no equivalent to EMIF is available.

7.2.6 TCP/IP for OS/390

OS/390 UNIX System Services (USS) and TCP/IP come preconfigured with the OS/390 AD CD-ROM system. We established TCP/IP connectivity after simple customization to set an IP address. We used TN3270 sessions and FTP over an Ethernet LAN, using an emulated 3172 at addresses E20-E21.

7.2.7 FLEX-ES FakeTape on OS/390

FakeTape is a trademark of Fundamental Software, Incorporated, of Fremont, CA. It emulates tape devices using DYNIX/ptx disk files instead of tape drives. Provided the appropriate tape devices are defined in the OS/390 IODF configuration data set, FakeTape will emulate any type of tape drive from 3420 to 3490-E. Because FakeTape always writes and reads the same format to/from the DYNIX/ptx disks, it operates at the same speed for all of the different emulated tape device types. We ran several tape jobs using IEBGENER, IEBCOPY, and DFDSS, and they all performed well.

This is one of the jobs we executed; it is a full backup of a 3390-3 device using DFDSS:

//DFDSS    JOB 1,PJC,MSGCLASS=X
//BACKUP   EXEC PGM=ADRDSSU,REGION=4096K
//SYSPRINT DD SYSOUT=*
//DISK1    DD DISP=SHR,UNIT=SYSDA,VOL=SER=OS39M1
//TAPE1    DD DSN=DFDSS.OS39M1.BKP.NOV1500,DISP=(NEW,CATLG,DELETE),
//            UNIT=3480,LABEL=(1,SL),VOL=SER=M1TAPE
//SYSIN    DD *
  DUMP FULL -
    INDDNAME(DISK1) -
    OUTDDNAME(TAPE1) -
    ALLDATA(*) -
    ALLEXCP -
    OPTIMIZE(4)
/*

We submitted the job and the IEF233A mount request was issued by OS/390. We then entered the FLEX-ES mount command from the flexes prompt, selecting an appropriate DYNIX/ptx directory and file name for the FakeTape file we were about to create:

flexes> mount 560 /usr/flexes/m1tape

We next needed to reply to an IEC704A message with a volser name. The “blank tape” that FakeTape mounted for us had no existing volser. This is the same action that would be taken if an operator mounted a blank tape on a “real” tape drive in the same situation.

JES2 JOB LOG -- SYSTEM SYS1 -- NODE N1

---- WEDNESDAY, 15 NOV 2000 ----
IRR010I  USERID P390 IS ASSIGNED TO THIS JOB.
ICH70001I P390 LAST ACCESS AT 10:49:55 ON WEDNESDAY, NOVEMBER 15, 2000
$HASP373 DFDSS STARTED - INIT 1 - CLASS A - SYS SYS1
IEF403I DFDSS - STARTED - TIME=10.56.58
*IEF233A M 0560,M1TAPE,,DFDSS,BACKUP,DFDSS.OS39M1.BKP
IEC512I LBL ERR 0560, ,NL,M1TAPE,SL,DFDSS,BACKUP,DFDSS.OS39M1.BKP
*IEC704A L 0560,M1TAPE,SL,NOCOMP,DFDSS,BACKUP,DFDSS.OS39M1.BKP
*07 IEC704A REPLY 'VOLSER,OWNER INFORMATION','M'OR'U'

R 07,M1TAPE
IEC705I TAPE ON 0560,M1TAPE,SL,NOCOMP,DFDSS,BACKUP,DFDSS.OS39M1.BKP
IEF234E K 0560,M1TAPE,PVT,DFDSS,BACKUP
IEF404I DFDSS - ENDED - TIME=11.15.28

FakeTape emulation worked for both SL and NL tapes.

Chapter 8. Loading CD-ROM systems

CD-ROMs containing S/390 operating systems and other program products are available for a limited number of purposes. There are currently two groups of these CD-ROMs:
• Those available to any customer with the proper license for the product. At the time of writing, only VM/ESA is available in this group.
• Those available only to members of IBM’s S/390 Partners in Development (PID) organization.1 Current releases of VSE/ESA, VM/ESA, and OS/390 -- all with additional program products typically used by developers -- are available on CD-ROM. These are known as Application Development (AD) systems, and, by extension, the term “AD CD-ROM system” is often used.

The CD-ROM versions are based on the same releases available (on tape) through the standard IBM software distribution processes; distribution on CD-ROM involves no modifications to the software itself. However, the versions distributed on CD-ROM, especially the PID versions, are considerably more customized than the standard distributions.

In general, the PID versions (once installed on the system disks) are immediately ready for use. A set of userids is provided, for example. Minor additional customization (such as setting IP addresses) is required, but very little work is required compared to, for example, a ServerPac distribution of OS/390. The penalty for this immediate usability is that many configuration and customization decisions have been made by IBM. In general, the resulting systems are quite suitable for a small development organization, but would probably not be suitable for a large, highly-structured production installation. Since the target for the PID CD-ROM systems is smaller development organizations, the PID CD-ROM systems have been very well received.

If you are a PID member, or if you use the VM/ESA CD-ROM distribution, this chapter may be of interest. If you are in neither of these groups, then the resource definitions and startup scripts explained in this chapter may be of interest, but you might skip the mechanics of working with the CD-ROMs.

8.1 Basic CD-ROM formats

The fundamental format of the CD-ROMs is PC-compatible. That is, any DOS, OS/2, or Windows operating system (and, as far as we know, most PC UNIX operating systems) can recognize the files and directories on the CD-ROMs. The CD-ROMs typically contain README files (in ASCII), P/390 DEVMAP files (binary), an OS/2 UNZIP program (binary) along with an AIX UNZIP version, and one or more files containing the S/390 materials. From a PC viewpoint, these S/390 materials are binary files and are often very large.

The S/390 material currently will be in one of these forms:
• ZIPed files in AWSCKD format. AWSCKD is a P/390 device manager program that emulates 3380 and 3390 devices. In general, the complete 3380 or 3390 volume is in a single PC file. All the PID releases of OS/390 are in this format.

1 This is part of the larger IBM PartnerWorld for Developers, and both names are used for the group. The AD CD-ROM systems are available to PID members who obtained systems through the PID program.

PID releases of VSE/ESA and VM/ESA also contain a set of these AWSCKD files.
• ZIPed files in AWSFBA format. AWSFBA is a P/390 device manager program that emulates S/390 FBA devices. PID releases of VSE and VM/ESA contain both CKD and FBA volumes. However, the VM/ESA FBA volumes are only partial (mini) FBA volumes and, therefore, were not suitable for our needs. Our VM/ESA installation for the NUMA-Q EFS system needed to use the AWSCKD versions.
• Files in OMA/2 format. The general releases of VM/ESA are available in this format. OMA/2 is a well-defined format for CD-ROM data. With appropriate PC interface programs, files in OMA/2 format appear to the S/390 as a 3422 tape volume. (See the example in “Load VM/ESA service from CD-ROM (OMA/2)” on page 87.)
• Files in AWSTAPE format. AWSTAPE is a P/390 device manager that emulates a 3420 tape drive; that is, the S/390 operating system regards the AWSTAPE device as a tape drive. At the time of writing there are no formal IBM distributions in this format, but it is sometimes used for informal distributions.

OMA/2 and AWSTAPE are very similar in function, although the data format on the CD-ROM is different. Each has advantages and disadvantages.

The AWSCKD and AWSFBA formats are, in essence, complete images of S/390 disk volumes. Within an AWSCKD file, for example, tracks and cylinders are defined. If it is an OS/390 volume, there will be a standard label, a VTOC, probably a VTOC index, and whatever files appear on that volume in an OS/390 context.

In all cases (AWSCKD, AWSFBA, OMA/2, and AWSTAPE), the data is in S/390 format. For example, text is EBCDIC and executables are S/390 binary files suitable for execution by OS/390, VM/ESA, or VSE/ESA.

AWSCKD and AWSFBA files on the CD-ROM are usually in ZIP format simply to save space. It is usually possible to ZIP an AWSCKD 3390-3 (2.8 GB) so that it fits on a CD-ROM (about 600 MB). There is no basic requirement that ZIP files be used and some smaller VSE disk images might not be zipped.

The AWSCKD format was developed for P/390s, where the underlying operating system used to emulate CKD drives is OS/2. OS/2 is a 32-bit operating system and has the usual restriction that a single file cannot be larger than 2 GB. An AWSCKD-emulated 3390-3 requires more than 2 GB, and so is split into two AWSCKD files; the first is 2 GB and the second is about 0.8 GB.2 These two files (for emulated 3390-3 volumes) are handled differently on the PID OS/390 and VM/ESA CD-ROMs.3 For OS/390, the two files are placed in a single ZIP file. For VM/ESA, they are in separate ZIP files. This difference (a single ZIP file versus two ZIP files for an emulated 3390-3) requires different techniques for loading these files for use by FLEX-ES.

2 The two files contain the characters _1 and _2 as the last two characters of each file name. P/390 utilities recognize that the second file is a continuation of the first. Unlike FLEX-ES, the P/390 does not emulate 3390-9 drives. If it did, it would use four 2 GB files plus a 1 GB file to equal the 9 GB contained on a 3390-9 drive.
3 Why? Separate teams produced the VM and OS/390 AD CD-ROM systems and did not coordinate this detail. There is no technical reason for them to be different.
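The handling of a split volume reduces to simple byte-stream concatenation: the _2 file is appended directly after the _1 file. The following throwaway shell sketch (with stand-in file names and stand-in contents, not real volume images) illustrates the idea:

```shell
# Create two stand-in "parts" of a split volume image.
printf 'first-part'  > /tmp/vol_1
printf 'second-part' > /tmp/vol_2

# Concatenating the parts in order reconstructs the complete byte stream,
# which is what the unzip/convert tooling effectively does for _1 and _2.
cat /tmp/vol_1 /tmp/vol_2 > /tmp/vol_full
cat /tmp/vol_full
```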

8.2 FLEX-ES formats

FLEX-ES can handle all the CD-ROM formats discussed above. FLEX-ES emulates CKD and FBA drives, but the internal format (in the raw DYNIX/ptx disk space used for emulation) is different from the AWSCKD format. The CD-ROM formats are handled as follows:
• FLEX-ES provides a utility to convert AWSCKD format to the FLEX-ES format for emulated CKD drives.
• AWSFBA files do not need to go through a FLEX-ES conversion process. Once unzipped, they can be written to the target extent. DYNIX/ptx requires that the data be written in some multiple of 512 bytes, and this must be specified on the utility that is performing the writes. 1024K worked well for our usage.
• Both OMA/2 format and AWSTAPE format are automatically recognized and handled by the FLEX-ES FakeTape emulated tape drive support.4
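For the AWSFBA case, the 512-byte-multiple rule simply means choosing a block size such as 1024K on the copy utility. The sketch below demonstrates the pattern with scratch files standing in for the unzipped image and for the raw ptx volume; on a real system the output would be a raw device such as one under /dev/vx/rdsk/s390dg, and the file names here are hypothetical:

```shell
# Stand-in for an unzipped AWSFBA image: 1 MB of data (2048 x 512 bytes).
dd if=/dev/zero of=/tmp/fba.img bs=512 count=2048 2>/dev/null

# Copy to the (stand-in) target extent; bs=1024k is a multiple of 512,
# satisfying the raw-write requirement described above.
dd if=/tmp/fba.img of=/tmp/rawvol bs=1024k 2>/dev/null

# Verify the copy is byte-identical.
cmp -s /tmp/fba.img /tmp/rawvol && echo "copy verified"
```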

The only problem with these formats is that, at the time of writing, neither DYNIX/ptx nor FLEX-ES included an UNZIP utility that could process the zipped files on the CD-ROMs.

8.3 Obtaining an UNZIP program

A PC-compatible UNZIP program that runs under DYNIX/ptx is available at the FSI support.funsoft.com site. You can download it (in binary, of course) and place it in a DYNIX/ptx file. We placed it in /usr/flexes/unzip-5.40/unzip. (In retrospect, /usr/local/bin would probably be a better location for it.) Please note that gunzip and similar programs are not compatible with the PC ZIP format used on the CD-ROMs.

8.4 ptx volume names

All the examples in this chapter (and in most of this redbook) use ptx volume names (or SVM volume names, if you prefer) that look something like this:

/dev/vx/rdsk/s390dg/4133903s1

The naming convention we used is described in “Configuring emulated disk volumes” on page 33. All the S/390 volumes were created in the same directory, /dev/vx/rdsk/s390dg. A raw disk,5 such as 4133903s1, will hold all (or part of) an S/390 volume.

8.5 Installing the OS/390 AD CD-ROM system

We installed the PID OS/390 V2R9 AD CD-ROM release on our NUMA-Q EFS system. This was the most current OS/390 AD CD-ROM system available at that time. This release is packaged on ten CD-ROMs, with one zipped AWSCKD volume per CD-ROM.

4 FakeTape will not create OMA/2 output “tapes”. It only reads input “tapes” in OMA/2 format. FakeTape can produce AWSTAPE output “tapes”, but the output file must first be initialized with the initawstape utility command. 5 It would be somewhat inaccurate to label this a file name, since no file system is created on the raw disk. However, most people will use the term file name. You should remain aware that you are dealing with raw disks here.

8.5.1 System device layout

The first four volumes (first four CD-ROMs) contain an IPLable system, and we installed only these volumes.6 We also created three 3390-1 volumes for work space.

The AD system is already customized in many ways, including IODF device definitions. It is not the purpose of this redbook to discuss general AD system design. The following sections mention specific S/390 device numbers (“addresses”) such as A80 for the IPL volume. These addresses are included in the AD system; that is, they are included in the IODF distributed with the system.

We decided our initial OS/390 system (using the AD CD-ROM) would use the devices listed in “OS/390 device configuration” on page 68. This represents a basic but useful OS/390 AD CD-ROM system. We did not use all of the devices we defined.

8.5.2 Installation tasks

We needed to perform the following tasks to install and IPL the system:
1. Create the SVM/ptx volumes needed to contain the emulated 3390 drives. The creation of these SVM/ptx volumes is described in “Configuring emulated disk volumes” on page 33, and is not repeated here.
2. UNZIP the required CD-ROM files and convert them to the FLEX-ES CKD format.
3. Create and install the FLEX-ES system and resource definitions needed for our OS/390 system.
4. Start the necessary 3270 terminal session(s) and IPL the system.

8.5.2.1 Unzipping CD-ROM files

We switched to superuser mode, placed the first CD-ROM in the NUMA-Q CD-ROM drive, and entered the following commands from the DYNIX/ptx system console:

# mount -F cdfs /dev/dsk/cd0 /mnt
mount: warning: mounted as

The zip file is in the os390 subdirectory:

# ls /mnt/os390
devmap.mvs   ickdsf.ipl    os39r9.zip   sadss.ipl
devmap.nme   migrate.doc   readme.mvs

Both the _1 and _2 parts of the file, as explained in “FLEX-ES formats” on page 79, are contained in a single zip file. The unzip program feeds the two unzipped files (without any apparent break in the data stream) to the named pipe used below.
• Use the mknod command to create a pipe:

# /etc/mknod /tmp/esapipe p

• Use the unzip command to decompress the zip image into the pipe in the background (using & at the end of the command line):

6 The other six volumes contain DLIBs, DB2, CICS, and IMS. For our initial usage of NUMA-Q EFS we did not need these and did not restore them.

80 NUMA-Q and S/390 Emulation # /usr/flexes/unzip-5.40/unzip -p /mnt/os390/os39r9 > \ /tmp/esapipe & [1] 13935

13935 is the background process id generated by DYNIX/ptx. The command is shown as two lines here, with an escaped new line character (that is, a single backslash immediately followed by a return character) used for the line break. You could enter it as a single line.
• Because of the -p parameter in the unzip command, indicating output to a pipe, the next process will essentially be waiting for something to read from the pipe. This next process is the FLEX-ES utility ckdconvaws, which picks up the unzipped disk image from the pipe and writes it out to our emulated DASD:

# /usr/flexes/bin/ckdconvaws /tmp/esapipe /dev/vx/rdsk/s390dg/2133903s \
3390-3
The following slices will be formatted to create one CKD disk:
  /dev/vx/rdsk/s390dg/2133903s1 (cylinders 0 - 1117)
  /dev/vx/rdsk/s390dg/2133903s2 (cylinders 1118 - 2235)
  /dev/vx/rdsk/s390dg/2133903s3 (cylinders 2236 - 3342)

Do you wish to continue (default: n) [y,n,?] y
Max head = 14, cyl = 0001, blks = 57    (Note: 0001 will start incrementing)
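The mknod/unzip/ckdconvaws sequence above is an instance of a general named-pipe pattern: a background writer streams data into the FIFO while a foreground reader consumes it, so the multi-gigabyte unzipped image never needs to land in a temporary file. A tiny stand-alone illustration, with printf and cat standing in for unzip -p and ckdconvaws (the pipe name here is arbitrary):

```shell
# Create a named pipe (mkfifo is the modern equivalent of "mknod ... p").
mkfifo /tmp/demopipe

# Background writer: stands in for "unzip -p ... > /tmp/esapipe &".
printf 'emulated volume data' > /tmp/demopipe &

# Foreground reader: stands in for ckdconvaws reading the pipe.
cat /tmp/demopipe     # prints: emulated volume data

wait                  # reap the background writer
rm /tmp/demopipe
```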

We used the same process to restore the OS3R9A, OS39M1, and OS39H9 volumes. We issued the umount command to unmount each CD-ROM before mounting the next one.

8.5.2.2 FLEX-ES resource definitions

Before the newly installed/restored OS/390 system can be IPLed, we must define the hardware and system resources to the FLEX-ES Resource Administrator. “FLEX-ES configuration for OS/390 - single instance” on page 115 shows the input file that defines both the system and hardware resources for our OS/390 system. We specify the name of this file as an argument for the FLEX-ES configuration compiler:

$ /usr/flexes/bin/cfcomp /usr/flexes/rundir/sysconf.s390
Start FLEX-ES Configuration Utility
Configuration processing *SUCCEEDED* with no errors
Data Space Manager Terminated

This creates files os39029.syscf and s390.rescf. We can then invoke the
resource administrator to activate our resources:

$ su
Password:          (<--- enter root password when this prompt is shown)
# cd /usr/flexes/bin
# ./resadm -T      (<--- To terminate currently active resources)
# ./resadm -r      (<--- To check that no resources are left)
# ./resadm -x /usr/flexes/rundir/s390.rescf    (<--- To activate our resources)
# exit
$ cd /usr/flexes/rundir
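The terminate/verify/activate sequence lends itself to a small wrapper. The sketch below is hypothetical: resadm exists only on a FLEX-ES system, so here the function merely records the commands it would issue; on a real system you would run each command from /usr/flexes/bin instead of assigning it to a variable.

```shell
# Hypothetical wrapper for the resadm sequence shown above.
activate_resources() {
    rescf=$1
    for cmd in "./resadm -T" "./resadm -r" "./resadm -x $rescf"; do
        LAST=$cmd      # on a FLEX-ES system: (cd /usr/flexes/bin && $cmd)
    done
}
activate_resources /usr/flexes/rundir/s390.rescf
echo "last command would be: $LAST"
```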

Chapter 8. Loading CD-ROM systems 81

8.5.2.3 IPL OS/390
The ./resadm -x command activates/refreshes S/390 resources, including
emulated terminals. We can now invoke the shell script that uses the
FLEX-ES command line interface (CLI). It allows us to issue FLEX-ES mount
commands and an ipl command.

$ sh shos

We previously created the shos shell script file. The contents are:

PATH=/usr/flexes/bin:$PATH; export PATH
flexes os39029.syscf
echo 'mount 700 osmstcon' | flexescli localhost os39029
echo 'mount 701 osaltcon' | flexescli localhost os39029
echo 'mount 702 osterm1' | flexescli localhost os39029
echo 'mount 703 osterm2' | flexescli localhost os39029
(.... we skip over many more mount commands here....)
echo ' '
echo '  Connect a terminal to the session '
echo '  identified as osmstcon '
echo '  and press enter to continue the IPL '
echo ' '
read anything    # <--- To cause a prompt. Press Enter as a response
echo 'ipl A80 0A82CS' | flexescli localhost os39029
flexescli localhost os39029

Note that the mount commands are not the DYNIX/ptx file system mounts, but are FLEX-ES command line interface (CLI) mounts. Since the devices are terminals, these commands provide names for the terminal sessions, and these names appear on the Terminal Solicitor panel.

The read command in the shell script causes the script to pause. This pause allows us to activate the 3270 terminal session that will be the OS/390 master console. This is necessary before performing an IPL.7
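The pause pattern in the shos script can be sketched in isolation. In real use, read blocks until the operator presses Enter; this non-interactive sketch feeds the Enter keypress from a here-document, and a plain variable assignment stands in for the flexescli ipl command.

```shell
# Sketch of the pause-before-IPL pattern from the shos script.
echo ' Connect a terminal to the session identified as osmstcon'
echo ' and press enter to continue the IPL'
read anything <<EOF

EOF
# Stand-in for: echo 'ipl A80 0A82CS' | flexescli localhost os39029
STATE='ipl issued'
echo "$STATE"
```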

To connect to the FLEX-ES Terminal Solicitor, we connect a TN3270 client to the DYNIX/ptx IP address on port 24, the default port for the Terminal Solicitor. The client system can be on any LAN that is connected to DYNIX/ptx. The Terminal Solicitor presents a panel with the names of the available8 3270 devices (these are normally made available by the FLEX-ES CLI mount command):

7 At the time of our project, the NUMA-Q EFS system did not emulate an SE/HMC software console. A later release added the SE/HMC software console function. 8 Once a 3270 terminal device is selected by a user, it is removed from the Terminal Solicitor panel. When the device is freed by the user, it reappears on the Terminal Solicitor panel and can be selected by another user.

Welcome to the FLEX-ES Terminal Solicitor (node: q390dyn)

Please select (X) the desired service and press enter (PA1 to exit; CLEAR to refresh)

_ osterm4    _ osterm3    _ osterm7    _ osterm9
_ osterm5    _ osterm11   _ osterm8    _ osterm14
_ osterm12   _ osterm15   _ osterm6    _ osterm10
_ osterm13   _ osterm16   _ osterm20   _ osterm17
_ osterm18   _ osterm22   _ osterm21   _ osterm19
_ osterm26   _ osterm25   _ osterm24   _ osterm28
_ osterm29   _ osterm23   _ osterm27   _ osterm30
_ osterm1    _ osterm2    _ osmstcon   _ osaltcon

Our combination of resource definitions and mount commands caused the OS/390 master console (at address 700 in the AD system) to appear as a terminal named osmstcon on the Terminal Solicitor. When we select this terminal (by marking it with an X and pressing Enter9), the Terminal Solicitor screen (on our client TN3270 session) is replaced by a blank screen (because nothing has been sent to the 3270 at address 700 yet).

At this point we returned to the window with our telnet session, which was waiting for a response to the read command in the shell script. Entering a simple line return satisfies the read command; the shell script then issues an IPL command and the OS/390 IPL process begins. Refer to “Operation and use” on page 70 for additional comments about using OS/390.

The last command of this shell script, flexescli localhost os39029, puts the telnet session in FLEX-ES CLI mode, with a flexes> prompt replacing the default DYNIX/ptx prompt. You can enter flexes commands here,10 or enter a quit command to return the telnet session to a DYNIX/ptx prompt. In most cases, you will not need this telnet session while you are using OS/390.

You need to reply to the normal OS/390 startup messages, using the TN3270 session you started with the Terminal Solicitor. Once OS/390 is functional, you would start another TN3270 client session, connect it to the Terminal Solicitor, and select another terminal name on the solicitor screen. If everything is set up correctly, you would then receive the USSTAB logo screen and you could log onto TSO (or CICS, or whatever else you have enabled for a VTAM 3270 logon).

8.6 Installing the VM/ESA AD CD-ROM system
The latest version of the Application Development System for VM/ESA available to us was version 2.4.0, dated September 1999. We selected the 3390 System Package, which comes on two CD-ROMs and contains four 3390-3 volumes (each mapped to two zip files) plus the spool volume, 240SPL, a 3390-1 in a single zip file. Note that there is a base preconfigured CD-ROM, available to all

9 Many 3270 emulator users set up the right-hand Ctrl key as the logical 3270 Enter key, since this most closely matches a “real” 3270 keyboard. We did this and, using the IBM PCOM emulator, we pressed the right-hand Ctrl key. 10 For example, instead of hard-coding the IPL command in the shell script, you could enter it at this point. This might be more convenient if you frequently change the IPLPARM value used for starting your OS/390 system.

Chapter 8. Loading CD-ROM systems 83 customers as a feature of their VM/ESA license, which only has two 3390-3 volumes and consists of only VM/VSE without optional program products.

8.6.1 System device layout In addition to 120 MB memory and two undedicated CPUs specified in the VM system definitions for FLEX-ES, the following devices were defined for our VM/ESA system:

Address   Type        Volser   DYNIX/ptx & FLEX-ES   Details

020-02F   3270        -        vmcons, vm021-2f      3270 consoles
00C       2540 RDR    N/A      OFFLINE               Card reader
00D       2540 PUN    N/A      OFFLINE               Card punch
00E       1403        N/A      OFFLINE               Impact printer
120       3390-3      240RES   0633903s1             IPL disk
121       3390-3      240W01   0733903s1             System disk
122       3390-3      240W02   0833903s1             System disk
123       3390-3      240W03   0933903s1             System disk
124       3390-3      240SPL   1033903s3             Spool disk
600       CTCA        N/A      N/A                   CTCA
620       CTCA        N/A      N/A                   CTCA
800-801   3172 pair   N/A      /dev/net/pe1          Ethernet

8.6.2 Installation tasks
We needed to perform the following tasks to install and IPL the system:
1. Create the ptx volumes needed to contain the emulated 3390 drives. The
   creation of these ptx volumes is described in “Configuring emulated disk
   volumes” on page 33, and the description of the process is not repeated
   here.
2. Mount the preconfigured CD, unzip the required CD-ROM files, and convert
   them to the FLEX-ES CKD format.
3. Create and install the FLEX-ES system and resource definitions needed
   for our VM/ESA system.
4. Start the necessary 3270 terminal session(s) and IPL the system.
5. Load the Recommended Service Upgrade (RSU) in OMA/2 format on CD-ROM.

8.6.2.1 Unzipping CD-ROM files to the FLEX-ES CKD
Place the first CD-ROM in the NUMA-Q CD-ROM drive, make sure you are in
superuser mode, and enter the following at the ‘#’ prompt in DYNIX/ptx:

# mount -F cdfs /dev/dsk/cd0 /mnt
mount: warning: mounted as

The zip files are in the vmesa subdirectory:

# ls /mnt/vmesa
240res_1.zip   devmap.1vm   disk.map       system.cfg   vmppnss.zip
240res_2.zip   devmap.n99   p390nss.zip    user.dir
240w02_1.zip   devmap.nme   readme.ckd     vm3390.pac
240w02_2.zip   devmap.p99   sdiskadd.580   vmesa240.iop

CD-ROM 1 (of 2) has the 240RES and 240W02 volumes on it. Since these are
3390-3 volumes, using two zip files each, we must use a pipe technique with
DYNIX/ptx to feed the two unzipped files into the FLEX-ES program that will
convert them into the required CKD format.11 We use three separate pipes
for this process: the first two for the two zip extents, and the third to
carry the first two as a single, concatenated stream that is in turn fed
into the conversion program. For documentation clarity, we use the full
path names for the commands and files:

• Use the mknod command to create the pipes (again, in superuser mode):

  /etc/mknod /tmp/vmpipe1 p
  /etc/mknod /tmp/vmpipe2 p
  /etc/mknod /tmp/vmpipe3 p

• Use the unzip command to decompress the zip images into the pipes in the
  background (using & at the end of the command line to indicate a
  background process):

  /usr/flexes/unzip-5.40/unzip -p /mnt/vmesa/240res_1.zip > /tmp/vmpipe1 &
  /usr/flexes/unzip-5.40/unzip -p /mnt/vmesa/240res_2.zip > /tmp/vmpipe2 &

• Concatenate the two pipes:

  cat /tmp/vmpipe1 /tmp/vmpipe2 > /tmp/vmpipe3 &

• Invoke the FLEX-ES utility ckdconvaws to pick up the decompressed disk
  image from the pipe, and write it out to our emulated DASD:

  /usr/flexes/bin/ckdconvaws /tmp/vmpipe3 /dev/vx/rdsk/s390dg/0633903s \
  3390-3
  The following slices will be formatted to create one CKD disk:
  /dev/vx/rdsk/s390dg/0633903s1 (cylinders 0 - 1117)
  /dev/vx/rdsk/s390dg/0633903s2 (cylinders 1118 - 2235)
  /dev/vx/rdsk/s390dg/0633903s3 (cylinders 2236 - 3342)

Do you wish to continue (default: n) [y,n,?] y
Max head = 14, cyl = 0001, blks = 57   (Note: 0001 will start incrementing)

The same process is used for the 240W02 volume. To load the other three DASD volumes from CD-ROM disk 2, first use a umount /mnt command before removing the current CD, and then issue the same mount command described above to begin working with the next volume. The unzip and restore is the same as for the first volume (but with different ptx volume names, of course).
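The three-pipe concatenation can be sketched on a generic POSIX system. Here mkfifo stands in for /etc/mknod ... p, printf stands in for the two background unzip commands, and a plain cat stands in for the ckdconvaws consumer; the strings and paths are illustrative.

```shell
# Generic sketch of the three-pipe technique for the split VM/ESA zips.
P1=/tmp/part1.$$.pipe; P2=/tmp/part2.$$.pipe; P3=/tmp/joined.$$.pipe
rm -f "$P1" "$P2" "$P3"
mkfifo "$P1" "$P2" "$P3"

printf 'cylinders 0-1117,'   > "$P1" &   # stands in for: unzip -p 240res_1.zip
printf 'cylinders 1118-3338' > "$P2" &   # stands in for: unzip -p 240res_2.zip
cat "$P1" "$P2" > "$P3" &                # concatenate the extents in order

RESULT=$(cat "$P3")                      # stands in for ckdconvaws reading $P3
rm -f "$P1" "$P2" "$P3"
echo "$RESULT"
```

Because cat opens the pipes in argument order, the second extent is not read until the first has been drained, which preserves the cylinder order of the 3390-3 image.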

8.6.2.2 FLEX-ES resource definitions
Before we can IPL VM/ESA, we need to tell the FLEX-ES Resource Manager
(through resadm) what resources we have and where they are located.
Appendix A.3, “FLEX-ES configuration for VM/ESA” on page 118 shows the input

11 This was not required for the OS/390 AD system because it had both files (for a 3390-3 volume) zipped into the same zip file. Unzipping this file automatically produced the two files that contain the 3390-3 volume. For the VM system, the two files that contain a 3390-3 volume are zipped into separate zip files. In order to feed both these files to the ckdconvaws program, which takes a single input file, the pipe technique is required.

Chapter 8. Loading CD-ROM systems 85 file that defines both the system and the resources for our VM/ESA system. Running the configuration compiler against this file will produce the two output files which will be used to activate our resources and IPL our system: $ /usr/flexes/bin/cfcomp /usr/flexes/rundir/sysconf.vm Start FLEX-ES Configuration Utility Configuration processing *SUCCEEDED* with no errors Data Space Manager Terminated

This creates files vm.syscf and vmnuma.rescf. We can then tell the Resource
Manager to activate our resources:

$ su
Password:          (<--- enter root password when this prompt is shown)
# cd /usr/flexes/bin
# ./resadm -T      (<--- To terminate currently active resources)
# ./resadm -r      (<--- To check that no resources are left)
# ./resadm -x /usr/flexes/rundir/vmnuma.rescf    (<--- activate vm resources)
# exit             (<--- To leave superuser mode)
$ cd /usr/flexes/rundir

8.6.2.3 IPL VM/ESA
The resadm -x command in the previous section activates/refreshes our
resources, including our emulated terminals. We can then invoke the FLEX-ES
Command Line Interface (CLI) directly, or use a simple shell script to
identify our terminal resources and IPL VM/ESA. The following shell script
identifies the names of our emulated 3270s at addresses 020-022 to the
Terminal Solicitor, and waits for us to make a connection with TN3270
before the VM/ESA IPL:

PATH=/usr/flexes/bin:$PATH; export PATH
flexes vm.syscf
echo 'mount 020 vmcons' | flexescli localhost vm
echo 'mount 021 vm021' | flexescli localhost vm
echo 'mount 022 vm022' | flexescli localhost vm
echo ' '
echo '  Connect a terminal to the session '
echo '  identified as vmcons '
echo '  and press enter to continue the IPL '
echo ' '
read anything    # <--- To cause prompt and put the 'enter' in var 'anything'
echo 'ipl 120 020' | flexescli localhost vm
flexescli localhost vm

8.6.2.4 Start 3270 sessions and IPL VM/ESA Once this shell script is invoked, we can connect a TN3270 session to port 24 of DYNIX/ptx to access the Terminal Solicitor. The latter presents the following display:

Welcome to the FLEX-ES Terminal Solicitor (node: q390dyn)

Please select (X) the desired service and press enter
(PA1 to exit; CLEAR to refresh)

_ vmcons _ vm021 _ vm022

Select vmcons, which points to our 3270 address 020 definition. Moving back
to the telnet window and the shell script, we press Enter, and the
ipl 120 020 command is executed. The last command in the shell script puts
us in FLEX-ES CLI mode, with a flexes> prompt replacing the default
DYNIX/ptx prompt. The VM/ESA IPL then proceeds with the stand-alone program
loader (SAPL) panel on our Terminal Solicitor session vmcons.

8.6.2.5 Load VM/ESA service from CD-ROM (OMA/2)
The VM/ESA AD CD-ROM is at a certain service level when created, and it is
expected that VM/ESA’s Recommended Service Upgrade (RSU) will be applied
after installation to bring the system to a current service level. Because
VM/ESA is the only operating system that also delivers service via CD-ROM,
it makes sense to continue using this medium to apply maintenance. FLEX-ES
support of OMA/2 format data can be exploited for this purpose. However,
the process differs from the process used on the P/390, R/390, and S/390
Integrated Server, where all the mounting could be done from either the
VM/ESA system or from the host server’s command prompt. For example, under
the P/390 we start with the following assumptions:
• Logged on to userid MAINT
• Address 181 is to be our OMA/2 tape
• CD-ROM is mapped to OS/2 drive f:

From CMS we would enter:

attach 181 *
Ready;
mount 181 f:\tapes\tape01.tdf
Ready;

To accomplish the same from the NUMA-Q, we do the mounting from DYNIX/ptx,
in superuser mode:

# mount -r -F cdfs -o showdot /dev/dsk/cd0 /mnt
mount: warning: mounted as

The device-specific option showdot on the DYNIX/ptx mount command is
required for successful use of the OMA/2-formatted CD-ROM; omitting it will
cause subsequent FLEX-ES use to fail. From the FLEX-ES CLI prompt, enter:

flexes> mount 181 /mnt:/mnt/tapes/tape01.tdf
flexes>

The /mnt that precedes the colon is a string that is prepended by FLEX-ES’ OMA/2 support as the beginning of the path to files specified within the tape descriptor file. The remainder of the string following the colon is the fully qualified pathname of the tape descriptor file.
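The prefix-colon-path form can be sketched with standard shell parameter expansion. This illustrates only the string format described above, not the actual FLEX-ES parsing code:

```shell
# Split an OMA/2 mount argument into the prefix (prepended to file names
# inside the .tdf) and the tape descriptor file's own path.
ARG='/mnt:/mnt/tapes/tape01.tdf'
PREFIX=${ARG%%:*}    # everything before the first colon -> /mnt
TDF=${ARG#*:}        # everything after it -> /mnt/tapes/tape01.tdf
echo "prefix=$PREFIX tdf=$TDF"
```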

You can then log on to MAINT and attach 181; you will be at the same point as in the previous P/390 example, ready to apply the RSU as instructed in the VM/ESA Service Guide, GC24-5838.

At this point, your CD installation is complete, and your terminal session will proceed as any normal VM/ESA IPL. Refer to “VM/ESA on NUMA-Q EFS” on page 61 for information on operating VM/ESA in this environment.

8.7 Installing the VSE/ESA AD CD-ROM system - CKD format
We installed the PID VSE/ESA V2R4 AD CD-ROM release on our NUMA-Q EFS
system. This was the most current VSE/ESA AD CD-ROM system available. It is
packaged on two CD-ROMs: one for 3390 DASD (with two zipped AWSCKD volumes)
and one for FBA DASD (with three zipped FBA volumes).

The CKD format is provided on two volumes of 3390-2 disks. We chose to define and install the product on logical 3390-3 disks, but the model 2 format is all that is required.

8.7.1 System device layout In addition to the 220 MB memory and two dedicated CPUs we defined for our VSE system, the following devices were defined:

Address   Type        Volser   FLEX-ES definition   Details

01F       3270        -        VSE/ESA console      3270 consoles
00C       2540 RDR    N/A      OFFLINE              Card reader
00D       2540 PUN    N/A      OFFLINE              Card punch
00E       1403        N/A      OFFLINE              Impact printer
150       3390-3      DOSRES   4133903s1            IPL disk
151       3390-3      SYSWK1   4233903s1            System disk
200-206   3270        N/A      non-SNA terminals    3274
300-301   3172        N/A      3172                 Token-Ring
400       CTCA        N/A      N/A                  CTCA
500       3480        N/A      FakeTape             DYNIX file
590-592   3490        N/A      3490                 SCSI tape

8.7.2 Installation tasks
We performed the following tasks to install and IPL the system:
1. Create the ptx volumes needed to contain the emulated 3390 drives. The
   creation of these ptx volumes is described in Appendix B.4, “Allocating
   S/390 volumes” on page 133, and the description of the process is not
   repeated here.
2. Mount the CD-ROM, unzip the required files, and convert them to the
   FLEX-ES CKD format.
3. Create and install the FLEX-ES system and resource definitions needed
   for our VSE/ESA system.
4. Start the necessary 3270 terminal session(s) and IPL the system.

8.7.2.1 Unzipping CD-ROM files to the FLEX-ES CKD
From the DYNIX/ptx system console, after having the CKD format CD-ROM
(labeled 3390 System Package) mounted and ready, make sure you are in
superuser mode (su) and enter at the # prompt:

# mount -F cdfs /dev/dsk/cd0 /mnt
mount: warning: mounted as

The zip files are in the vse subdirectory:

# ls /mnt/vse
dosres.zip   devmap.vse   vseutils.ipl
syswk1.zip   devmap.n99   devmap.nme
readme.ckd   devmap.p99   vse240.iop

This CD-ROM has both DOSRES and SYSWK1 on it. We used a named pipe under
DYNIX/ptx to feed each zipped file into the FLEX-ES program that will
convert it into the required CKD format. For documentation clarity, we show
the full path names for the commands and files:

• Use mknod to create the named pipe (again, in superuser mode):

  /etc/mknod /tmp/vsepipe p

• Use the unzip command to decompress the zip images into the pipe. Using
  the & at the end of the command line runs this process in the background:

  /usr/flexes/unzip-5.40/unzip -p /mnt/vse/dosres.zip > /tmp/vsepipe &

• Invoke the FLEX-ES utility ckdconvaws to pick up the decompressed disk
  image from the pipe, and write it out to our emulated DASD. The device
  type and model (in this case 3390-3) is very important; it tells the
  system how much disk space to use for this function.

  /usr/flexes/bin/ckdconvaws /tmp/vsepipe /dev/vx/rdsk/s390dg/4133903s1 \
  3390-3
  The following slices will be formatted to create one CKD disk:
  /dev/vx/rdsk/s390dg/4133903s1 (cylinders 0 - 1117)
  /dev/vx/rdsk/s390dg/4133903s2 (cylinders 1118 - 2235)
  /dev/vx/rdsk/s390dg/4133903s3 (cylinders 2236 - 3342)

Do you wish to continue (default: n) [y,n,?] y
Max head = 14, cyl = 0001, blks = 57   (Note: 0001 will start incrementing)

The same process is used to transfer SYSWK1 to disk 4233903s1. We found that when using named pipes, we had fewer problems if a specific name was used only once during a logon session. When running this process again for the second disk (and any subsequent disks), we used a different name for the pipe each time.

An automated shell script for entering these commands is documented in “Automating loading of CD-ROM systems” on page 92.

8.7.2.2 FLEX-ES resource definitions
Before we can IPL VSE/ESA, we have to tell the FLEX-ES Resource
Administrator what resources we have and where they are located. “FLEX-ES
configuration for VSE/ESA - CKD format” on page 120 shows the input file
that defines both the system and the resources for our VSE/ESA system.
Running the configuration compiler against this file produces the two
output files that will be used to activate our resources and IPL our
system:

$ /usr/flexes/bin/cfcomp /usr/flexes/rundir/sysconf.vseckd
Start FLEX-ES Configuration Utility
Configuration processing *SUCCEEDED* with no errors

Data Space Manager Terminated

This creates files vseckd.syscf and vsecres.rescf. We can then invoke the
resource administrator to activate our resources:

$ su
Password:          (<--- enter root password when this prompt is shown)
# cd /usr/flexes/bin
# ./resadm -T      (<--- To terminate currently active resources)
# ./resadm -r      (<--- To check that no resources are left)
# ./resadm -x /usr/flexes/rundir/vsecres.rescf   (<--- To activate our resources)
# ./resadm -r      (<--- To check that the new resources are there)
# exit
$ cd /usr/flexes/rundir

8.7.2.3 IPL VSE/ESA
The resadm command above activates our resources, including our emulated
terminals. We then invoke the shell script that uses the FLEX-ES command
line interface (CLI) and allows us to issue the IPL command. Here is a
sample script:

PATH=/usr/flexes/bin:$PATH; export PATH
flexes vseckd.syscf
echo 'mount 01f vseccons' | flexescli localhost vseckd
echo 'mount 200 vsec200' | flexescli localhost vseckd
read something
echo 'ipl 150' | flexescli localhost vseckd
flexescli localhost vseckd

When the shell script pauses (for the read command), we then start a TN3270 session to NUMA-Q at port 24 in order to access the FLEX-ES Terminal Solicitor. The solicitor presents a panel with selectable predefined terminal names, like this:

Welcome to the FLEX-ES Terminal Solicitor (node: q390dyn)

Please select (X) the desired service and press enter
(PA1 to exit; CLEAR to refresh)

_ vseccons _ vsec200

Select vseccons, which points to our 3270 address 01F definition. Back in the window with the shell script, press enter (to complete the read) and the IPL command will be executed. The last shell script command puts us in the FLEX-ES CLI mode, with a flexes> prompt replacing the default DYNIX/ptx prompt. The VSE/ESA IPL would then proceed with the normal VSE/ESA IPL process on our Terminal Solicitor session vseccons.

8.7.2.4 Manual IPL
You can manually IPL your VSE system by editing the shvsec script and
removing the ipl 150 command (or commenting it out):

PATH=/usr/flexes/bin:$PATH; export PATH
flexes vseckd.syscf
echo 'mount 01f vseccons' | flexescli localhost vseckd
echo 'mount 200 vsec200' | flexescli localhost vseckd

# echo 'ipl 150' | flexescli localhost vseckd    (this line is commented out with the '#')
flexescli localhost vseckd

To use it this way, run the shvsec shell script and then enter your IPL
command when the shell script ends and gives you the flexes> prompt. For
example:

$ sh shvsec
FLEX-ES: Copyright (C) Fundamental Software, Inc., 1991-2000
This FLEX-ES module is licensed to
International Business Machines Corporation
Poughkeepsie, New York
flexes> ipl 150
flexes>

Again, you will see the normal VSE/ESA IPL process on the vseccons terminal session.

8.8 Installing the VSE AD CD-ROM system - FBA format
We installed the VSE/ESA V2R4 system in FBA format using the same methods
as for the CKD version, with only a few exceptions. The FBA version
requires three disk volumes, and the zipped FBA format is transferred
directly to disk with no need to convert the data.

8.8.1 System device layout The FBA system was defined with 220 MB and one dedicated CPU. The disk drives defined for the FBA system (addresses 140-142) are the main configuration change from the CKD system:

Address   Type       Volser   DYNIX/ptx & FLEX-ES   Details

01F       3270       -        VSE/ESA console       3270 consoles
00C       2540 RDR   N/A      OFFLINE               Card reader
00D       2540 PUN   N/A      OFFLINE               Card punch
00E       1403       N/A      OFFLINE               Impact printer
140       9336-20    DOSRES   5193362s1             IPL disk
141       9336-20    SYSWK1   5293362s1             Work disk 1
142       9336-20    SYSWK2   5393362s1             Work disk 2
200-206   3270       N/A      non-SNA terminals     3274
300-301   3172       N/A      3172                  Token-Ring
400       CTCA       N/A      N/A                   CTCA
500       3480       N/A      FakeTape              DYNIX file
590-592   3490       N/A      3490                  SCSI tape

8.8.2 Installation tasks
We needed to perform the following tasks to move the CD-ROM files to our
FBA drives.

8.8.2.1 Unzipping CD-ROM files to the FLEX-ES FBA disks
From the DYNIX/ptx system console, after having the CD-ROM (labeled FBA
System Package) mounted and ready, make sure you are in superuser mode (su)
and enter the following at the # prompt:

# mount -F cdfs /dev/dsk/cd0 /mnt
mount: warning: mounted as

The zip files are in the vse subdirectory:

# ls /mnt/vse
dosres.zip   devmap.vse   vseutils.ipl
syswk1.zip   readme.fba   syswk2.zip
devmap.nme

This CD contains zipped files for DOSRES, SYSWK1, and SYSWK2. Since no
conversion of the data is required, the files can be unzipped directly to
the ptx volumes using the following steps:

• Use the unzip command to decompress the zip images and copy them directly
  onto the disk. Note that DYNIX/ptx requires that writes to raw disk space
  be in multiples of 512 bytes. The unzip command by itself would not
  observe this restriction, so we piped it through dd, specifying a block
  size of 1024 KB:

  /usr/flexes/unzip-5.40/unzip -p /mnt/vse/dosres.zip | dd \
  of=/dev/vx/rdsk/s390dg/5193362s1 bs=1024k
  0+55064 records in
  0+55064 records out

We used the same steps for SYSWK1 and SYSWK2 to load ptx volumes 5293362s1
and 5393362s1. Since the FBA devices are smaller than 2 GB, only one ptx
volume is required for this type of DASD.
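The unzip-through-dd arrangement can be sketched on a generic system. Here a plain file stands in for the raw ptx volume and head -c produces an odd-sized stream in place of unzip -p; the point of the sketch is simply that dd passes the stream through intact while issuing its own (large, block-sized) writes.

```shell
# Sketch of re-blocking an arbitrary-sized stream through dd.
OUT=/tmp/fba_demo.$$.img
head -c 1500 /dev/zero | dd of="$OUT" bs=1024k 2>/dev/null
SIZE=$(wc -c < "$OUT" | tr -d ' ')
rm -f "$OUT"
echo "wrote $SIZE bytes"
```

On the real raw device, it is dd (not unzip) that performs the writes, which is what satisfies the 512-byte-multiple requirement noted above.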

All of the remaining process is exactly like the CKD example, except that the IPL address is now 140. “FLEX-ES configuration for VSE/ESA - FBA format” on page 122 shows the input file for the FBA system configuration.

8.9 Automating loading of CD-ROM systems
We saved the best for last. Loading the CD-ROM-based systems, as documented
in the previous sections, requires a number of rather long and easily
mistyped commands. The commands required were to:
• Unzip files from the CD-ROM,
• Pipe multiple files together (for the VM/ESA 3390-3 disk volumes), and
• Convert into the target ptx volume(s).

These tasks are tedious and time consuming. As a partial solution, one of
the team members developed a shell script to largely automate the primary
steps. The awsckd2ptxvol shell script simplifies the process of moving and
converting the zipped files from a CD-ROM to the ptx

volume(s) by taking advantage of named pipes and doing appropriate
housekeeping.

In “Installation tasks” on page 84, the process of installing a VM/ESA disk
was documented. In summary, the following commands were required:

/etc/mknod /tmp/vmpipe1 p
/etc/mknod /tmp/vmpipe2 p
/etc/mknod /tmp/vmpipe3 p
/usr/flexes/unzip-5.40/unzip -p /mnt/vmesa/240res_1.zip > /tmp/vmpipe1 &
/usr/flexes/unzip-5.40/unzip -p /mnt/vmesa/240res_2.zip > /tmp/vmpipe2 &
cat /tmp/vmpipe1 /tmp/vmpipe2 > /tmp/vmpipe3 &
/usr/flexes/bin/ckdconvaws /tmp/vmpipe3 /dev/vx/rdsk/s390dg/0633903s 3390-3
rm /tmp/vmpipe1
rm /tmp/vmpipe2
rm /tmp/vmpipe3

The sample shell script awsckd2ptxvol was written during the residency to
minimize the need to retype these commands for every volume. By using the
awsckd2ptxvol script, the above sequence is reduced to:

awsckd2ptxvol /dev/vx/rdsk/s390dg/0633903s 3390-3 /mnt/vmesa/240res_1.zip \
/mnt/vmesa/240res_2.zip

Because the script uses uniquely named pipes, multiple invocations of the
script can run at the same time, rather than sequentially as with the
manual method. The shell script supports both the single zip file and the
separate zip file packaging described previously for the VM/ESA and OS/390
systems on CD-ROM.
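The parallel-invocation idea is just ordinary shell job control: background each run and wait for all of them. In this sketch a simple echo to a log file stands in for each `sh awsckd2ptxvol ... &` invocation; the volume names are illustrative.

```shell
# Sketch of running several conversions in parallel, then waiting.
LOG=/tmp/parallel_demo.$$.log
: > "$LOG"
for vol in 0333903s 0433903s 0533903s; do
    ( echo "done $vol" >> "$LOG" ) &    # stands in for: sh awsckd2ptxvol ... &
done
wait                                    # block until all background jobs finish
COUNT=$(wc -l < "$LOG" | tr -d ' ')
rm -f "$LOG"
echo "finished $COUNT parallel conversions"
```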

awsckd2ptxvol source listing The complete source of the script is presented here. The comments section in the header provides information on its function and usage.

#!/bin/sh
#
# Sample shell script developed for ITSO residency. No support expressed
# or implied.
#
# DYNIX/ptx shell script to convert zipped CKD volumes from
# IBM S/390 preconfigured system CD's for P/390, R/390 and
# S/390 Integrated Server to FLEX-ES(tm) formatted disk volumes under
# DYNIX/ptx. This shell script has the advantage of not requiring
# staging file space for the unzipped data
# that is the input for the FLEX-ES ckdconvaws program.
#
# Original version by Gary Eheman September 8, 2000
# Parm enhancements by Mark Majhor, Sept 8, 2000
# Error checking by Cliff White, Sept 11, 2000
# Reorder parms to allow one or two zips by Gary Eheman Nov 9, 2000
# Add more variable names for readability by Mark Majhor
#
# Invocation: sh awsckd2ptxvol $1 $2 $3 <$4>
#   $1 = ptx disk volume to receive results from ckdconvaws
#   $2 = S/390 volume type parm required by ckdconvaws (e.g. 3390-3)
#   $3 = first zip file specification (contains cylinders 0-n)
#   $4 = optional second zip file specification (contains cyls n+1 to end)
#
# Note that some preconfigured systems will have two zip files per S/390
# volume. The first contains the first 2Gig of a 3390-3 and the second
# contains the remaining 0.7Gig of the 3390-3. Some OS/390 CD's have
# been observed to have both components zipped into a single zip file.
# This shell script should work with either packaging type.

# The zip file name(s) were placed at the end of the invocation string
# to simplify coding since the second zip file is optional. Perhaps
# not the most intuitive syntax, but this order simplifies the coding.
#
# This whole process can be run in parallel. Just background the shell
# script itself.
#
#
# Get program name
#
PROGNAME=`basename $0`
# Check all the input variables before you go anywhere.
if [ $# -lt 3 ]; then
    echo "Usage: sh $PROGNAME ptx_volume vol_type Zip_file_1 [Zip_file_2]"
    exit 1
fi

#
# assign more meaningful variable names for human readability
#
VOL_DEST=$1    # volume destination
VOL_TYPE=$2    # S/390 volume type (for example 3390-3)
ZIP_1=$3       # first zip file
ZIP_2=$4       # optional second zip file
#
# Change these variable locations to suit your environment
#
# unzip is available publicly in the Info-ZIP implementation
# available from ftp://ftp.freesoftware.com/pub/infozip/
#
UNZIP=/usr/local/bin/unzip
CKDCONVAWS=/usr/flexes/bin/ckdconvaws

# Verify the output volume exists and is writable
if [ ! -w $VOL_DEST ]; then
    echo "The output volume is not present or permissions have not been set -- exiting"
    exit 1
fi
# Verify that the input file(s) exist
if [ -f $ZIP_1 ]; then
    echo $ZIP_1 " exists. Proceeding."
else
    echo $ZIP_1 " does not exist. Aborting."
    exit 1
fi
if [ $# -gt 3 ]; then
    if [ -f $ZIP_2 ]; then
        echo $ZIP_2 " exists. Proceeding."
    else
        echo $ZIP_2 " does not exist. Aborting."
        exit 1
    fi
fi

#
# Now create some uniquely-named named pipes for this invocation.
#
UNIQUE=$$      # process id of this script
PIPE1=/tmp/$UNIQUE.1
PIPE2=/tmp/$UNIQUE.2
PIPE3=/tmp/$UNIQUE.3

/etc/mknod $PIPE1 p
/etc/mknod $PIPE2 p
/etc/mknod $PIPE3 p

# If two separate zip files are specified, the unzipped content of both
# must be concatenated together and fed as input to the ckdconvaws utility.
# If a single zip file is specified, then the unzipped content of it
# alone will be fed to the pipe that feeds ckdconvaws.
# If that single zip file contains both parts of a 3390-3, they must
# appear in the correct order (_1 followed by _2) in the zip file.

# Start the unzip of the first file in background.
$UNZIP -p $ZIP_1 > $PIPE1 &

# If there is a second zip file specified, then start it unzipping
# and concatenate the two parts in background.
if [ $# -eq 4 ]; then
    $UNZIP -p $ZIP_2 > $PIPE2 &
    cat $PIPE1 $PIPE2 > $PIPE3 &
else
    # Just one zip file specified on invocation. Send it on.
    cat $PIPE1 > $PIPE3 &
fi

#
# Don't background this - it must complete.
# The pipes are removed in the last step
#
$CKDCONVAWS $PIPE3 $VOL_DEST $VOL_TYPE

#
# clean up the pipes
#
/bin/rm -f $PIPE1 $PIPE2 $PIPE3

Sample uses of awsckd2ptxvol
Here are two sample invocations of awsckd2ptxvol. In the first example we
process two separate zip files to create one 3390-3 volume:

$ awsckd2ptxvol /dev/vx/rdsk/s390dg/0333903s1 3390-3 \
/mnt/vmesa/240res_1.zip /mnt/vmesa/240res_2.zip
/mnt/vmesa/240res_1.zip exists. Proceeding.
/mnt/vmesa/240res_2.zip exists. Proceeding.
The following slices will be formatted to create one CKD disk:
/dev/vx/rdsk/s390dg/0333903s1 (cylinders 0 - 1117)
/dev/vx/rdsk/s390dg/0333903s2 (cylinders 1118 - 2235)
/dev/vx/rdsk/s390dg/0333903s3 (cylinders 2236 - 3342)

Do you wish to continue (default: n) [y,n,?] y
Max head = 14, cyl = 3343, blks = 57
Cylinder 0 Completed in nnn milliseconds
...
Cylinder 3342 Completed in nnn milliseconds
CKD Conversion Completed (3339 cyls copied, 0 cyls ignored)

Here is an OS/390 unzip in which the zip file contains both parts in one file.

$ awsckd2ptxvol /dev/vx/rdsk/s390dg/0333903s1 3390-3 /mnt/os390/os39r9.zip
/mnt/os390/os39r9.zip exists. Proceeding.
The following slices will be formatted to create one CKD disk:
/dev/vx/rdsk/s390dg/0333903s1 (cylinders 0 - 1117)
/dev/vx/rdsk/s390dg/0333903s2 (cylinders 1118 - 2235)
/dev/vx/rdsk/s390dg/0333903s3 (cylinders 2236 - 3342)

Do you wish to continue (default: n) [y,n,?] y
Max head = 14, cyl = 3343, blks = 57
Cylinder 0 Completed in nnn milliseconds
...
Cylinder 3342 Completed in nnn milliseconds
CKD Conversion Completed (3339 cyls copied, 0 cyls ignored)

Chapter 9. Typical configurations

Here are two examples of NUMA-Q system configurations. The first is a typical single-quad system for running S/390 workloads effectively. The second is a two-quad system that can support multiple application environments concurrently - S/390, Linux applications, other DYNIX/ptx applications, and Windows NT (Windows 2000).

9.1 Typical system configuration for NUMA-Q EFS

[Figure: typical single-quad NUMA-Q configuration - four Pentium III Xeon processors on a 720 MB/sec system bus; 1-8 GB memory; PCI bridge feeding seven PCI I/O slots on two 264 MB/s PCI buses plus a PCI-EISA bridge; SCSI-attached bootbay with 36 GB hard disk and CD-RW; disks; MDC console]

For a VM/ESA, VSE/ESA or a small OS/390 system, this is what a typical single-quad configuration would contain:
• One quad with four Intel 700 MHz processors
• 1 GB (for VSE and possibly VM) or 2 GB (typically for OS/390) of memory
• Internal DASD - fibre channel attached
• Bootbay (containing a CD-ROM and hard disk; a bootbay is required)
• Seven PCI slots available for connection to internal and peripheral I/O devices:
• SCSI adapter for the bootbay
• Two fibre channel cards - at least one is required
• LAN card (Ethernet or Token Ring)

• One or more PCA cards (up to three channels per card)
• Integrated communications adapter (6 lines per card)
• Differential SCSI adapter for a tape drive
• Digital Linear Tape (DLT) drive
• System console (a separate PC)
• Enterprise or workgroup cabinet
• DYNIX/ptx
• NUMA-Q EFS running on one or more processors

This configuration allows a typical S/390 customer to run their existing workload with no changes to their operating environment. In addition, they have the ability to connect to:
• Token ring (one-port card) or Ethernet (four-port card) local area networks (LANs). You must install one or more LAN cards of either type, depending on your requirements. The four-port Ethernet card can be shared between the DYNIX and S/390 CPU complexes.
• A wide area network (WAN) using the FSI ICA card. You can install one or more six-port RS-232 ICA cards. Each card supports six SDLC or BSC lines in any combination.
• Normal channel-attached devices (printers, terminal controllers, communication controllers, gateways, tape drives, and so forth, but excluding DASD devices) using the FSI PCA card. A PCA card can have connections to one set of bus and tag cables, or three sets of bus and tag cables. You can install up to two PCA cards per quad (due to the length and circuitry on the card itself). This limits the maximum number of parallel channels to six per quad.
• A second fibre channel card. A second fibre channel card provides multiple paths to the internal DASD. This allows for redundancy and eliminates the fibre channel card as a single point of failure.
• A SCSI-attached DLT tape drive. A DLT tape drive will provide the backup capability for your DYNIX and FLEX-ES environments. It can also emulate a 3480 tape drive and be used for S/390 functions, as well.
• A second SCSI-attached bootbay. A second bootbay will provide redundancy and mirroring capabilities. It will allow you to boot your system if your primary bootbay is unusable.

These options all require the use of PCI slots. Five slots are available for use after taking into account the SCSI-attached bootbay and fibre channel card. Both of these are required and each takes a PCI slot.

9.2 Multiple application environment configuration

[Figure: two-quad NUMA-Q configuration - two quads of four Pentium III Xeon processors each, connected by the high-speed IQ-Link interconnect (1 GB/s); each quad has a 720 MB/sec system bus, 4 GB memory, a PCI bridge with PCI I/O slots on 264 MB/s PCI buses, a PCI-EISA bridge, and a SCSI-attached bootbay with 36 GB disk and CD-RW; fibre channel switches connect both quads to the storage subsystems; MDC console]

A multi-quad NUMA-Q EFS configuration with concurrent multiple application environment support consists of:
• Two quads with four Intel 700 MHz processors per quad, connected with IQ-Link
• 2 GB or more of memory per quad
• Two SCSI-attached bootbays (one is required)
• Internal DASD - fibre channel attached
• Seven PCI slots per quad for connection to internal and peripheral I/O devices
• SCSI cards for attaching the two bootbays - one to each quad
• Two LAN cards (Ethernet or token ring)
• Four fibre channel cards - two per quad, cross-connected to the DASD (one per quad is required)
• One or more PCA cards (up to three channels per card)
• Integrated communications adapter

• Differential SCSI adapter for an external SCSI tape drive
• Digital Linear Tape (DLT) drive
• System console
• Enterprise cabinet
• Fibre channel switches, to allow maximum disk flexibility
• DYNIX/ptx
• NUMA-Q EFS running on multiple processors

This configuration allows an S/390 customer running multiple S/390 CPU complexes to run their existing workload with no changes to their operating environment. They also have the ability to run Linux, other DYNIX applications, and, optionally, Windows NT (Windows 2000) on the same machine.

They can dedicate multiple Intel processors to their S/390 workload while assigning other Intel processors to run Linux applications and DYNIX/ptx applications. They can also choose to dedicate an entire quad to run Windows NT (Windows 2000). Windows NT (Windows 2000) is not able to participate in the memory sharing capability provided by FLEX-ES and must run in its own dedicated quad.

The same PCI options as listed for the basic configuration are available, except there are now more PCI slots available. There is no requirement to place PCI adapters in the same quad that is used to run an application (such as FLEX-ES). Having two or more quads provides much more flexibility for PCI adapter use.

In this larger environment using Fibre channel switches, external fibre channel storage devices can be used, such as IBM’s Shark system.

Chapter 10. Additional topics

This chapter discusses several additional topics that may be of interest to prospective EFS users.

10.1 External fibre channel disks

NUMA-Q is the first UNIX system to offer an integral multi-pathing switched Fibre Channel Storage Area Network (SAN). This technology provides support for very large, high-performance transaction environments. The Fibre Channel SAN allows large UNIX database machines and hundreds of front-end UNIX or Windows NT application servers to use a common switched fabric and cost-effectively share data center-class disk storage.

NUMA-Q runs I/O directly to its connected storage devices (like Enterprise Storage Server - “Shark”) over a switched Fibre Channel SAN fabric, and not over the interconnect that handles memory accesses. On the NUMA-Q systems, this eliminates the resource contention that reduces throughput in large SMP systems as processors are added. Also, because of the I/O multi-pathing that is supported at the operating system (DYNIX/ptx) level, NUMA-Q offers an inherently fault tolerant SAN.

Capacity exists for up to 128 active paths from the NUMA-Q platform to switched-fibre storage devices. Implemented in a NUMA-Q system, fibre channel switches provide the highest available storage capacity, throughput, scalability, and availability. A single-mode fibre channel enables storage subsystems and nodes in a cluster to be geographically separated by up to 10km, while providing full-bandwidth I/O.

10.2 Security

All the standard security concerns that apply to a normal S/390 operating environment also apply in an EFS environment. There are a few additional concerns that are unique to the EFS environment. These are reviewed here briefly, in the context of OS/390 use. Similar comments would apply for VM and VSE.

Underlying the S/390 operating system and applications are the NUMA-Q machine, DYNIX/ptx, and the FLEX-ES software. Physical access to the machine has the normal exposures associated with physical access to most computers, and these will not be explored here. DYNIX/ptx has the usual strengths and weaknesses associated with traditional UNIX systems. Also, by default it does not implement all elements of C2-level security1 and some of the standard filesystem types do not support ACLs.

A major security decision, when planning or installing an EFS system, is whether to allow LAN access2 to DYNIX/ptx. In general, the administrator must allow access to TCP/IP port 24 for the Terminal Solicitor. If no other LAN access is

1 Standard DYNIX/ptx tools exist to enable C2 functions, but the system administrator must invoke these and deal with any circumstances that arise. 2 This typically means Internet access, but might be restricted to an intranet. The more access is restricted, the less cause there is for LAN security concerns.

allowed to DYNIX/ptx (through telnet, ftp, nfs, rlogin, rsh, and so forth), then DYNIX/ptx exposures are minimized. If the NUMA-Q system is treated as an S/390, there should be no reason to routinely enable DYNIX/ptx TCP/IP usage (except for port 24). Conversely, if the system is used for server consolidation, with significant UNIX workload as well as S/390 workload, then the full gamut of traditional UNIX security issues must be reviewed.

It is possible to configure an EFS system in which no user 3270 connections go through the Terminal Solicitor. This is done by specifying an absolute IP address for each defined 3270 terminal. While this technique does address a security concern, the manual processes needed to make this work could be unwieldy for a larger configuration.

FLEX-ES can interoperate with other instances of FLEX-ES on other machines. This interoperation is through TCP/IP connections. If the LANs involved are exposed to the public, these interfaces are open to attack. Our initial use of the EFS system did not examine the details of these links.

The most common method for client 3270 connection involves the Terminal Solicitor. A prospective client connects his TN3270 session to the DYNIX/ptx IP address using TCP/IP port 24. (This is not a well-known port usage, and is unique to FLEX-ES for this purpose.) The client receives a screen (in 3270 protocol) listing all the emulated 3270 devices that can accept connections. No logon or passwords are required thus far. The client selects one of the listed 3270 devices. At this point he would normally receive the VTAM logo screen, and would then begin a normal TSO logon sequence, for example.

In principle, the Terminal Solicitor function is not a security problem since a user must subsequently log onto an S/390 subsystem, such as TSO or CICS. However, the Terminal Solicitor presents a convenient interface for bad guys, allowing a whole range of terminals on which to try userid/password guessing attacks or simple denial of service attacks.

10.3 Tape tricks

The NUMA-Q hardware does not include an integrated tape drive whose tape media can be readily interchanged with other conventional S/390 processors. However, NUMA-Q EFS does provide a variety of possibilities for exchanging tape data.

Consider this scenario: a VM customer has a 3480 tape cartridge created on the existing S/390 processor from which they are migrating to a NUMA-Q EFS. If the NUMA-Q EFS has no 34X0 tape drives (SCSI or channel-attached), then the customer can use CMS PIPELINES on the existing system (which has 3480 tape drives) to convert the tape to the AWSTAPE format, and then use the result on an emulated tape drive under FLEX-ES.

This sample CMS exec can read the tape and perform the conversion on the old system:

/* TAPE2AWS EXEC
   CMS EXEC to read a 3480 tape and convert it to AWSTAPE format.
   By Gary Eheman, June 22, 2000
   Modified July 10 - to crudely handle imbedded tape marks.

   The resultant file can be downloaded in binary to OS/2 or AIX on
   a P/390, R/390, Multiprise 3000 and read on an AWSTAPE emulated
   tape drive. It can also be similarly downloaded in binary to
   DYNIX/ptx on NUMA-Q and read on a FLEX-ES emulated tape drive. */
trace 'O'
'CP REWIND 181'
output_fileid = 'TAPE AWSTAPE A'
'ERASE' output_fileid
taperc = 0
nodata = 0
do until (taperc <> 0) | nodata
  'PIPE (endchar ? name TAPECONV) ',
    'TAPE ',
    '| BLOCK 65535 AWSTAPE ',
    '|a: fanout',                 /* send to all a: inputs */
    '| >> ' output_fileid ,
    '?',                          /* end of first pipe */
    'a:',
    '| take last 1',
    '| specs 1-2 1',              /* give me just first 2 bytes (size) */
    '| var psize'                 /* use last rec's length in tapemark psize field */

  taperc = rc

  if psize = 'PSIZE' then         /* No data? */
    do
      nodata = 1
      psize = '0000'x
    end

  /* Now assure that there is a tape mark in the format
     expected by AWSTAPE at the end. */

  tapemark = '0000'x||psize||'4000'x
  'PIPE var tapemark | >> ' output_fileid
end   /* do until taperc <> 0 */
exit rc
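The 6-byte tape mark that the exec appends ('0000'x||psize||'4000'x) can be reproduced byte for byte outside of CMS. This is an illustrative sketch only: the two previous-length bytes below are made-up values standing in for psize, and the file name is arbitrary.

```shell
# Build the same 6-byte AWSTAPE tape-mark header the exec writes:
# 2 bytes current length (zero), 2 bytes previous length (psize),
# 2 bytes flags (x'4000'). The psize bytes here are illustrative.
MARK=/tmp/awsmark.$$
printf '\000\000\001\002\100\000' > "$MARK"   # \100 octal = x'40'

SIZE=$(wc -c < "$MARK" | tr -d ' ')
echo "tape mark header is $SIZE bytes"
rm -f "$MARK"
```

A quick way to convince yourself the header is well formed is to check that exactly six bytes were written, as the `wc -c` line does.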

There must be sufficient space on the target CMS disk to hold the content of the converted tape in a CMS file. The content of the tape may be quite large, depending on the capacity of the media (18 track or 36 track) and whether or not tape drive compression (IDRC) was in effect when the physical tape media was written.
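The same space concern applies on the DYNIX/ptx side before the binary download. A rough pre-flight check can be scripted; the directory and the size threshold below are illustrative values, not from this redbook.

```shell
# Rough pre-flight space check before downloading a converted tape image.
# DEST_DIR and NEEDED_KB are illustrative; substitute real values.
DEST_DIR=/tmp
NEEDED_KB=1024

# df -k prints: filesystem, 1K-blocks, used, available, ...; field 4 of
# the second output line is the available space in KB on most UNIX systems.
AVAIL_KB=$(df -k "$DEST_DIR" | awk 'NR==2 {print $4}')

if [ "$AVAIL_KB" -ge "$NEEDED_KB" ]; then
    echo "sufficient space for the download"
else
    echo "insufficient space -- aborting" >&2
    exit 1
fi
```

For a full 3390-3 image (or an IDRC-compressed tape that expands on conversion), NEEDED_KB should be sized from the media capacity discussed above, with margin.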

The resultant CMS file must be downloaded in binary to DYNIX/ptx. Once downloaded, the file may then be mounted on a FLEX-ES emulated tape drive:

flexes> mount 780 /usr/flexes/tapes/mytape

The data may then be read from any S/390 operating system running on the NUMA-Q EFS.

AWSTAPE format files can also be created directly from an S/390 operating system running on an IBM P/390, R/390, S/390 Integrated Server, or Multiprise 3000, through the use of a tape drive defined as an AWSTAPE-type tape drive. From OS/2 or AIX, the files may be transferred in binary to DYNIX/ptx and read as illustrated above.

10.4 Traces

Tracing functions, at various levels, can be key debugging tools. All of the more routine trace functions associated with normal S/390 systems (such as GTF trace and the system trace in OS/390) exist in NUMA-Q EFS systems. FLEX-ES provides additional trace functions within the S/390 emulation software. The FLEX-ES traces are intended to assist IBM and FLEX-ES support personnel and are not intended for general use. However, some of the functions may be useful to experienced S/390 developers.

FLEX-ES has two categories of traces: instruction trace and emulated I/O trace. Both of these traces are always active. (You allocate storage for them with the trace() parameter3 in a FLEX-ES system definition file.) Two FLEX-ES commands, cputrsnap and devtrsnap, are used to capture entries from the traces. The captured traces are in an internal, undocumented, binary format. The FLEX-ES command dtprint can be used to format the captured traces for human use.

The cputrsnap and devtrsnap commands are issued from the flexes> prompt, while the dtprint command is issued from a DYNIX/ptx command line.

If one of the FLEX-ES modules detects an internal problem it can issue an alert. This causes an appropriate trace (instruction or device) to be captured, just as if one of the snap commands had been entered. Traces are automatically written in subdirectories of /var/adm/flexes, and have file names such as cputr.18138.0, where part of the numeric suffix is a process ID. It is also possible to set trace points in FLEX-ES code, but this is normally done only by IBM or FLEX-ES support personnel.

With one exception, the instruction trace information is not helpful to S/390 users. It is undocumented, and meaningful only to FLEX-ES developers. The cputrsnap command (to obtain an instruction trace) is normally used only at the request of support personnel, and the results are sent to the support site. (We suggest running the dtprint command against the trace file and ftp'ing the results. The initial trace file is binary and is more likely to be corrupted during handling than a displayable ASCII file.)

The exception is the histogram function, which is, effectively, a subfunction of the instruction trace. It is documented in the FLEX-ES manuals. It provides counts of every S/390 instruction executed. For example, 1268562 LA instructions were executed, 34561 MVC instructions, and so forth for every S/390 operation code. S/390 developers may find the counts interesting.

The device trace may be more useful. A user familiar with CCW-level programming can pick out channel commands and other details from the formatted trace data.

At the time of writing, both the instruction trace output and the device trace output were undocumented. We understand that some documentation is planned for the device trace output.

3 The trace parameter specifies the number of entries in the FLEX-ES trace table. We suggest using 1024 as the parameter unless you have directions or reasons for a different value.

10.4.1 Examples of trace commands

Simple examples of trace commands (for an S/390 instance named os39029) might be:

flexes> cputrsnap /var/adm/flexes/cputr.8041.0
flexes> devtrsnap 560 /var/adm/flexes/devtr.23456.0
flexes> quit
# cd /var/adm/flexes/os39029
# ls
cputr.8041.0 devtr.23456.0
# /usr/flexes/bin/dtprint cputr.8041.0 > /tmp/cpu.trace
# /usr/flexes/bin/dtprint devtr.23456.0 > /tmp/trace.560
# cd /tmp
# cat cpu.trace | more
( output )
# cat trace.560 | more
( output )

10.4.2 Disk space administration

You may accumulate cputr and devtr files in subdirectories of /var/adm/flexes. These files are not especially large, but over time a substantial amount of space could be used. You can safely delete any of the traces in these directories. In general, if you are not in the process of resolving a problem with IBM’s support organization for EFS, you should not need any of the traces in these directories.
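Housekeeping of this sort can be scripted. The sketch below rehearses the cleanup against a throwaway scratch directory rather than the real /var/adm/flexes; adjust the path (and add an age filter such as -mtime, if desired) before using it on a live system.

```shell
# Rehearse trace-file cleanup in a scratch copy of the directory layout.
# The directory and file names are illustrative.
TRACEDIR=/tmp/flexes-demo.$$
mkdir -p "$TRACEDIR/os39029"
touch "$TRACEDIR/os39029/cputr.8041.0" "$TRACEDIR/os39029/devtr.23456.0"

# Remove every instruction-trace and device-trace capture under the tree.
find "$TRACEDIR" \( -name 'cputr.*' -o -name 'devtr.*' \) -exec rm -f {} \;

LEFT=$(find "$TRACEDIR" -type f | wc -l | tr -d ' ')
echo "$LEFT trace files remain"
rm -rf "$TRACEDIR"
```

Pointing TRACEDIR at /var/adm/flexes would perform the real cleanup; do that only when no problem is being worked with IBM support, per the caution above.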

10.4.3 Using sar

The DYNIX/ptx sar command can be especially useful if S/390 emulated DASD is clearly mapped to specific hardware disks. The sar command can display the percentage busy status of each disk. A typical command might be:

sar -d 5 10

This would sample and display disk status every 5 seconds, for a total of 10 displays.

10.5 Mini-root backup

Our NUMA-Q system contained a CD-RW drive in the bootbay. DYNIX/ptx contains commands for writing CD-ROMs, and a shell script that can create a mini-root file system on a blank CD. This CD-ROM can then be used for booting. The mini-root contains the current settings from the operational root file system, such as IP addresses, and so forth.

We created a mini-root CD-ROM and subsequently booted from it. This required changing boot parameters at the NUMA-Q MDC (the console PC), but worked very well once we understood how to do this.

Chapter 11. Frequently asked questions

Q: Are all the most current zSeries instructions supported? A: No. For example, binary (IEEE) floating point is not supported. Instructions associated with the second level set of S/390 (corresponding to z/OS release 1) are not supported at the time of writing. The set of supported instructions is subject to change. You should obtain the latest information before making decisions based on any limitations in this area.

Q: I need a faster single-processor (single TCB) S/390. Can I simply transfer FLEX-ES to a faster PC? A: No. A FLEX-ES license is bound to a specified processor at a specified speed. Changing the processor speed requires a new license (and code). The FLEX-ES software automatically detects this situation.

Q: Which SCSI 3480/3490 tape drives can I use? Can I use either single-ended SCSI or differential SCSI? A: Only a differential SCSI adapter is supported. We used an IBM 7205-311 DLT drive.

Q: Can I connect a NUMA-Q parallel channel (using one of the FLEX-ES adapters) to a Pacer unit (IBM 9034) to connect to an ESCON control unit? A: No. A Pacer connects an ESCON channel to a parallel control unit. The reverse operation (a parallel channel to an ESCON control unit) is needed here.

Q: How well do the emulated printers handle forms control functions and various printer recovery and restart situations? (Other emulated printer implementations had problems in this area.) A: A typical problem in this area involves the use of preprinted, numbered forms where the software must in some way synchronize the preprinted numbers with application files. Printing payroll checks is the common example of this situation. We suggest you retain your parallel channel printers for such applications.

Q: How many parallel channels can I have on the NUMA-Q? A: This can be a complex answer. A single NUMA-Q quad has seven PCI slots and two of these can be used for parallel channel adapter (PCA) cards. A single PCA card can have either one or three channels. Therefore, two three-channel PCA cards, providing six channels, is the maximum for a quad.

However, it is possible to place additional PCA cards in other quads, if these are present. It is unlikely that additional quads would be purchased solely for providing additional PCI slots, but they might be present for other purposes, such as DYNIX/ptx applications. A NUMA-Q system can contain up to sixteen quads and, in general, two PCA cards can be placed in each quad. An extreme configuration might have something like 96 parallel channels. We are unaware of any existing system approaching this number of channels and suggest that additional verification efforts would be necessary before use.
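The arithmetic behind that extreme figure is simply quads times cards times channels, sketched here:

```shell
# Maximum parallel channels across a fully populated NUMA-Q system:
# 16 quads, 2 PCA cards per quad, 3 channels per card
# (i.e. the per-quad limit of 6 channels applied to every quad).
QUADS=16
CARDS_PER_QUAD=2
CHANNELS_PER_CARD=3
MAX_CHANNELS=$((QUADS * CARDS_PER_QUAD * CHANNELS_PER_CARD))
echo "$MAX_CHANNELS parallel channels"   # prints "96 parallel channels"
```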

S/390 channels require special memory handling within FLEX-ES and DYNIX/ptx. This is generally transparent for small numbers of channels, but may be a consideration for a large number of channels.

Q: How can I obtain the [Support Element] software console function? Some of the S/390 stand-alone programs need it and the Virtual Image Facility (which

provides one way to run Linux for S/390) needs it.
A: This is now available in the latest version of FLEX-ES. It was not available during our project activities, so we had no chance to exercise it.

Q: I liked the mini disks I could create with P/390-based systems. Why cannot the NUMA-Q EFS system provide this? A: These mini disks (which are not the same thing as VM minidisks) are not part of the traditional S/390 product set. They were introduced with P/390 machines and their use is generally restricted to this area. They have not been designed into the NUMA-Q EFS emulation functions. FLEX-ES does support non-standard-sized FBA disks. The numblocks device option can be used to specify the number of 512-byte blocks in the disk.

Q: I want to share DASD with existing S/390 mainframes running OS/390. To do this effectively, I need to have GRS connections [with the NUMA-Q EFS system]. How can I do this? A: At the time of writing, you cannot do this. There are two issues: connection to external DASD, and GRS connections. The GRS connections can be managed using external 3088 CTC control units. However, at the time of writing, the parallel channel adapters do not support DASD connections.

Q: Why do you not include MIPS numbers for the NUMA-Q EFS platform? A: MIPS can mean Marketing Indicators of Performance, among other things. We will leave these to Marketing documents. MIPS numbers are so frequently misunderstood, misused, and abused that IBM has stopped discussing them. Modern S/390 (and z/Architecture) machines can have wide ranges of MIPS results, depending on the exact workload being processed; likewise the NUMA-Q EFS implementation can have a wide range of MIPS numbers, for the same reason.

Q: A device can be marked OFFLINE in the FLEX-ES definitions. What does this mean to OS/390? A: OS/390 (and other operating systems) should see the device as present but not ready. In traditional terms, it should generate an Intervention Required message (if appropriate for the device type).

Q: I am confused by the word volume. It seems to be used in too many different ways.
A: This can be confusing. The SVM component of DYNIX/ptx uses the word volume. S/390 (and especially OS/390) uses the word volume. The word has different meanings in these two cases. An SVM volume is a “partition” or “slice” of UNIX raw disk space.1 Within an SVM volume, a user could define a normal UNIX file system. Instead of using a normal UNIX file system, FLEX-ES directly manages the raw disk space in an SVM volume and places one emulated CKD or FBA file in the SVM volume space. The emulated CKD or FBA file contains an S/390 volume.

This is complicated by an earlier FLEX-ES restriction that limited a single “file” (which is contained in the raw disk space of a single SVM volume) to 2 GB. This was not large enough for an emulated 3390-3 or 3390-9. FLEX-ES used several SVM volumes to contain several files that, together, constituted the 3390 volume. FLEX-ES manages the breaks between the multiple SVM volumes automatically,

1 You may hear the terms “SVM volume”, “ptx volume”, “logical volume”, “slice”, and “partition” used somewhat interchangeably, especially by those accustomed to other UNIX implementations.

and a certain naming convention is required for the SVM volumes that are linked to hold a single S/390 CKD volume. The latest FLEX-ES version does not have this limitation. However, documentation and comments based on earlier releases will reflect the limitation and delve into the SVM volumes versus S/390 volumes discussion.

Q: How do I correlate the OS/390 TCP/IP interface number with a NUMA-Q LAN adapter interface? A: NUMA-Q DYNIX/ptx LAN interface numbers are pe0, pe1, pe2, pe3 (for the 4-port Ethernet adapter, starting from the lowest (bottom) interface port) and tr0, tr1, tr2, and so forth (for any token ring ports). Considering only Ethernet connections, for example, FLEX-ES causes any Ethernet port assigned to S/390 to appear as LAN port 0, regardless of the actual NUMA-Q DYNIX/ptx port number. The MOUNT statement used with FLEX-ES assigns a DYNIX/ptx LAN port (for example, pe2) to an S/390 LCS device. To the LCS device, this LAN port will appear as LAN port 0. (The latest FLEX-ES prerelease we used added an option to assign the LAN port to any LAN port number you select; the default is 0.)

Q: When I stop FLEX-ES (that is, stop the resadm program), there appears to be system activity for several seconds. Should I worry about this? A: We noticed that resadm requires a number of seconds to cleanly stop all the threads it started. We suggest waiting at least 15 seconds after stopping resadm (with the resadm -T command) before trying to restart it.

Q: How can I obtain a sense of system activity? A: There are two easy ways. The NUMA-Q box has “flashing lights”, with one LED for each Pentium processor. Watching these provides a sense of system activity. Another way is to login to the main console as root, change to the /etc directory, and enter monitor -f on the command line. This provides a system activity monitor (bar chart) for each processor and is updated about once per second. Press the f key to flip to another page with a variety of system statistics; these are also updated once per second.2 The q key can be used to exit from the monitor program.

Q: We have systems with LAN connections to the public Internet (at an assigned IP address) and to private LANs (using unassigned IP addresses, and not using the special addresses in the 10 and 192 ranges). We cannot permit IP forwarding between these LANs. How do we disable IP forwarding in DYNIX/ptx? A: IP forwarding is a kernel parameter in DYNIX/ptx, and is off by default. You need to build a new kernel to change it. This process is described in the DYNIX/ptx System Administration Guide.

Q: Can I stop DYNIX/ptx and boot Linux? A: No. As far as we know, there is no Linux available for native use on a NUMA-Q machine. However, you can execute binaries compiled for Intel-based Linux systems under DYNIX/ptx. That is, DYNIX/ptx provides Linux-compatible kernel and C runtime library services.

Q: I want to use shared DASD between multiple instances of OS/390 systems on a NUMA-Q machine. Will this work? A: FLEX-ES provides RESERVE/RELEASE emulation, and provides emulated

2 We noticed that some of the statistics were incorrect on a lightly loaded system. We suspect the monitor program is sampling control blocks that are not being updated in the case of a lightly loaded system.

CTC connections that can be used for a GRS ring. These are not sufficient for successful shared DASD in an OS/390 environment. There is an additional requirement that individual records in the same track of the VTOC can be asynchronously updated (without RESERVEs or ENQs) without corruption. This appears to work correctly. We ran batch jobs on two instances of OS/390 that involved heavy allocation and unallocation of temporary data sets on a work volume. These completed without error and there was no sign of VTOC corruption.

Q: Can I bypass the Terminal Solicitor function for connecting client 3270 sessions?
A: Yes, you can bypass it by specifying a specific IP address for a client in the FLEX-ES CLI mount command issued when you start up an instance of S/390, or in the resources file.

Q: If I specify specific IP addresses for client TN3270 connections (instead of using the Terminal Solicitor) can I control which 3270 session (on the client) is associated with the IP address? (A client, at a given IP address, might have multiple TN3270 sessions.) A: The first client session (at the specified client IP address) will be used.

Q: Can I specify the same client IP address for multiple 3270 terminal devices? How can I force multiple specific 3270 devices to connect only to a single client station running multiple TN3270 sessions? A: It should be possible to specify the same IP address for multiple 3270 terminal devices. (The addresses are specified in the resources definition or in the mount commands.) We made a single, brief attempt at this and were unable to make it work. However, later tests by FSI demonstrated that it does work.

Q: Can I use the LUname technique to associate a TN3270E session with a particular S/390 device address? (This technique is used by other IBM products that transform TN3270 sessions to appear as local 3270 terminals.) A: This was not supported in our early code. However, it is supported in the latest versions of FLEX-ES.

Q: Our client PCs use DHCP to obtain their IP addresses. In this situation, how can I force a particular emulated 3270 address (such as the MVS master console) to a specific client PC? A: This is not supported at this time. You will need to assign a permanent IP address to the PC you want to use for the master console.

Q: My telnet sessions to DYNIX/ptx will not accept common vi control sequences. Why?
A: The first step is to specify a terminal mode, such as VT100, for your emulator session. (This solved the problems we encountered.) If this is not sufficient, you might try changing the telnet character definition. By default, telnetd runs in 7-bit mode. You can change this by editing /etc/inetd.conf and changing the telnet line to:

   telnet stream tcp nowait root /usr/etc/telnetd telnetd -b

This line differs from the default line in that it specifies streams and adds the -b flag. The -b flag indicates binary (8-bit) mode.
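The /etc/inetd.conf change can be rehearsed safely on a scratch copy before touching the live file. This shell sketch (the /tmp paths are illustrative, not the real configuration) shows the substitution:

```shell
# Work on a scratch copy, not the live /etc/inetd.conf.
printf 'telnet stream tcp nowait root /usr/etc/telnetd telnetd\n' > /tmp/inetd.conf.demo
# Append the -b (binary, 8-bit) flag to the telnet service line.
sed 's/telnetd$/telnetd -b/' /tmp/inetd.conf.demo > /tmp/inetd.conf.new
cat /tmp/inetd.conf.new
```

After editing the real file, inetd must be told to reread its configuration (typically by sending it a HUP signal).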

Q: Can I share channel-attached devices (such as a group of tape drives) between multiple instances of FLEX-ES running, for example, a test VSE and a production VSE?
A: No. You cannot share a parallel channel between multiple S/390s (that is, between multiple instances of FLEX-ES). If you have multiple channel interfaces on the tape control units, you could assign one parallel channel to each instance of VSE.

Q: The NUMA-Q system has a separate console “box” that is a PC running Windows/NT. Can I install other applications on this PC (such as PCOM)?
A: No. The console PC has been carefully configured for the NUMA-Q console application. Any additional program that changes this configuration may interfere with the console application.

Q: When I telnet to the NUMA-Q address, using port 24 to access the Terminal Solicitor, an error message is flashed on my telnet screen and nothing more. Why?
A: Assuming the NUMA-Q EFS functions are started correctly, the Terminal Solicitor is a 3270 application. You must connect using TN3270 (or TN3270E), not basic telnet. (It seems that everyone who uses the system makes this initial mistake.)

Q: Do I need an IOCP/IOCDS?
A: No. The FLEX-ES system and resource statements perform the equivalent functions.

Q: When I connect a parallel channel control unit, such as a local 3174, should the control unit detect an on-line condition?
A: We found that our local, non-SNA, parallel channel 3174 did not detect an on-line condition until the resource manager was started (using a resource file that defined the parallel channel and control unit) and an operating system performed its first contact with the control unit.

Q: We plan to add a SCSI tape drive. Can I define it in my FLEX-ES resource files before we actually have the drive?
A: We had problems stopping the resource manager when we defined a SCSI tape drive that did not exist. We also had similar problems when we accidentally defined two SCSI tape drives with the same SCSI address. We suggest that you define only SCSI drives that are connected to the system, and that you check your definitions carefully.

Q: We are debating mirroring and striping. Mirroring helps performance and reliability. Striping should help performance, but what impact does it have on reliability?
A: If a drive involved in striping fails, then all the drives involved with the stripes are unusable.3 If a mirror exists, then the mirrored drives will continue to operate. In a sense, striping reduces the reliability of the total disk resources.
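The trade-off can be illustrated with a toy model. This is a sketch under strong assumptions (independent drive failures at a made-up annual rate; real drives are not independent), not a claim about any particular hardware:

```python
# Toy reliability model: a stripe set is lost if ANY member drive fails,
# while a mirrored stripe is lost only if BOTH copies fail.
def stripe_failure_prob(p: float, n_drives: int) -> float:
    """Probability the stripe set is lost, given per-drive failure prob p."""
    return 1 - (1 - p) ** n_drives

def mirrored_stripe_failure_prob(p: float, n_drives: int) -> float:
    """Probability both mirrored copies of the stripe set are lost."""
    return stripe_failure_prob(p, n_drives) ** 2

p = 0.03  # illustrative 3% annual failure rate per drive
print(stripe_failure_prob(p, 1))           # single drive: 0.03
print(stripe_failure_prob(p, 4))           # 4-way stripe is riskier: ~0.115
print(mirrored_stripe_failure_prob(p, 4))  # mirroring recovers it: ~0.013
```

The numbers only illustrate the direction of the effect: striping multiplies exposure to single-drive failures, and mirroring compensates.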

Q: Is disk striping, using SVM, advisable? Our NUMA-Q and UNIX people say “yes” and our S/390 people have no comment.
A: A large NUMA-Q system is capable of very high disk I/O bandwidths, and striping can definitely add to the potential bandwidth. Traditional NUMA-Q systems are typically found in large environments. A typical NUMA-Q EFS system is a relatively small NUMA-Q system. Furthermore, S/390 NUMA-Q EFS processing is usually limited by CPU cycles on the Pentiums, not by disk I/O bandwidth. That is, typical NUMA-Q EFS usage does not require enhanced disk I/O performance. For this reason, and because striping potentially reduces disk redundancy, we suggest that striping is not necessary in the normal NUMA-Q EFS environment.4

3 This answer applies to the software mirroring and striping offered by the DYNIX/ptx SVM. Other implementations may work differently.

Q: I am a little confused about the resadm command and the FLEX-ES resource manager. Aren’t these the same thing?
A: Not quite. In a sense, the resource manager is like a daemon, and the resadm command/program communicates with the daemon.

Q: If I use a PCA (Parallel Channel Adapter), I need to reserve contiguous memory for it. This is limited to slightly less than 512 MB. If I want to define an S/390 larger than 512 MB, can I simply not define the PCA for that S/390 instance?
A: Yes. If you have a PCA, you can elect not to define it in your FLEX-ES system definitions. In this case you can define an S/390 instance larger than 512 MB.

Q: What is the largest memory S/390 I can define, assuming I do not use a PCA? How much S/390 expanded storage can I define?
A: We did not find a firm limit during our ITSO residency. In principle, you should be able to define 2 GB of main storage. However, the early, prerelease code we used gave us problems defining more than 1.0 GB of main storage, and we did not explore the use of larger storage. We should note that the NUMA-Q EFS implementation assumes there will be little DYNIX/ptx paging of the NUMA-Q memory used for NUMA-Q EFS. In other words, you should not define S/390 storage as large as the physical storage on the NUMA-Q hardware.

The largest expanded storage size that can be defined is approximately (3 GB) minus (your defined main storage size * 1.03) minus (overhead for other FLEX-ES data structures). The result might range from 0.5 GB to about 2.5 GB.
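Applying this rule of thumb numerically (the 1.03 factor and 3 GB ceiling are from the text; the overhead term is an assumed placeholder value):

```python
GB = 1024  # MB per GB

def max_expanded_storage_mb(main_storage_mb: float, overhead_mb: float = 64) -> float:
    """Approximate largest definable expanded storage, per the quoted rule:
    3 GB - (main storage * 1.03) - FLEX-ES data structure overhead.
    overhead_mb is an illustrative assumption, not a documented value."""
    return 3 * GB - main_storage_mb * 1.03 - overhead_mb

print(max_expanded_storage_mb(512))   # ~2481 MB with a 512 MB main storage
print(max_expanded_storage_mb(2048))  # ~899 MB with a 2 GB main storage
```

Both results fall in the 0.5 GB to 2.5 GB range the text describes.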

Q: What is the largest blocksize I can write to a FakeTape device? Some S/390 software now writes blocks larger than the traditional 32 KB maximum.
A: FakeTape supports blocksizes up to 64 KB - 1 (65535 bytes).

Q: Your examples sometimes show DYNIX/ptx executables in the /etc directory. Is this typical?
A: Yes, DYNIX/ptx (and FLEX-ES) place executables in the /etc directory. This is typically not done with current UNIX implementations, but was sometimes done in earlier implementations. There is nothing wrong with it, but it can be a little surprising.

Q: The device trace function sounds interesting. Can it trace channel commands on a parallel channel?
A: Yes, the same tracing exists for parallel channels as for internal emulated devices. When invoked for channel devices, it captures all activity on that channel, rather than just single-device activity.

Q: As DYNIX/ptx is starting I see several lines on the system console about starting VM. What is this? Is there a version of VM hidden in the NUMA-Q machine or console?
A: No. The VM shown on the console has nothing to do with S/390 VM and is not related to NUMA-Q EFS functions or operation.

4 The disk setup examples in this redbook show striping because that is what we used during our project. Our after-the-fact conclusion was that striping via SVM was not necessary.

Q: We normally use X windows to talk with our traditional UNIX systems. Does it work with DYNIX/ptx? You do not mention it in the redbook.
A: Yes, it works and we used it briefly. We used a number of PCs for client machines, and these did not have X windows servers installed. This is the only reason we did not use it much.

Q: You briefly mentioned a QIC tape drive. Can it be used to emulate a 3480?
A: We did not try. We understand that production versions of NUMA-Q EFS systems will not have the QIC drive.

Q: You mention the main console for FLEX-ES, but you never show its usage. Why?
A: This is a virtual console, and it never directly appears in any window. You can send commands to it from a flexes> prompt.

Q: It appears that the number of PCI adapter slots in a typical NUMA-Q EFS system might be a limiting factor. Comments?
A: You are correct. After using the required slots for the MDC console, the bootbay, and one Fibre Channel disk connection, there are only five slots left. You should have two Fibre Channel disk connections, leaving only four PCI slots for other uses. These need to be balanced among Parallel Channel Adapters, SCSI adapters, WAN adapters, and LAN adapters. Adding another quad would provide another seven usable PCI slots.

Q: I am attaching a parallel channel control unit to my NUMA-Q EFS system. Must I define all the physical devices on the control unit if I want to use only one of them?
A: Yes, you must define all the physical devices on the control unit in your resources definition. Failure to do this may produce timeouts if an undefined device generates an interrupt.

Appendix A. Configuration file listings

The various FLEX-ES system and resource definitions we used during our NUMA-Q EFS project are listed in this appendix. While none of the listings may match your exact requirements, we suggest that one (or more) might provide a good starting point for you.

A.1 FLEX-ES configuration for OS/390 - single instance

We used the following FLEX-ES OS/390 I/O configuration for a single instance of OS/390 V2R9, installed from an AD CD-ROM distribution:

#------------------------------------------------------------------
# FLEX-ES OS/390 I/O configuration - single instance environment
# contains both system and RESADM resources
#------------------------------------------------------------------

system os39029: # system resources

# sample memory size specifications
#   65536 = 64M
#  131072 = 128M
#  262144 = 256M
#  524288 = 512M
# 1048576 = 1G

memsize(503808)   # K of core memory (492M = 512M - 20M)
essize(1024)      # M of expanded storage (1G)
cachesize(1024)
instset(esa)
tracesize(256)

cpu(0) dedicated
cpu(1) dedicated
cpu(2) dedicated

channel(0) localbyte
channel(1) local
channel(2) blockmux oschpbt0   # pca card to tape drives

cu devad(0x00C,3) path(1) resource(os2821)    # card rdr,pun and prt
cu devad(0x560,2) path(1) resource(os3480)    # tapes
cu devad(0x700,32) path(0) resource(os3274)   # terminals
cu devad(0x900,16) path(2) unitadd(0x00) interlocked   # terminals - real
cu devad(0xA80,16) path(1) resource(osdasd)   # 10*3390-3 6*3390-1
cu devad(0xE20,2) path(1) resource(os3172)    # os/390's r/w pair
cu devad(0xE22,1) path(1) resource(osctc)     # ctc

end os39029 # end of system resources

resources s390: # RESADM resources

os39029: memory 512   # 512M central
end os39029

oschpbt0: blockmux /dev/chpbt/ch0
end oschpbt0

os2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end os2821

os3480: cu 3480
interface local(1)
device(01) 3480 OFFLINE   # device name or filename to be filled in via CLI
device(02) 3480 OFFLINE   # device name or filename to be filled in via CLI
end os3480

os3274: cu 3274
interface local(1)
device(00) 3278 osmstcon   # os/390 master console
device(01) 3278 osaltcon   # os/390 alt console
device(02) 3278 osterm1    # vtam term1
device(03) 3278 osterm2    # vtam term2
device(04) 3278 osterm3    # vtam term3
device(05) 3278 osterm4    # vtam ...
device(06) 3278 osterm5    # etc. ...
device(07) 3278 osterm6
device(08) 3278 osterm7
device(09) 3278 osterm8
device(10) 3278 osterm9
device(11) 3278 osterm10
device(12) 3278 osterm11
device(13) 3278 osterm12
device(14) 3278 osterm13
device(15) 3278 osterm14
device(16) 3278 osterm15
device(17) 3278 osterm16
device(18) 3278 osterm17
device(19) 3278 osterm18
device(20) 3278 osterm19
device(21) 3278 osterm20
device(22) 3278 osterm21
device(23) 3278 osterm22
device(24) 3278 osterm23
device(25) 3278 osterm24
device(26) 3278 osterm25
device(27) 3278 osterm26
device(28) 3278 osterm27
device(29) 3278 osterm28
device(30) 3278 osterm29
device(31) 3278 osterm30
end os3274

osdasd: cu 3990
interface local(1)
device(00) 3390-3 /dev/vx/rdsk/s390dg/2133903s1   # OS39R9-sysres
device(01) 3390-3 /dev/vx/rdsk/s390dg/2233903s1   # OS3R9A-sysres ext
device(02) 3390-3 /dev/vx/rdsk/s390dg/2333903s1   # OS39M1-paging,parm etc.
device(03) 3390-3 OFFLINE
device(04) 3390-3 OFFLINE
device(05) 3390-3 OFFLINE
device(06) 3390-3 OFFLINE
device(07) 3390-3 /dev/vx/rdsk/s390dg/2433903s1   # OS39H1-hfs
device(08) 3390-3 /dev/vx/rdsk/s390dg/2533903s1   # SHR001-common
device(09) 3390-3 OFFLINE
device(10) 3390-1 /dev/vx/rdsk/s390dg/2633901s1   # WORK01
device(11) 3390-1 /dev/vx/rdsk/s390dg/2733901s1   # WORK02
device(12) 3390-1 /dev/vx/rdsk/s390dg/2833901s1   # WORK03
device(13) 3390-1 OFFLINE
device(14) 3390-1 OFFLINE
device(15) 3390-1 OFFLINE
end osdasd

os3172: cu 3172
interface local(1)
device(00) 3172 /dev/net/pe2   # third port on ethernet card
device(01) 3172 OFFLINE
end os3172

osctc: cu ctc
interface local(2)
device(00) ctc
end osctc

end s390 # end of RESADM resources
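The memsize() values in these listings are in kilobytes, per the sample table at the top of each system definition. A quick sanity check of the conversions (a hypothetical helper for illustration, not part of FLEX-ES):

```python
# Hypothetical helper: convert megabytes to the KB value memsize() expects.
def memsize_kb(megabytes: int) -> int:
    return megabytes * 1024

# Values match the sample table in the listings:
assert memsize_kb(64) == 65536
assert memsize_kb(128) == 131072
assert memsize_kb(512) == 524288
print(memsize_kb(492))  # 503808 K, the value used in A.1 (512M - 20M)
```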

A.2 FLEX-ES configuration for OS/390 - multiple instances

The following FLEX-ES system configuration was used with multiple FLEX-ES instances of OS/390. In this case, one instance was the AD CD-ROM OS/390 V2R9 release and the other was an old AD CD-ROM OS/390 V1R1 preconfigured system. Only the system definitions are listed. The resource definition can be easily extrapolated from the other OS/390 resource definitions.

A.2.0.1 System definition for OS/390 V2R9 instance

system os39029:

# sample memory size specifications
#   65536 = 64M
#  131072 = 128M
#  262144 = 256M
#  524288 = 512M
# 1048576 = 1G

memsize(153600)   # 150M
essize(1024)      # M of expanded storage (1G)
cachesize(1024)
instset(esa)
tracesize(256)

cpu(0)
cpu(1)
cpu(3)

channel(0) localbyte
channel(1) local
channel(2) blockmux oschpbt0   # pca card to tape drives

cu devad(0x00C,3) path(1) resource(os2821)    # card rdr,pun and prt
cu devad(0x560,2) path(1) resource(os3480)    # tapes
cu devad(0x900,16) path(2) unitadd(0x00) interlocked   # terminals - real
cu devad(0x700,32) path(0) resource(os3274)   # terminals
cu devad(0xA80,16) path(1) resource(os3390)   # 10*3390-3 6*3390-1
cu devad(0xE20,2) path(1) resource(os3172)    # os/390's r/w pair
cu devad(0x120,1) path(1) resource(os3380)    # 1*3380-K
cu devad(0xE22,1) path(1) resource(osctc)     # ctc

end os39029

A.2.0.2 System definition for OS/390 V1R1 instance

system os39011:

# sample memory size specifications
#   65536 = 64M
#  131072 = 128M
#  262144 = 256M
#  524288 = 512M
# 1048576 = 1G

memsize(98304)   # 96M
essize(512)      # M of expanded storage (512M)
cachesize(1024)
instset(esa)
tracesize(256)

cpu(0)
cpu(1)
cpu(3)

channel(0) localbyte
channel(1) local

cu devad(0x00C,3) path(1) resource(os12821)    # card rdr,pun and prt
cu devad(0x700,16) path(0) resource(os13274)   # terminals
cu devad(0xA80,16) path(1) resource(os3390)    # 10*3390-3 6*3390-1
cu devad(0x120,1) path(1) resource(os3380)     # 1*3380-K

end os39011

A.3 FLEX-ES configuration for VM/ESA

# /usr/open370/rundir/sysconf.vm
system vm:

# sample memory size specifications
#   65536 = 64M
#  131072 = 128M
#  262144 = 256M
#  524288 = 512M
# 1048576 = 1G

memsize(262144)    # K of core memory (memory minus 32M for pca card)
essize(1048576)    # K of expanded storage
cachesize(1024)
instset(esa)
tracesize(256)
feature lpar       # turn on lpar bit in scpinfo so VM does not use active wait

cpu(0)             # 3 undedicated cpus
cpu(1)
cpu(2)

channel(0) localbyte
channel(1) local
channel(2) blockmux chpbt0   # pca card to tape drives

cu devad(0x00C,3) path(1) resource(cu2821)   # card rdr,pun and prt
cu devad(0x020,8) path(0) resource(cu3274)
cu devad(0x100,5) path(1) resource(vmdasd)   # 3390-3
# cu devad(0x170,2) path(2) unitadd(0x70) streaming (45)   # real tape drives
cu devad(0x181,1) path(1) resource(cu3480)   # FakeTape
cu devad(0x600,1) path(1) resource(cuctca)   # CTC device
cu devad(0x800,2) path(1) resource(vm3172)   # vm r/w pair

end vm

resources vmnuma:

vm: memory 264   # 256M central (256+3%)
end vm

cu2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end cu2821

cu3274: cu 3274
interface local(1)
device(00) 3278 OFFLINE   # device name filled in by mount command
device(01) 3278 OFFLINE   # device name filled in by mount command
device(02) 3278 OFFLINE   # device name filled in by mount command
device(03) 3278 OFFLINE   # device name filled in by mount command
device(04) 3278 OFFLINE   # device name filled in by mount command
device(05) 3278 OFFLINE   # device name filled in by mount command
device(06) 3278 OFFLINE   # device name filled in by mount command
device(07) 3278 OFFLINE   # device name filled in by mount command
end cu3274

chpbt0: blockmux /dev/chpbt/ch0
end chpbt0

cu3172: cu 3172
interface local(1)
device(00) 3172 /dev/net/pe1   # second port on ethernet card
# device(00) 3172 OFFLINE      # token ring card (if dynix not using)
device(01) 3172 OFFLINE
end cu3172

vm3172: cu 3172
interface local(1)

device(00) 3172 /dev/net/pe3   # fourth port on ethernet card
device(01) 3172 OFFLINE
end vm3172

cu3480: cu 3480
interface local(1)
device(00) 3480 OFFLINE   # device name or filename to be filled in via CLI
end cu3480

cuctca: cu ctc
interface local(2)
device(00) ctca OFFLINE   # device name or filename to be filled in via CLI
end cuctca

vmdasd: cu 3990
interface local(1)
device(00) 3390-3 /dev/vx/rdsk/s390dg/0133903s1
device(01) 3390-3 /dev/vx/rdsk/s390dg/0233903s1
device(02) 3390-3 /dev/vx/rdsk/s390dg/0333903s1
device(03) 3390-3 /dev/vx/rdsk/s390dg/0433903s1
device(04) 3390-3 /dev/vx/rdsk/s390dg/0533903s1
end vmdasd

end vmnuma

A.4 FLEX-ES configuration for VSE/ESA - CKD format

system vseckd:

# sample memory size specifications
#   65536 = 64M
#  131072 = 128M
#  262144 = 256M
#  524288 = 512M
# 1048576 = 1G

memsize(225280)     # K of core memory (220M memory minus 32M for pca card)
# essize(1048576)   # K of expanded storage
cachesize(1024)
instset(esa)
tracesize(256)

cpu(0) dedicated
cpu(1) dedicated

channel(0) localbyte
channel(1) local
#channel(2) blockmux vsechpbt0   # pca card to tape drives

cu devad(0x00C,3) path(1) resource(vsec2821)     # card rdr,pun and prt
cu devad(0x01F,1) path(1) resource(vseccons)     # VSE system console
#cu devad(0x038,6) path(1) resource(vsec2701d)   # ICA w/ 6 lines
cu devad(0x150,2) path(1) resource(vsecdasd)     # ckd dasd for vse
# cu devad(0x170,2) path(2) unitadd(0x70) streaming (45)   # real tape drives
cu devad(0x200,7) path(1) resource(vsec3274)     # nSNA terminals
cu devad(0x300,2) path(1) resource(vsec3172tr)   # vse's r/w pair SNA over T-R
cu devad(0x400,1) path(1) resource(vsecctc)      # CTC to VSEFBA & VM

cu devad(0x500,1) path(1) resource(vsec3480ft)     # FakeTape (AWSTAPE)
cu devad(0x590,3) path(1) resource(vsec3490scsi)   # SCSI tape

end vseckd

resources vsecres:

vseckd: memory 512   # 512M total central storage
end vseckd

# Emulated 2821 (unit record) control unit
vsec2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end vsec2821

# FSI ICA
#vsec2701d: cu 2701d
# options /dev/fsiica/ica0p0:/usr/open370/bin/pica960.img
# interface local(1)
# SDLC dialup line (default), nrz (default)

#device(00) rsdlc /dev/fsiica/ica0p0   # device name filled in by mount command
#device(01) rsdlc /dev/fsiica/ica0p1   # device name filled in by mount command
#device(02) rsdlc /dev/fsiica/ica0p2   # device name filled in by mount command
#device(03) rsdlc /dev/fsiica/ica0p3   # device name filled in by mount command
#device(04) rsdlc /dev/fsiica/ica0p4   # device name filled in by mount command
#device(05) rsdlc /dev/fsiica/ica0p5   # device name filled in by mount command

#end vsec2701d

# 3390-3 DASD
vsecdasd: cu 3990
interface local(1)
# writethroughcache
device(00) 3390-3 /dev/vx/rdsk/s390dg/4133903s1   # dosres
device(01) 3390-3 /dev/vx/rdsk/s390dg/4233903s1   # syswk1
# device(02) 3390-3 /dev/vx/rdsk/s390dg/4333903s1   # user01
# device(03) 3390-3 /dev/vx/rdsk/s390dg/4433903s1   # user02
end vsecdasd

vseccons: cu 3274
interface local(1)
device(00) 3278 OFFLINE
end vseccons

vsec3274: cu 3274
interface local(1)
device(00) 3278 OFFLINE   # device name filled in by mount command
device(01) 3278 OFFLINE   # device name filled in by mount command
device(02) 3278 OFFLINE   # device name filled in by mount command
device(03) 3278 OFFLINE   # device name filled in by mount command
device(04) 3278 OFFLINE   # device name filled in by mount command
device(05) 3278 OFFLINE   # device name filled in by mount command
device(06) 3278 OFFLINE   # device name filled in by mount command
end vsec3274

# SNA over T-R
vsec3172tr: cu 3172TR
interface local(1)
# device(00) 3172 /dev/net/pe1   # second port on ethernet card
device(00) 3172 /dev/net/tr0   # token ring card (was OFFLINE)
device(01) 3172 OFFLINE
end vsec3172tr

vsecctc: cu ctc
interface local(2)
device(00) ctc   # ctc to vsefba
end vsecctc

vsec3480ft: cu 3480
interface local(1)
device(00) 3480 OFFLINE
end vsec3480ft

vsec3490scsi: cu 3490
interface local(1)
device(00) 3490 /dev/scsibus/scsibus0c devopt 'scsitarget=3'
device(01) 3490 /dev/scsibus/scsibus0c devopt 'scsitarget=4'
device(02) 3490-E OFFLINE
end vsec3490scsi

end vsecres

A.5 FLEX-ES configuration for VSE/ESA - FBA format

system vsefba:

# sample memory size specifications
#   65536 = 64M
#  131072 = 128M
#  262144 = 256M
#  524288 = 512M
# 1048576 = 1G

memsize(225280)     # K of core memory (220M memory minus 32M for pca card)
# essize(1048576)   # K of expanded storage
cachesize(1024)
instset(esa)
tracesize(256)

cpu(2) dedicated

channel(0) localbyte
channel(1) local
#channel(2) blockmux chpbt0   # pca card to tape drives

cu devad(0x00C,3) path(1) resource(vsef2821)     # card rdr,pun and prt
cu devad(0x01F,1) path(1) resource(vsefcons)     # VSE system console
#cu devad(0x038,6) path(1) resource(vsef2701d)   # ICA w/ 6 lines
# cu devad(0x150,4) path(1) resource(vsecdasd)   # ckd dasd for vse
cu devad(0x140,3) path(1) resource(vsefdasd)     # fba dasd for vse
# cu devad(0x170,2) path(2) unitadd(0x70) streaming (45)   # real tape drives
cu devad(0x200,7) path(1) resource(vsef3274)     # nSNA terminals
cu devad(0x300,2) path(1) resource(vsef3172tr)   # vse's r/w pair SNA over T-R
cu devad(0x400,1) path(1) resource(vsefctc)      # CTC to VSEFBA & VM
cu devad(0x500,1) path(1) resource(vsef3480ft)   # FakeTape (AWSTAPE)
cu devad(0x590,3) path(1) resource(vsef3490scsi) # SCSI tape

end vsefba

resources vsefres:

vsefba: memory 512   # 512M total central storage
end vsefba

# Emulated 2821 (unit record) control unit
vsef2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end vsef2821

vsefdasd: cu 6310
interface local(1)
device(00) 9336-20 /dev/vx/rdsk/s390dg/5193362s1   # dosres
device(01) 9336-20 /dev/vx/rdsk/s390dg/5293362s1   # syswk1
device(02) 9336-20 /dev/vx/rdsk/s390dg/5393362s1   # syswk2
# device(01) 9336-20 OFFLINE
end vsefdasd

vsefcons: cu 3274
interface local(1)
device(00) 3278 OFFLINE
end vsefcons

vsef3274: cu 3274
interface local(1)
device(00) 3278 OFFLINE   # device name filled in by mount command
device(01) 3278 @9.12.2.103
device(02) 3278 OFFLINE   # device name filled in by mount command
device(03) 3278 OFFLINE   # device name filled in by mount command
device(04) 3278 OFFLINE   # device name filled in by mount command
device(05) 3278 OFFLINE   # device name filled in by mount command
device(06) 3278 OFFLINE   # device name filled in by mount command
end vsef3274

# SNA over T-R
vsef3172tr: cu 3172TR
interface local(1)
# device(00) 3172 /dev/net/pe1   # second port on ethernet card
device(00) 3172 /dev/net/tr0   # token ring card (was OFFLINE)
device(01) 3172 OFFLINE
end vsef3172tr

vsefctc: cu ctc
interface local(2)
device(00) ctc   # ctc to vsefba
end vsefctc

vsef3480ft: cu 3480
interface local(1)
device(00) 3480 /var/temp/vseft1   # device name or filename to be filled in via CLI
end vsef3480ft

end vsefres

A.6 FLEX-ES configuration for combined systems

#--------------------------------------------------------------------+
# SG24-6215 NUMA-Q FLEX-ES combined resource configuration file for  !
# VM/ESA, OS/390 and VSE/ESA (VSECKD, VSEFBA, VSETCP)                !
#--------------------------------------------------------------------+

resources combined:

os39029: memory 512   # (VM=128, MVS=256, VSEC=64, VSEF=64)
end os39029

vm2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end vm2821

vm3274: cu 3274
interface local(1)
device(00 - 15) 3278 OFFLINE   # device name filled in by mount command
end vm3274

vm3172: cu 3172
interface local(1)
device(00) 3172 /dev/net/pe1   # second port on ethernet card
device(01) 3172 OFFLINE
end vm3172

vm3420: cu 3803
interface local(1)
device(00) 3420 OFFLINE   # device name or filename to be filled in via CLI
end vm3420

vm3480: cu 3480
interface local(1)
device(00) 3480 OFFLINE   # device name or filename to be filled in via CLI
end vm3480

vmctca1: cu ctc
interface local(2)
device(00) ctc
end vmctca1

vmctca2: cu ctc
interface local(2)
device(00) ctc
end vmctca2

vmdasd: cu 3990
options 'writethroughcache'   # <--- write to cache *AND* to dasd
interface local(1)
# Override default trackcachesize (1 cyl) on 240RES and 250RES
device(00) 3390-3 /dev/vx/rdsk/s390dg/0133903s1 devopt 'trackcachesize=1500'
device(01) 3390-3 /dev/vx/rdsk/s390dg/0233903s1   # 240W01
device(02) 3390-3 /dev/vx/rdsk/s390dg/0333903s1   # spare
device(03) 3390-3 /dev/vx/rdsk/s390dg/0433903s1 devopt 'trackcachesize=1500'
device(04) 3390-3 /dev/vx/rdsk/s390dg/0533903s1   # 250W01 (z/VM)
end vmdasd

vmdasd2: cu 3990
interface local(1)
device(00) 3390-3 /dev/vx/rdsk/s390dg/0633903s1   # 240res (VM+)
device(01) 3390-3 /dev/vx/rdsk/s390dg/0733903s1   # 240w01 (VM+)
device(02) 3390-3 /dev/vx/rdsk/s390dg/0833903s1   # 240w02 (VM+)
device(03) 3390-3 /dev/vx/rdsk/s390dg/0933903s1   # 240w03 (VM+)
device(04) 3390-3 /dev/vx/rdsk/s390dg/1033903s1   # 240spl (VM+)
end vmdasd2

vmdasd3: cu 3990
interface local(1)
device(00) 3390-2 /dev/vx/rdsk/s390dg/1133903s1   # spare model 1
end vmdasd3

vmdasd4: cu 3990
interface local(1)
device(00) 3390-1 /dev/vx/rdsk/s390dg/1133903s3   # spare model 1
end vmdasd4

#--------------------------------------------------------------------+
# End of VM/ESA resources                                            !
# Start of OS/390 resources                                          !
#--------------------------------------------------------------------+

oschpbt0: blockmux /dev/chpbt/ch0
end oschpbt0

os2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end os2821

os12821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end os12821

os3480: cu 3480
interface local(1)
# device(00) 3480 /dev/scsibus/scsibus0c devopt 'scsitarget=0'
device(00) 3480 OFFLINE
device(01) 3480 OFFLINE   # device name or filename to be filled in via CLI
end os3480

os3274: cu 3274
interface local(1)
device(00 - 32) 3278 OFFLINE   # device name filled in by mount command
end os3274

os13274: cu 3274
interface local(1)
device(00 - 16) 3278 OFFLINE   # CLI mount command
end os13274

os3390: cu 3990
interface local(2)
device(00) 3390-3 /dev/vx/rdsk/s390dg/2133903s1   # os39r9-sysres
device(01) 3390-3 /dev/vx/rdsk/s390dg/2233903s1   # os3r9a-sysres ext
device(02) 3390-3 /dev/vx/rdsk/s390dg/2333903s1   # os39m1-paging,parm etc.
device(03) 3390-3 OFFLINE
device(04) 3390-3 OFFLINE
device(05) 3390-3 OFFLINE
device(06) 3390-3 OFFLINE
device(07) 3390-3 /dev/vx/rdsk/s390dg/2433903s1   # os39h1-HFS
device(08) 3390-3 OFFLINE
device(09) 3390-3 OFFLINE
device(10) 3390-1 /dev/vx/rdsk/s390dg/2633901s1   # work01
device(11) 3390-1 /dev/vx/rdsk/s390dg/2733901s1   # work02
device(12) 3390-1 /dev/vx/rdsk/s390dg/2833901s1   # work03
device(13) 3390-1 OFFLINE
device(14) 3390-1 OFFLINE
device(15) 3390-1 OFFLINE
end os3390

os3380: cu 3880
interface local(2)
device(00) 3380-K /dev/vx/rdsk/s390dg/2533903s1   # OS390R-sysres (1pack system R1.1)
end os3380

os3172: cu 3172
interface local(1)
device(00) 3172 /dev/net/pe2   # third port on ethernet card
device(01) 3172 OFFLINE
end os3172

osctc: cu ctc
interface local(2)
device(00) ctc
end osctc

#--------------------------------------------------------------------+
# End of OS/390 resources                                            !
# Start of VSE/ESA resources                                         !
#--------------------------------------------------------------------+

# start of VSECKD resources...

# Emulated 2821 (unit record) control unit
vsec2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end vsec2821

# 3390-3 DASD
vsecdasd: cu 3990
interface local(1)
device(00) 3390-3 /dev/vx/rdsk/s390dg/4133903s1   # dosres
device(01) 3390-3 /dev/vx/rdsk/s390dg/4233903s1   # syswk1
end vsecdasd

vseccons: cu 3274
interface local(1)
device(00) 3278 OFFLINE
end vseccons

vsec3274: cu 3274
interface local(1)
device(00 - 07) 3278 OFFLINE   # device name filled in by mount command
end vsec3274

# SNA over T-R
vsec3172tr: cu 3172TR
interface local(1)
device(00) 3172 /dev/net/tr0   # token ring card (was OFFLINE)
device(01) 3172 OFFLINE
end vsec3172tr

vsecctc: cu ctc
interface local(2)
device(00) ctc   # ctc to vsefba
end vsecctc

vsec3480ft: cu 3480
interface local(1)
device(00) 3480 OFFLINE   # device name or filename to be filled in via CLI
end vsec3480ft

# start of VSEFBA resources...

# Emulated 2821 (unit record) control unit
vsef2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end vsef2821

vsefdasd: cu 6310
interface local(1)
device(00) 9336-20 /dev/vx/rdsk/s390dg/5193362s1   # dosres
device(01) 9336-20 /dev/vx/rdsk/s390dg/5293362s1   # syswk1
device(02) 9336-20 /dev/vx/rdsk/s390dg/5393362s1   # syswk2
# device(01) 9336-20 OFFLINE
end vsefdasd

vsefcons: cu 3274
interface local(1)
device(00) 3278 OFFLINE
end vsefcons

vsef3274: cu 3274
interface local(1)
device(00) 3278 OFFLINE        # device name filled in by mount command
device(01) 3278 @9.12.2.103    # bypass terminal solicitor
device(02 - 05) 3278 OFFLINE   # device name filled in by mount command
end vsef3274

# SNA over T-R
vsef3172tr: cu 3172TR
interface local(1)
device(00) 3172 /dev/net/tr0   # token ring card (was OFFLINE)
device(01) 3172 OFFLINE
end vsef3172tr

vsefctc: cu ctc
interface local(2)
device(00) ctc   # ctc to vsefba
end vsefctc

vsef3480ft: cu 3480
interface local(1)
device(00) 3480 OFFLINE   # device name or filename to be filled in via CLI
end vsef3480ft

# start of VSETCP resources...

# Wolf's VSE/ESA 3390-2 system w/ TCPIP/VSE

# Emulated 2821 (unit record) control unit
vset2821: cu 2821
interface local(1)
device(00) 2540R OFFLINE   # device name filled in by mount command
device(01) 2540P OFFLINE   # device name filled in by mount command
device(02) 1403 OFFLINE    # device name filled in by mount command
end vset2821

# FSI ICA
vset2701d: cu 9221ICAd
options '/dev/fsiica/ica0p0:/usr/open370/bin/pica960.img'
interface local(1)
# SDLC dialup line (default), nrz (default)
device(00) rsdlc /dev/fsiica/ica0p0 devopt 'leased,nrzi'
# device(01) rsdlc /dev/fsiica/ica0p1 devopt 'leased,nrzi'
device(02) rsdlc /dev/fsiica/ica0p2 devopt 'leased,nrzi'
device(03) rsdlc /dev/fsiica/ica0p3 devopt 'leased,nrzi'
device(04) rsdlc /dev/fsiica/ica0p4   # device name filled in by mount command
device(05) rsdlc /dev/fsiica/ica0p5   # device name filled in by mount command
end vset2701d

# 3390-2 DASD
vsetdasd: cu 3990
interface local(1)
device(00) 3390-2 /dev/vx/rdsk/s390dg/4333902s1   # dosres
device(01) 3390-2 /dev/vx/rdsk/s390dg/4433902s1   # syswk1
end vsetdasd

vsetcons: cu 3274
interface local(1)
device(00) 3278 OFFLINE
end vsetcons

vset3274: cu 3274
interface local(1)
device(00 - 07) 3278 OFFLINE   # device name filled in by mount command
end vset3274

# Ethernet on 3172
vset3172en: cu 3172
  interface local(1)
    device(00)  3172  /dev/net/pe3  # fourth port on ethernet
    device(01)  3172  OFFLINE
end vset3172en

vsetctc:    cu ctc
  interface local(2)
    device(00)  ctc                 # ctc to vsefba
end vsetctc

vset3480ft: cu 3480
  interface local(1)
    device(00)  3480  OFFLINE
end vset3480ft

end combined

Appendix B. The Storage Volume Manager (SVM)

DYNIX/ptx provides the Storage Volume Manager to organize and control disk space within the system. Typical NUMA-Q EFS systems use only a small part of the power of SVM in support of the S/390 function. This appendix describes and summarizes the functions we used in setting up and configuring the NUMA-Q EFS.

There are three different interfaces to the SVM:
• Command line interface
• Text-based menu system (vxdiskadm)1
• Graphical user interface system (CommandPoint SVM)

During the residency, we used the menu system (vxdiskadm) primarily when we needed to do a function only once and were not sure of the syntax of the command line interface. We used the command line interface when we had many similar commands to issue and could easily build a shell script containing many long and/or time-consuming commands. Although the vxdiskadm menu interface was a bit unusual and cumbersome for S/390 users, we did find it useful for many functions. We did not attempt to use the GUI-based CommandPoint SVM.

B.1 Disks, disk groups, and volumes

In simple terms, SVM is used to combine multiple physical disk drives into disk groups, which can then be subdivided in various ways into ptx/SVM volumes.

The NUMA-Q system used for the residency had a total of 23 physical disk drives available. (For most practical purposes, the hard drive in the boot-bay is dedicated to DYNIX/ptx use; it is ignored in this discussion.) We decided to use two physical disk drives for DYNIX/ptx paging (“swapping”) and other system functions, and created a disk group named sysdg for this purpose. We placed physical devices sd11 and sd23 in this disk group. We then created the SWAP volumes (non-mirrored) on these two disk drives. Next we created a disk group named s390dg and placed all other physical disk drives into this group. Thereafter, ptx volumes were allocated from the s390dg for use as S/390 emulated disks.

The sections below will provide more detailed information on this process and the SVM commands used.

B.2 Modifying system SWAP volumes

During the residency, we first had to delete the prior configuration and then create our target configuration. This was complicated somewhat by the fact that the system SWAP volumes had been created on disks we wanted for our S/390 volumes. In order to separate the SWAP areas from the S/390 data, we had to go into DYNIX/ptx single user mode, remove the current SWAP volumes from the DYNIX/ptx configuration, delete the existing disk group, create a new system disk group (sysdg), allocate new SWAP volumes in the new disk group, and bring

1 The vxdiskadm menus do not cover all the functions of SVM and disk management. We understand that vxdiskadm is not included in classes offered by the DYNIX/ptx group. We found it useful for the functions mentioned in this chapter.

them online. Since this process will not be needed frequently by users, we provide only a brief summary of the commands.
• Reboot to single user mode:
    shutdown -y -g0 -i1
• Edit /etc/init.d/addswap and comment out the addswap statements for the relevant old swap volume(s).
• Stop the swap volumes:
    vxvol stop swap1
    vxvol stop swap2
• Clean up any other volumes in the disk group (multiple commands, ending with deporting the old disk group):
    vxdg deport pbaydg
• Reboot DYNIX/ptx.
• Create a new disk group named sysdg with two physical disks: sd11 and sd23. Accept the default names so that, within the disk group, physical disks sd11 and sd23 will be known as sysdg01 and sysdg02. Using the menus:
    vxdiskadm -> 1 (add or initialize one or more disks)
    -> fill in disk device names (sd11 and sd23) and disk group name (sysdg)
• Create the SWAP volumes within the disk group sysdg, using disks named sysdg01 and sysdg02 (not striped and not mirrored):
    vxassist -g sysdg -U gen make swap1 8192m sysdg01
    vxassist -g sysdg -U gen make swap2 8192m sysdg02
• Do not attempt to recover the swap contents on a restart:
    vxvol set start_opts=norecov swap1
    vxvol set start_opts=norecov swap2

Activate the new swap areas with the swap -a command, and check that they are active with the swap -l command:

swap -a /dev/vx/dsk/sysdg/swap1 0
swap -a /dev/vx/dsk/sysdg/swap2 0
swap -l
path                     dev          swaplo  blocks
/dev/vx/dsk/SWAPVOL      121,1        0       530688
/dev/vx/dsk/sysdg/swap1  121,1175000  0       16777216
/dev/vx/dsk/sysdg/swap2  121,1175000  0       16777216

Do not forget to add the same swap -a commands to the end of the /etc/init.d/addswap script. This ensures that these disk volumes are added to the system swap resources each time the system boots. At this point we have one disk group, sysdg, with two swap volumes, swap1 and swap2. The complete names of these volumes are:

/dev/vx/rdsk/sysdg/swap1
/dev/vx/rdsk/sysdg/swap2

We also have twenty-one 9 GB disks available for allocation and use. These disks are identified as sd1, sd2, ..., sd10 and sd12, sd13, ..., sd22. (The other device names either do not exist, or are in use in the sysdg disk group.)

• Edit /etc/init.d/addswap to include our new swap volumes, so that they are used after each boot:
    swap -a /dev/vx/dsk/sysdg/swap1 0
    swap -a /dev/vx/dsk/sysdg/swap2 0

B.3 Creating the disk group for S/390

We created a disk group named s390dg to contain the emulated S/390 disk volumes. We again used the vxdiskadm menu system to create the disk group:
    vxdiskadm -> 1 (add or initialize one or more disks)
    -> fill in disk device names (sd1 - sd10 and sd12 - sd22) and disk group name (s390dg)

For the s390dg group, we did not allow the SVM disk names to default, but instead specified explicit SVM disk names: device sd1 became SVM disk disk1, device sd2 became disk2, and so forth. This allowed us to map specific ptx volumes (and emulated S/390 volumes) to specific physical disks more easily. The names sd1, sd2, and so on, are assigned automatically by DYNIX/ptx; they represent the physical drives.
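The naming convention is regular enough to express directly. The following is a hypothetical Python sketch (not part of the original residency work) of the device-to-SVM-disk mapping described above, with sd11 excluded because it belongs to the sysdg group:

```python
# Physical DYNIX/ptx device names placed in the s390dg group (sd1..sd10,
# sd12..sd22; sd11 and sd23 are in sysdg), mapped to the explicit SVM
# disk names we chose (disk1, disk2, ...).
S390DG_DEVICES = [f"sd{i}" for i in range(1, 23) if i != 11]
svm_names = {dev: dev.replace("sd", "disk") for dev in S390DG_DEVICES}
```

With this mapping, an SVM disk name such as disk13 can be traced straight back to physical drive sd13.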

B.3.1 Changing SVM defaults

Before actually creating the ptx volumes, we changed the SVM defaults file to specify a different default owner and group for volumes created by vxassist. The defaults file is /etc/xv/vxassist.defaults and can be edited with vi or another available editor. The user= and group= values are about halfway down the file and should be changed from root to flexes. It may be of interest to review the other settings in the defaults file, especially the default layout= values, to better understand the default vxassist behavior.

B.4 Allocating S/390 volumes

The next step was to subdivide the disk group into multiple ptx volumes, which would then be formatted into FLEX-ES emulated S/390 volumes. There were two steps in this process:
1. Allocate the primary volumes, striped across multiple SVM disks.
2. Specify mirroring for each ptx volume and initialize the mirror (“silvering”).

Because there would be many volumes to allocate and mirror, and because the commands are rather long, we elected to create a series of shell scripts containing the commands and then run these scripts in “batch mode”. Here is a small part of the scripts with some of the vxassist commands:

# sg246215 - creating volumes and mirrors

vxassist -g s390dg -U gen make 0133903s1 934m layout=striped columns=4 disk1 disk2 disk3 disk4
vxassist -g s390dg mirror 0133903s1 layout=striped columns=4 disk13 disk14 disk15 disk16 &
vxassist -g s390dg -U gen make 0133903s2 934m layout=striped columns=4 disk1 disk2 disk3 disk4
vxassist -g s390dg mirror 0133903s2 layout=striped columns=4 disk13 disk14 disk15 disk16 &
vxassist -g s390dg -U gen make 0133903s3 934m layout=striped columns=4 disk1 disk2 disk3 disk4
vxassist -g s390dg mirror 0133903s3 layout=striped columns=4 disk13 disk14 disk15 disk16 &

These commands are the ones required to allocate and mirror ptx volumes that could be used for three 3390-1 volumes or one 3390-3 volume. This sequence specifies the following for each volume:
• Make a ptx/SVM volume with the specified name and the size of a 3390-1 (934 MB) (-U gen make 0133903s1 934m)
• Stripe it across four physical disks (layout=striped columns=4)
• Put the stripes on the SVM disks named disk1 ... disk4 (disk1 disk2 disk3 disk4)
• Mirror the ptx volume (mirror 0133903s1)
• Stripe the mirror image across four SVM disks (layout=striped columns=4)
• Put the mirror stripes on the SVM disks named disk13 ... disk16
• Run the mirror initialization (“silvering”) in the background (&)
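Because the make/mirror command pairs are so regular, they lend themselves to being generated rather than typed. The following hypothetical Python sketch (not one of the original residency scripts; the function name and defaults are illustrative) emits vxassist command pairs following the pattern shown above:

```python
def gen_vxassist(vol_names, size="934m", group="s390dg",
                 primary=("disk1", "disk2", "disk3", "disk4"),
                 mirror=("disk13", "disk14", "disk15", "disk16")):
    """Emit a vxassist make/mirror command pair for each volume name."""
    cmds = []
    for name in vol_names:
        # primary volume, striped across the primary SVM disks
        cmds.append(f"vxassist -g {group} -U gen make {name} {size} "
                    f"layout=striped columns={len(primary)} " + " ".join(primary))
        # mirror, striped across the mirror disks; silvering runs in the
        # background (&), as in the scripts shown in the text
        cmds.append(f"vxassist -g {group} mirror {name} "
                    f"layout=striped columns={len(mirror)} " + " ".join(mirror) + " &")
    return cmds

# Three 3390-1 sized volumes (enough space for one 3390-3):
for cmd in gen_vxassist(["0133903s1", "0133903s2", "0133903s3"]):
    print(cmd)
```

The generated output could be redirected to a file and run as a shell script, which is essentially what we did by hand during the residency.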

For VM/ESA, we initially created enough mirrored ptx volumes to support five 3390-3 volumes. For OS/390, we initially created four 3390-3 volumes and three 3390-1 volumes. Like the VM disks, these were striped across four SVM disks and mirrored across another four disks. For VSE, we initially created two sets of volumes:
1. Two 3390-3 volumes, striped across two SVM disks and mirrored onto two more disks.
2. Two 9336-20 (FBA) volumes, not striped, each 9336 volume on a separate SVM disk and mirrored onto two more disks.

Since these non-striped volumes for the 9336 disks are a little different, it may be useful to look at these vxassist commands also:

# FBA non-striped single column
vxassist -g s390dg -U gen make 5193362s1 818M disk9
vxassist -g s390dg mirror 5193362s1 disk21 &
vxassist -g s390dg -U gen make 5293362s1 818M disk10
vxassist -g s390dg mirror 5293362s1 disk22 &

As the residency progressed, we frequently needed to allocate additional ptx volumes for emulated S/390 volumes. We then used either command line vxassist commands, shell scripts, or the vxdiskadm menus, depending on the specific function required.

B.5 Some other useful SVM commands

There are numerous SVM commands available to display the configuration and status of the disk groups and volumes.

B.5.0.1 List defined disk groups on the system
# vxdg list
NAME         STATE      ID
rootdg       enabled    957562337.1025.numa-q
sysdg        enabled    972938134.1356.numa-q
s390dg       enabled    972942344.1398.numa-q

B.5.0.2 List SVM disks and their status
# vxdisk -g s390dg list
DEVICE   TYPE     DISK     GROUP    STATUS
sd2      simple   disk2    s390dg   online
sd3      simple   disk3    s390dg   online
sd4      simple   disk4    s390dg   online

sd5      simple   disk5    s390dg   online
sd6      simple   disk6    s390dg   online
sd7      simple   disk7    s390dg   online
sd8      simple   disk8    s390dg   online
sd9      simple   disk9    s390dg   online
sd10     simple   disk10   s390dg   online
sd13     simple   disk13   s390dg   online
sd14     simple   disk14   s390dg   online
sd15     simple   disk15   s390dg   online
sd16     simple   disk16   s390dg   online
sd17     simple   disk17   s390dg   online
sd18     simple   disk18   s390dg   online
sd19     simple   disk19   s390dg   online
sd20     simple   disk20   s390dg   online
sd21     simple   disk21   s390dg   online
sd22     simple   disk22   s390dg   online
-        -        disk1    s390dg   failed   was:sd1
-        -        disk12   s390dg   failed   was:sd12

Note that at the time of this report, we had two disks reporting as failed. Disk1 is mirrored onto disk13, so the S/390 system using it was still operational. Disk12 is an unused disk, so there was no immediate problem caused by its failure.

B.5.0.3 List detailed information about a disk
# vxdisk list disk3
Device:    sd3
devicetag: sd3
type:      simple
hostid:    numa-q
disk:      name=disk3 id=#uuid(0d0004030536383132383036310000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003f)
group:     name=s390dg id=972942344.1398.numa-q
flags:     online ready autoconfig autoimport imported
pubpaths:  block=/dev/dsk/sd3 char=/dev/rdsk/sd3
version:   2.1
iosize:    min=512 (bytes) max=131072 (blocks)
public:    slice=63 offset=10272 len=17785805
private:   slice=63 offset=32 len=10240
update:    time=973692464 seqno=0.22
headers:   0 248
configs:   count=1 len=7527
logs:      count=1 len=1140
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-007544[007296]: copy=01 offset=000231 enabled
 log      priv 007545-008684[001140]: copy=01 offset=000000 enabled

If we use this command to examine one of the disks that the prior command reported as failed, we get:

# vxdisk list disk1
vxvm:vxdisk: ERROR: Disk disk1: Not connected to a physical disk

B.5.0.4 Check space used and available in a disk group
# vxdg -g s390dg free
DISK     DEVICE   TAG    OFFSET    LENGTH   FLAGS
disk2    sd2      sd2    15780864  2004941  -
disk3    sd3      sd3    15780864  2004941  -
disk4    sd4      sd4    15780864  2004941  -
disk5    sd5      sd5    8607744   9178061  -
disk6    sd6      sd6    8607744   9178061  -
disk7    sd7      sd7    8607744   9178061  -

disk8    sd8      sd8    8607744   9178061  -
disk9    sd9      sd9    11227136  6558669  -
disk10   sd10     sd10   12902400  4883405  -
disk13   sd13     sd13   15780864  2004941  -
disk14   sd14     sd14   15780864  2004941  -
disk15   sd15     sd15   15780864  2004941  -
disk16   sd16     sd16   15780864  2004941  -
disk17   sd17     sd17   8607744   9178061  -
disk18   sd18     sd18   8607744   9178061  -
disk19   sd19     sd19   8607744   9178061  -
disk20   sd20     sd20   8607744   9178061  -
disk21   sd21     sd21   11227136  6558669  -
disk22   sd22     sd22   12902400  4883405  -

The OFFSET value indicates the beginning of the free space (in 512-byte blocks) and the LENGTH value indicates the length of the remaining free space (in 512-byte blocks). In this example, disk5, 6, 7, and 8 are the primary disks for the OS/390 system. Each of the four disks has 8,607,744 blocks (about 4.3 GB) in use and 9,178,061 blocks (about 4.6 GB) available. Across the four disks used for OS/390, there is about 18 GB available for additional emulated S/390 volumes. Note that the mirror disks have the same space used and available as the primary disks, which is as it should be.
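The arithmetic behind these figures can be sketched as follows. The exact gigabyte values depend on whether decimal (10^9) or binary (2^30) units are used, which is why the rounded numbers quoted above differ slightly from the raw conversion:

```python
BLOCK = 512  # vxdg reports space in 512-byte blocks

def blocks_to_bytes(blocks):
    return blocks * BLOCK

# Per OS/390 primary disk, from the vxdg free output above:
used_blocks, free_blocks = 8_607_744, 9_178_061
used_gb = blocks_to_bytes(used_blocks) / 10**9   # ~4.4 decimal GB in use
free_gb = blocks_to_bytes(free_blocks) / 10**9   # ~4.7 decimal GB free
total_free_gb = 4 * free_gb                      # ~18.8 GB across the four disks
```

The "about 18 GB available" figure for the four OS/390 disks falls out directly from the last line.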

B.5.0.5 Display disk usage and performance information
# vxstat -g s390dg -d
                  OPERATIONS          BLOCKS         AVG TIME(ms)
TYP NAME       READ    WRITE      READ    WRITE     READ   WRITE
dm  disk1         0        0         0        0      0.0     0.0
dm  disk2         0        0         0        0      0.0     0.0
dm  disk3         0        0         0        0      0.0     0.0
dm  disk4         0        0         0        0      0.0     0.0
dm  disk5     15809    12256    299975   249292      7.8    11.9
dm  disk6     15433    10595    295941   218418      7.6    11.5
dm  disk7     15661    11503    300043   248977      7.6    11.7
dm  disk8     16216    10890    302571   218900      7.6    11.3
dm  disk9         0        0         0        0      0.0     0.0
dm  disk10        0        0         0        0      0.0     0.0
dm  disk12        0        0         0        0      0.0     0.0
dm  disk13     3650      712     68888    12838     10.2    10.6
dm  disk14     3206      515     68588    10839      5.5    10.2
dm  disk15     3248      512     67146    10703      5.3    10.7
dm  disk16     3441      581     66645    11480      5.7    10.1
dm  disk17    15946    12256    301040   249292      7.9    12.0
dm  disk18    15618    10595    295942   218418      7.6    11.9
dm  disk19    15722    11503    302255   248977      7.6    12.0
dm  disk20    16038    10890    301550   218900      7.7    11.5
dm  disk21        0        0         0        0      0.0     0.0
dm  disk22        0        0         0        0      0.0     0.0

B.5.1 Failed disk recovery

Several of the above responses to SVM commands indicated that two of the physical disk drives in the s390dg disk group had failed. One of the disks, sd12, was not in use, so its failure had no impact on the system. The other disk, sd1, was one of the disks across which the VM/ESA emulated disks were striped. We observed that when sd1 failed, all I/O to disks sd2, sd3, and sd4 also stopped.

We concluded that DYNIX/ptx suspended I/O to all the disks when sd1 failed because the ptx volumes were striped across all four disks; if the stripes on sd1 could not be updated, then there was apparently no need to update the stripes on the other primary drives. The mirrored volumes on sd13, sd14, sd15, and sd16 continued to function, and there was no disruption to the VM/ESA system.

B.5.1.1 Recovery process

After we determined that the two drives had failed, we called IBM NUMA-Q service to have the drives replaced. Disks can be replaced while the system is operational, using SVM commands to integrate the new disks into the operational disk groups.

B.6 Additional configuration tasks

For our purposes, and because of limited time, we did not perform the additional tasks that would make our system look like a production environment. Experienced DYNIX/ptx administrators might consider additional steps, such as the following:
• Place mount points in /tmp and /usr/tmp for individual mounted filesystems.
• Likewise, place mount points in /opt and /home for individual mounted filesystems.
• Create backup sets using the menu system, and run backups on a regular basis.

An example of creating and mounting the /opt filesystem (done as root) might be:
• Add an additional disk partition to the root disk group:
    vxdg -g rootdg adddisk sd0s10
• Determine the size of partition sd0s10 (8388608):
    / # prtvtoc sd0
    * /dev/diag/rdsk/sd0 partition map
    *
    * Disk Type: ibms36w
    *
    * Dimensions:
    *     512 bytes/sector
    *     209 sectors/track
    *     20 tracks/cylinder
    *     8162 cylinders
    *     4392 sectors/cylinder
    *     72170879 sectors/disk
    *
    * Partition Types:
    *     0: Empty Slot
    *     1: Regular Partition
    *     2: Bootstrap Area
    *     3: Reserved Area
    *     4: Firmware Area
    *     5: SCAN Dump Partition - Required For Hardware Maintenance
    *     6: SVM Database
    *     7: Clusters Management Area
    *     8: SVM Private Partition
    *     9: Miniroot Partition

    *
    *        Start     Size        Block Sz  Frag Sz
    *  Type  Sector    in Sectors  in Bytes  in Bytes  Mount point
    0  1     5651      2050129     8192      1024
    1  1     2055780   530712      8192      1024
    2  1     2586492   2050129     8192      1024
    3  1     4636621   16777216    8192      1024      /
    4  1     21422585  8388608     8192      1024
    5  1     29811193  5549454     8192      1024
    6  1     35560647  16777216    8192      1024      /apps
    7  1     52337863  8388608     8192      1024      /ibm
    8  8     21413837  8748        0         0
    9  9     35360647  200000      8192      1024
    10 1     60726471  8388608     8192      1024
    11 1     69115079  3049968     8192      1024
    12 4     32        5619        0         0
    13 3     72165047  5832        0         0
    14 2     0         16          0         0
    15 3     16        16          0         0
• Use SVM to create a volume to hold the filesystem:
    vxassist -g rootdg make opt 8388608 sd0s10
• Use newfs to create a filesystem in the new volume:
    newfs -F efs /dev/vx/rdsk/rootdg/opt
• Mount the new filesystem:
    mount /dev/vx/dsk/rootdg/opt /opt
• See the new filesystem mounted:
    df
    /     (/dev/vx/dsk/ROOTVOL    ): 8995154 blocks  1946278 i-nodes
    /apps (/dev/dsk/sd0s6         ): 1910302 blocks  1975821 i-nodes
    /ibm  (/dev/dsk/sd0s7         ): 7203952 blocks   260533 i-nodes
    /mnt  (/dev/dsk/cd0           ):       0 blocks        0 i-nodes
    /opt  (/dev/vx/dsk/rootdg/opt ): 8384912 blocks   262140 i-nodes
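It may help to see why 8388608 is a convenient size to pass to vxassist. The value is in 512-byte sectors, and it works out to exactly 4 GiB; a quick check (an illustrative sketch, not from the original text):

```python
SECTOR = 512                      # bytes per sector, from the prtvtoc output
opt_sectors = 8_388_608           # size of partition sd0s10 in sectors
opt_bytes = opt_sectors * SECTOR  # total capacity of the new /opt volume
# 8388608 sectors x 512 bytes = 2**23 * 2**9 = 2**32 bytes = exactly 4 GiB
```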

An example of an entry added to /etc/vfstab so that the system mounts this filesystem when booting is:

/dev/vx/dsk/rootdg/opt /dev/vx/rdsk/rootdg/opt efs 1 yes rw

B.7 Further information

There are many more SVM commands available for monitoring and changing the SVM-managed disks. There are also many SVM commands and functions that are not used in the S/390 environment, but which may be needed in a mixed DYNIX and S/390 environment. Users who expect to actively manage and change their SVM environment are encouraged to attend the SVM Administration class. The SVM Administration reference guide is available online at http://webdocs.sequent.com/start.htm

From the home page, select Disk and Filesystem Administration, then ptx/SVM Administration.

Appendix C. Special notices

This publication is intended to help potential customers interested in NUMA-Q Enabled For S/390. The information in this publication is not intended as the specification of any programming interfaces for the NUMA-Q product or the S/390 software mentioned in this document.

References in this publication to IBM products, programs or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Any functionally equivalent program that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program or service.

Information in this book was developed in conjunction with use of the equipment specified, and is limited in application to those specific hardware and software products and levels.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA.

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites.

The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:

IBM
Redbooks
Redbooks Logo

Processor Resource/Systems Manager
PR/SM
RACF
RETAIN
RMF
S/370
S/390
SP
TCS
VM/ESA

VSE/ESA
VTAM
XT
400
TME

The following terms are trademarks of other companies:

FLEX-ES and FakeTape are trademarks of Fundamental Software, Incorporated, of Fremont, California.

Tivoli, Manage. Anything. Anywhere., The Power To Manage., Anything. Anywhere., TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United States, other countries, or both. In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer - Tivoli A/S.

C-bus is a trademark of Corollary, Inc. in the United States and/or other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries.

PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries.

UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.

Appendix D. Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

D.1 IBM Redbooks

For information on ordering these publications see “How to get IBM Redbooks” on page 143.
• P/390, R/390, S/390 Integrated Server: OS/390 New User’s Cookbook, SG24-4757 (use the -01 version)
• Linux for S/390, SG24-4987
• VM/ESA: Running Guest Operating Systems, SG24-5755

D.2 IBM Redbooks collections

Redbooks are also available on the following CD-ROMs. Click the CD-ROMs button at ibm.com/redbooks for information about all the CD-ROMs offered, updates, and formats.

CD-ROM Title                                                        Collection Kit Number
IBM System/390 Redbooks Collection                                  SK2T-2177
IBM Networking Redbooks Collection                                  SK2T-6022
IBM Transaction Processing and Data Management Redbooks Collection  SK2T-8038
IBM Lotus Redbooks Collection                                       SK2T-8039
Tivoli Redbooks Collection                                          SK2T-8044
IBM AS/400 Redbooks Collection                                      SK2T-2849
IBM Netfinity Hardware and Software Redbooks Collection             SK2T-8046
IBM RS/6000 Redbooks Collection                                     SK2T-8043
IBM Application Development Redbooks Collection                     SK2T-8037
IBM Enterprise Storage and Systems Management Solutions             SK3T-3694

D.3 Other resources

These publications are also relevant as further information sources:
• VM/ESA Service Guide, GC24-5838
• FLEX-ES Release 5.9 Concepts, Fundamental Software, Inc., FSIMM020
• FLEX-ES Release 5.9 Technical FAQ, Fundamental Software, Inc., FSIMM040
• FLEX-ES Release 5.9 Installation Guide (DYNIX/ptx), Fundamental Software, Inc., FSIMM110
• FLEX-ES Release 5.9 Operator’s Guide, Fundamental Software, Inc., FSIMM200
• FLEX-ES Release 5.9 CLI Language Reference, Fundamental Software, Inc., FSIMM210
• FLEX-ES Release 5.9 Messages, Fundamental Software, Inc., FSIMM230
• FLEX-ES Release 5.9 System Programmer’s Guide, Fundamental Software, Inc., FSIMM300

D.4 Referenced Web and Internet sites

These Web and Internet sites are also relevant as further information sources:
• ftp://ftp.freesoftware.com/pub/infozip/ (UNZIP utility for DYNIX/ptx)
• http://support.funsoft.com (FLEX-ES information)

How to get IBM Redbooks

This section explains how both customers and IBM employees can find out about IBM Redbooks, redpieces, and CD-ROMs. A form for ordering books and CD-ROMs by fax or e-mail is also provided.
• Redbooks Web site: ibm.com/redbooks
  Search for, view, download, or order hardcopy/CD-ROM Redbooks from the Redbooks Web site. Also read redpieces and download additional materials (code samples or diskette/CD-ROM images) from this Redbooks site. Redpieces are Redbooks in progress; not all Redbooks become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the information out much more quickly than the formal publishing process allows.
• E-mail orders
  Send orders by e-mail, including information from the IBM Redbooks fax order form, to:
    In United States or Canada: [email protected]
    Outside North America: contact information is in the “How to Order” section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl
• Telephone orders
    United States (toll free): 1-800-879-2755
    Canada (toll free): 1-800-IBM-4YOU
    Outside North America: the country coordinator phone number is in the “How to Order” section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl
• Fax orders
    United States (toll free): 1-800-445-9269
    Canada: 1-403-267-4455
    Outside North America: the fax phone number is in the “How to Order” section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl

This information was current at the time of publication, but is continually subject to change. The latest information may be found at the Redbooks Web site.

IBM Intranet for Employees

IBM employees may register for information on workshops, residencies, and Redbooks by accessing the IBM Intranet Web site at http://w3.itso.ibm.com/ and clicking the ITSO Mailing List button. Look in the Materials repository for workshops, presentations, papers, and Web pages developed and written by ITSO technical professionals; click the Additional Materials button. Employees may access MyNews at http://w3.ibm.com/ for redbook, residency, and workshop announcements.

IBM Redbooks fax order form

Please send me the following:

Title Order Number Quantity

First name Last name

Company

Address

City Postal code Country

Telephone number Telefax number VAT number

Invoice to customer number

Credit card number

Credit card expiration date Card issued to Signature

We accept American Express, Diners, Eurocard, Master Card, and Visa. Payment by credit card not available in all countries. Signature mandatory for credit card payment.

Index

IBM Redbooks review

Your feedback is valued by the Redbook authors. In particular, we are interested in situations where a Redbook "made the difference" in a task or problem you encountered. Please review the Redbook, addressing value, subject matter, structure, depth, and quality as appropriate, using one of the following methods:
• Use the online "Contact us" review redbook form found at ibm.com/redbooks
• Fax this form to: USA International Access Code + 1 914 432 8264
• Send your comments in an Internet note to [email protected]

Document Number: SG24-6215-00
Redbook Title: NUMA-Q Enabled for S/390: Technical Introduction

Review

What other subjects would you like to see IBM Redbooks address?

Please rate your overall satisfaction: O Very Good  O Good  O Average  O Poor

Please identify yourself as belonging to one of the following groups:
O Customer
O Business Partner
O Solution Developer
O IBM, Lotus or Tivoli Employee
O None of the above

Your email address:
The data you provide here may be used to provide you with information from IBM or our business partners about our products, services, or activities.
O Please do not use the information collected here for future marketing or promotional contacts or other communications beyond the scope of this transaction.

Questions about IBM's privacy policy? The following link explains how we protect your personal information: ibm.com/privacy/yourprivacy/



NUMA-Q Enabled for S/390: Technical Introduction

What is NUMA-Q?

Running VSE, VM, and OS/390 on NUMA-Q

Setup, customization, operation, and results

NUMA-Q is a well-established IBM product in the very high-end UNIX market. Using appropriate S/390 emulation software products, a NUMA-Q system can emulate a smaller S/390, including many of the I/O units associated with the S/390. This IBM Redbook briefly introduces the NUMA-Q system, and then describes its use while running OS/390, VM/ESA, and VSE/ESA through such S/390 emulation.

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks

SG24-6215-00 ISBN 0738419567