Break-out Session on OpenShift Container Platform for IBM Z® & LinuxONE

@ 13. OpenShift Anwendertreffen

Wilhelm Mild IBM Executive IT Architect

Hendrik Brückner, Manager Linux on Z Development, Red Hat Partner Engineer for RHEL & RHOCP on Z

IBM Germany Research & Development GmbH

Agenda

• Speaker Introduction
• Why and benefits of Hybrid Multi-Cloud environments on IBM Z & LinuxONE?
• What does RH OCP look like on IBM Z & LinuxONE?
• Open Discussion

Red Hat OpenShift Container Platform for IBM Z & LinuxONE

Hendrik Brückner: Manager Linux on Z Development; Red Hat Partner Engineer for RHEL & RHOCP on Z; IBM DE R&D GmbH

Wilhelm Mild: IBM Executive IT Architect; Integration Architectures for Container, Mobile, IBM Z, and Linux; IBM DE R&D GmbH

OpenShift Anwendertreffen - RHOCP on IBM Z & LinuxONE / September 2020 / © 2020 IBM Corporation

News and Updates: Red Hat OpenShift Container Platform for IBM Z® & LinuxONE


IBM Germany Research & Development GmbH

Trademarks

The following are trademarks of the International Business Machines Corporation in the United States and/or other countries: API Connect*, Aspera*, CICS*, Cloud Paks, Cognos*, DataPower*, Db2*, FileNet*, IBM*, IBM (logo)*, IBM Cloud*, ibm.com*, IBM Z*, IMS, LinuxONE, MobileFirst*, Power*, Power Systems, WebSphere*, z13*, z13s*, z/OS*, z/VM*.
* Registered trademarks of IBM Corporation

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. IT Infrastructure Library is a Registered Trade Mark of AXELOS Limited. ITIL is a Registered Trade Mark of AXELOS Limited. Linear Tape-Open, LTO, the LTO Logo, Ultrium, and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Red Hat®, JBoss®, OpenShift®, Fedora®, Hibernate®, Ansible®, CloudForms®, RHCA®, RHCE®, RHCSA®, Ceph®, and Gluster® are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries. RStudio®, the RStudio logo and Shiny® are registered trademarks of RStudio, Inc. UNIX is a registered trademark of The Open Group in the United States and other countries. VMware, the VMware logo, VMware Cloud Foundation, VMware Cloud Foundation Service, VMware vCenter, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in the United States and/or other jurisdictions. Zowe™ and the Zowe™ logo are trademarks of The Linux Foundation. Other product and service names might be trademarks of IBM or other companies.

Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the products or services available in your area.

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices are subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography. This information provides only general descriptions of the types and portions of workloads that are eligible for execution on Specialty Engines (e.g., zIIPs, zAAPs, and IFLs) ("SEs"). IBM authorizes customers to use IBM SEs only to execute the processing of Eligible Workloads of specific Programs expressly authorized by IBM as specified in the "Authorized Use Table for IBM Machines" provided at www.ibm.com/systems/support/machine_warranties/machine_code/aut.html ("AUT"). No other workload processing is authorized for execution on an SE. IBM offers SEs at a lower price than General Processors/Central Processors because customers are authorized to use SEs only to process certain types and/or amounts of workloads as specified by IBM in the AUT.

This talk is about…

• Introduction: IBM’s Hybrid Multicloud Strategy for IBM Z & LinuxONE
• Overview of Red Hat OpenShift Container Platform for IBM Z & LinuxONE
• IBM Cloud Paks for IBM Z & LinuxONE

Creating the world’s leading hybrid cloud provider

IBM® Hybrid Multicloud Strategy

(Diagram of the strategy stack:)
• Services: Advise, Move, Build, Manage
• Certified Offerings (Cloud Paks): Multicloud Management, Data, Integration, Application, Automation, Security
• Foundation (Open Hybrid Multicloud Platform): OpenShift on CoreOS, Red Hat Runtimes; Service Mesh, Serverless, Pipelines, odo, CodeReady Workspaces
• Infrastructure: IBM Z®, IBM LinuxONE™, IBM Power Systems™, IBM Cloud™, AWS™, Azure™, Google Cloud™

Why Hybrid Multi-Cloud with IBM Z & LinuxONE

Benefits on IBM Z
• Low latency and large-volume data serving and transaction processing
• Enterprise-class infrastructure: elastic, scalable, available, and resilient
• Built-in secure enclaves for Zero Trust
• Highest levels of security and compliance

Adoption Patterns
• Enterprise-scale private cloud-in-a-box
• Digital transformation and modernization for apps
• Cloud native
• Extreme consolidation and scalable data serving

Scale out to 2.4 million containers on a single system. Reduce data-center footprint by 4:1 and power by 2:1. Process over 1 trillion encrypted transactions per day.

Fully encrypt container data (at-rest, in-flight) and apps with ZERO code changes

Enterprise grade. Open by design. Secured by IBM LinuxONE.

Integrate existing and new services across hybrid IT

Offerings designed for Cloud Native Development on IBM Z & LinuxONE

• IBM z15 & IBM LinuxONE III
• IBM z/VM 7.1
• IBM Cloud Infrastructure Center
• Red Hat OpenShift Container Platform
• IBM Cloud Paks: enterprise-ready, containerized software solutions
• IBM z/OS Cloud Broker
• IBM Hyper Protect Virtual Servers
• IBM Blockchain Platform SW

IBM Cloud Infrastructure Center: Architecture Overview (version 1.1.1, June 2020)

• End users and other Platform-as-a-Service & 3rd-party cloud automation tools consume the extended OpenStack APIs and the self-service portal (Infrastructure-as-a-Service).
• Controller components provide infrastructure services and value-add features:
  • Compute: z/VM, KVM*, LPAR*, DPM*, HPVS¹*
  • Storage (block, file, object): FCP SAN, ECKD, IBM Spectrum® Scale*, 3rd-party plug-ins*
  • Network: VLAN, flat network, overlay networks*, switches*, NFV*, 3rd-party plug-ins*
• Provisions RHEL, CoreOS, SLES 15 SP1, and Ubuntu 20.04 guests

¹ IBM Hyper Protect Virtual Server
* All statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

IBM Software as Cloud Paks – Middleware anywhere

A faster, more secure way to move your core business applications to any cloud through enterprise-ready containerized software solutions

Complete yet simple: application, data and AI services, fully modular and easy to consume
IBM certified: full software stack support, and ongoing security, compliance and version compatibility
Run anywhere: on-premises, on private and public clouds, and in pre-integrated systems

The stack: IBM containerized software (packaged with Open Source components, pre-integrated with the common operational services, and secure by design) on top of operational services (logging, monitoring, security, identity access management) on a container platform.

Infrastructure: IBM Z®, IBM LinuxONE™, IBM Power Systems™, IBM Cloud™, AWS™, Azure™, Google Cloud™

IBM Cloud Paks on IBM Z and LinuxONE: Roadmap

All Cloud Paks are coming to IBM Z and LinuxONE in various phases!

Availability status (dates are targets*): available except Accelerator for Teams, full release 1Q 2021*; Manage-to-Z (MCM) GA 8/07, Manage-to-Z (RHACM) 1Q 2021 (target), Manage from-Z 2H 2021; Phase 1 GA 9/25; Phase 1 4Q 2020; next release 4Q 2020*; target TBD in 1H 2021.

• Cloud Pak for Applications: build, deploy and run applications
• Cloud Pak for Data: collect, organize, and analyze data
• Cloud Pak for Integration: integrate applications, data, cloud services, and APIs
• Cloud Pak for Multicloud Management: multicloud visibility, governance, and automation
• Cloud Pak for Automation: transform business processes, decisions, and content
• Cloud Pak for Security: connect security data, tools, and teams

Each Cloud Pak stacks IBM containerized software, operational services, and the container platform (Red Hat OpenShift 4.x).

Runs on a choice of Linux on IBM Z and LinuxONE, on IBM z/VM (z13 or later), with IBM Cloud Infrastructure Center as optional IaaS.

*Dates listed here are targets only

Red Hat OpenShift Container Platform 4.x for IBM Z & LinuxONE

Red Hat OpenShift Container Platform (RHOCP)

V4 on IBM Z and LinuxONE: a Red Hat OpenShift cluster
➢ takes advantage of the underlying enterprise capabilities
➢ grows to thousands of Linux guests and millions of containers
➢ grows non-disruptively, with vertical and horizontal scalability
➢ includes advanced security: confidential cloud computing, including FIPS 140-2 Level 4 certification

These capabilities were highlighted with the recent announcement of the IBM z15 and IBM LinuxONE III. Running Red Hat OpenShift on IBM Z and LinuxONE also enables cloud-native applications to easily integrate with existing data and applications on these platforms, reducing latency by avoiding network delays.

(Diagram: the OpenShift cluster runs on z/VM 7.1 in an LPAR, attached via OSA/RoCE.)

https://www.ibm.com/blogs/systems/get-ready-for-red-hat-openshift-on-ibm-z-and-linuxone/

Where can you download RHOCP?

try.openshift.com / cloud.redhat.com

OCP 4.5 on Z was released on 7/30/20
OCP 4.4 on Z was released on 6/22/20
OCP 4.3 on Z was released on 4/30/20
OCP 4.2 on Z was released on 2/11/20

https://docs.openshift.com/container-platform/4.5/installing/installing_ibm_z/installing-ibm-z.html https://docs.openshift.com/container-platform/4.5/release_notes/ocp-4-5-release-notes.html
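The IBM Z install linked above is a user-provisioned (UPI) flow driven by an install-config.yaml. A minimal sketch, assuming a 3-master/0-worker UPI layout as in the docs; the domain, cluster name, pull secret, and SSH key are placeholders:

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder DNS base domain
compute:
- name: worker
  replicas: 0                    # UPI: compute nodes are provisioned manually
controlPlane:
  name: master
  replicas: 3
metadata:
  name: ocp4                     # placeholder cluster name
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                       # no cloud provider: user-provisioned infrastructure
pullSecret: '<pull secret from cloud.redhat.com>'
sshKey: 'ssh-ed25519 AAAA... placeholder'
```

From this file, `openshift-install create ignition-configs` produces the Ignition files that the bootstrap, master, and worker guests boot with.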

Red Hat Runtimes for RHOCP on Z & LinuxONE

Lightweight middleware runtimes and frameworks for developing cloud-native applications on RHOCP. Red Hat Runtimes became available for RHOCP on Z/LinuxONE on 06/15/20:

• JBoss Enterprise Application Platform (EAP) 7.3
• JBoss Web Server 5.3
• Red Hat Single Sign-On (SSO) 7.4
• Red Hat Data Grid 7.3
• AMQ (Broker) 7.5
• Quarkus 1.3.4
• Vert.x 3.9.1

• Thorntail 2.5.1

• Spring Boot 2.2.6

• Node.js 10 & 12

https://catalog.redhat.com/software/containers/search?p=1&architecture=s390x

Developer Experience for RHOCP on Z & LinuxONE

Developer CLI – OpenShift do (odo)

• You can use odo for creating applications on OpenShift Container Platform and Kubernetes

• odo 1.2.6 available for IBM Z & LinuxONE on 9/23/20

• https://docs.openshift.com/container-platform/4.5/cli_reference/developer_cli_odo/installing- odo.html#installing-odo-on-linux-on-ibm-z
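A typical odo inner-loop session looks like the following sketch. It assumes a running RHOCP cluster; the API endpoint, project, and component names are illustrative:

```shell
# Log in to the cluster (placeholder API endpoint) and create a project
odo login https://api.ocp4.example.com:6443
odo project create myproject

# Create a Node.js component from the source in the current directory
odo create nodejs myapp

# Expose the component and push source + configuration to the cluster
odo url create --port 8080
odo push

# Iterate: edit code locally, then push the changes again
odo push
```

odo keeps the container image build and deployment on the cluster side, so the developer loop is just edit and `odo push`.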

What’s next? CodeReady Workspaces


Day 1 – Installation and Setup: planning & installation tasks
• User-Provisioned Infrastructure (UPI): the platform administrator has to pre-provision infrastructure components
• Planning for required services
• Planning for cluster network
• Planning for storage

Day 2 – Operation and Management: operational tasks
• (Optionally) setting up infrastructure nodes
• Establishing an etcd backup procedure
• Adding additional worker nodes
• Configuring monitoring and logging
• Integrating and authenticating with LDAP
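For the etcd backup task above, recent OpenShift 4.x releases ship a backup script on the control-plane nodes; a sketch of the documented flow (the node name is a placeholder):

```shell
# Open a debug shell on one control-plane node (name is illustrative)
oc debug node/master-0.ocp4.example.com

# Inside the debug pod, switch into the host filesystem
chroot /host

# Run the cluster backup script; it writes an etcd snapshot and the
# static pod resources into the given directory
/usr/local/bin/cluster-backup.sh /home/core/assets/backup
```

The backup directory then contains a `snapshot_<timestamp>.db` and a `static_kuberesources_<timestamp>.tar.gz`, which together are what a later etcd restore consumes.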

Red Hat OpenShift V4 Installation Options on IBM Z and LinuxONE

OPENSHIFT CONTAINER PLATFORM

Full Stack Automated (IPI)
• Simplified, opinionated "best practices" for cluster provisioning
• Fully automated installation and updates including host container OS

Pre-existing Infrastructure (UPI)
• Customer-managed resources & infrastructure provisioning
• Plug into existing DNS and security boundaries

HOSTED OPENSHIFT

Red Hat OpenShift on IBM Cloud*
• Deploy directly from the IBM Cloud console. An IBM service; master nodes are managed by IBM Cloud engineers.

Azure Red Hat OpenShift**
• Deploy directly from the Azure console. A Microsoft service, jointly managed by Red Hat and Microsoft Azure engineers.

OpenShift Dedicated**
• Get a powerful cluster, fully managed by Red Hat engineers and support; a Red Hat service.

* Based on RHOCP v4.3; GA slated for March; public beta available now
** Entitlements of RHOCP obtained through a Cloud Pak purchase are not transferable to these environments

Planning: RHOCP on LinuxONE or IBM Z co-located with z/OS

(Diagram: two topologies. RHOCP standalone: OpenShift on z/VM 7.1 in an LPAR on IBM LinuxONE. RHOCP co-located with z/OS: OpenShift on z/VM 7.1 in one LPAR on IBM Z next to a z/OS LPAR running zCX, z/OSMF, Ansible automation, z/OS Connect, and transactional services such as IMS, CICS, and DB2; the LPARs are connected via OSA/RoCE.)

RHOCP co-location with z/OS: major use cases
• Dynamic workload in RHOCP accesses z/OS services
• RHOCP logic accesses DB2 z/OS
• RHOCP uses z/OS Cloud Broker to provision z/OS subsystems
• RHOCP Web environment integrates with z/OS transactional services
• RHOCP running Open Source technologies extends z/OS services
• Batch workload executed in RHOCP with z/OS data

Network options:
- Shared OSA
- HiperSockets (HS) with VSWITCH Bridge (VB)

Red Hat OpenShift Container Platform on IBM Z

(Diagram: on one IBM Z machine, a z/OS LPAR runs zCX, z/OSMF, Ansible automation, z/OS Connect, and transactional services (IMS, CICS, DB2) exposed via REST; a Linux on Z LPAR runs containers for Oracle, data warehouse, management services, and analytics/AI; a third LPAR runs an OpenShift cluster on a hypervisor. All LPARs share the machine's security, network, and I/O, with shared FCP/SCSI file systems and ECKD/DASD.)

Planning: RHOCP on IBM Z & LinuxONE implementation topology

A) What is the use case?

1. Deployment topology
➢ RHOCP standalone
➢ Co-located with z/OS

2. HW topology
➢ On one HW machine: one cluster in 1 LPAR (PoC), or multiple LPARs
➢ Multiple HW machines: in the same DC, or across DCs (with synchronous replication only)

B) What are the SLAs?
➢ PoC environment: fewer resources
➢ Productive-like environment: SLA based, HA / DR
➢ DevOps integration: automation, shared content
➢ Transactional load: performance
➢ HA variants: availability, resiliency

(Diagram: OpenShift cluster on z/VM 7.1 in an LPAR with OSA/RoCE networking and shared FCP/SCSI file system and ECKD/DASD storage.)

What does Red Hat OpenShift Container Platform on IBM Z & LinuxONE look like?

Sample operational RHOCP on z/VM layout

(Diagram: an external network hosts DHCP, DNS, and NFS servers, a router, and a load balancer in front of a z/VM instance running the OpenShift SDN. Three control plane nodes each run API, etcd, and storage; compute nodes run routers, the registry, and applications (App 1, App 2), each with local DASD/FCP storage plus a shared NFS file system.)

Notes
• A DHCP server/relay is not required for static IP configurations.
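The load balancer in this layout is commonly realized with HAProxy in TCP mode across the standard OpenShift UPI ports (6443 API, 22623 machine config, 443/80 ingress); a sketch with placeholder hostnames:

```
frontend api
    mode tcp
    bind *:6443
    default_backend api-be

backend api-be
    mode tcp
    balance roundrobin
    server bootstrap bootstrap.ocp4.example.com:6443 check  # remove after install
    server master-0  master-0.ocp4.example.com:6443 check
    server master-1  master-1.ocp4.example.com:6443 check
    server master-2  master-2.ocp4.example.com:6443 check

frontend machine-config
    mode tcp
    bind *:22623
    default_backend mc-be

backend mc-be
    mode tcp
    server bootstrap bootstrap.ocp4.example.com:22623 check  # remove after install
    server master-0  master-0.ocp4.example.com:22623 check
    server master-1  master-1.ocp4.example.com:22623 check
    server master-2  master-2.ocp4.example.com:22623 check

frontend ingress-https
    mode tcp
    bind *:443
    default_backend ingress-be

backend ingress-be
    mode tcp
    balance source
    server compute-0 compute-0.ocp4.example.com:443 check
    server compute-1 compute-1.ocp4.example.com:443 check
```

A matching port-80 frontend/backend pair handles plain-HTTP ingress; the bootstrap entries are removed once the cluster has formed.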

Technical Fact Sheet for RHOCP 4.x on Z & LinuxONE

Feature Overview
• RHEL CoreOS
• User-provisioned infrastructure (UPI)
• Disconnected / air-gapped installation
• Shared persistent storage with NFS or IBM Spectrum Scale (Beta)

Minimum System Requirements
• IBM z13/z13s and later, and IBM LinuxONE
• 1 LPAR with z/VM 7.1 using 3 IFLs, 80+GB
• FICON or FCP attached disk storage
• OSA, RoCE, z/VM VSwitch networking

Preferred System Requirements for High Availability
• 3 LPARs with z/VM 7.1 using 6 IFLs, 112+GB

What are the hardware and software requirements?

Minimum System Requirements

Hardware Capacity
• 1 LPAR with 3 IFLs supporting SMT2
• 1 OSA and/or RoCE card

Network
• Single z/VM virtual NIC in layer 2 mode, one of:
  • Direct-attached OSA, HiperSockets, or RoCE
  • z/VM VSwitch

Operating System
• z/VM 7.1
• 3 VMs for OCP control plane nodes
• 2 VMs for OCP compute nodes
• 1 VM for the temporary OCP bootstrap node

Memory
• 16GB for each OCP control plane node
• 8GB for each OCP compute node
• 16GB for the OCP bootstrap node (temporary)

Disk storage
• FICON attached disk storage (DASDs): minidisks, fullpack minidisks, or dedicated DASDs
• FCP attached disk storage

News on RHOCP on IBM Z & LinuxONE / July 2020 / © 2020 IBM Corporation

Preferred Architecture Overview

(Diagram: three z/VM LPARs; each runs one OCP control plane node and two OCP compute nodes on RHEL CoreOS under the z/VM Control Program (CP), attached via OSA/RoCE on IBM Z / LinuxONE.)

Notes

• Distribute OCP control planes to different z/VM instances on one or more IBM Z / LinuxONE servers to achieve HA and cover service outages/windows

Preferred System Requirements

Hardware Capacity
• 3 LPARs with 6 IFLs supporting SMT2
• 1-2 OSA and/or RoCE cards

Network
• Single z/VM virtual NIC in layer 2 mode, one of:
  • Direct-attached OSA or RoCE
  • z/VM VSwitch (using OSA link aggregation to increase bandwidth and high availability)

Operating System
• z/VM 7.1, 3 instances for HA purposes
• 3 VMs for OCP control planes (one per instance)
• 6+ VMs for OCP compute nodes (across instances)
• 1 VM for the temporary OCP bootstrap node

Memory
• 16+ GB for each OCP control plane node
• 8+ GB for each OCP compute node
• 16GB for the OCP bootstrap node (temporary)

Disk storage
• FICON attached disk storage (DASDs): minidisks, fullpack minidisks, or dedicated DASDs
• FCP attached disk storage

For sizing details, see also https://docs.openshift.com/container-platform/4.5/scalability_and_performance/recommended-host-practices.html#master-node-sizing_

Software Configuration for OpenShift Container Platform

Infrastructure Services (prerequisites)
• DHCP server or static IP addresses
• DNS server
• Load balancers (optional but preferred)
• Deployment server for installation (temporary)
• Internet connectivity

Operating System
• RHEL CoreOS for control plane and bootstrap nodes
• RHEL CoreOS only for compute nodes

Bootstrap and Master Nodes (control planes)
• 4 vCPUs
• 16+ GB main memory
• 120GB disk storage

Worker Nodes (+ depending on workload)
• 2+ vCPUs (1+ IFLs with SMT2 enabled)
• 8+GB main memory
• 120GB disk storage

Persistent Storage
• NFSv4 server with >100GB disk storage
• 100GB for the internal registry at minimum

Reference about OCP cluster limits:
https://docs.openshift.com/container-platform/4.5/scalability_and_performance/planning-your-environment-according-to-object-maximums.html

Minimum Architecture Overview – Network Option 1

Use a single vNIC for z/VM guest virtual machines:
• Direct-attached OSA or RoCE to each guest virtual machine

RHOCP uses this one vNIC for two networks:
• External communication
• Internal communication: software-defined network for Kubernetes pod communication

(Diagram: three OCP control plane nodes and two compute nodes on RHEL CoreOS in one z/VM LPAR, each attached directly to OSA/RoCE.)

Minimum Architecture Overview – Network Option 2

Use a single vNIC for z/VM guest virtual machines:
• z/VM VSwitch with OSA (optionally using link aggregation)

RHOCP uses this one vNIC for two networks:
• External communication
• Internal communication: software-defined network for Kubernetes pod communication

(Diagram: the same five-node z/VM LPAR, attached through a z/VM VSwitch to OSA.)

Minimum Architecture Overview – Disk Storage Options for Installation

Disk storage considerations:
• Minidisks are z/VM virtual resources and represent smaller chunks of a DASD; Linux sees them as individual disks (DASDs)
• Consider HyperPAV for ECKD storage
• DASDs/FCP devices can be dedicated to a z/VM guest ("pass-through")
• Consider using FCP multipath installations (future)

(Diagram: control plane and compute nodes in one z/VM LPAR, backed by minidisks and dedicated FCP/ECKD devices.)

Shared disk storage considerations

Required shared disk storage
• Internal registry (container images)

Use cases for shared disk storage
• Shared data pool for container instances (persistent container storage)
• Application or workload specific use cases

Shared disk storage options in the initial release
• NFS only

Shared disk storage options in future releases
• IBM Spectrum Scale (Beta available); register for the Cloud Native Deployment of Spectrum Scale Beta Program: https://www-355.ibm.com/technologyconnect/cna/index.xhtml
• IBM CSI Block Plugin 1.1.0: https://www.ibm.com/support/knowledgecenter/SSRQ8T_1.1.0/csi_block_storage_kc_welcome.html and https://access.redhat.com/containers/#/registry.connect.redhat.com/ibm/ibm-block-csi-operator
• Red Hat OpenShift Container Storage

News on RHOCP 4.x Performance on IBM LinuxONE III LT1

Acme Air Performance on OpenShift Container Platform (OCP) 4.2 on LinuxONE III LT1 vs. x86 Skylake

(Chart: per-core throughput ratios of 2.2x, 2.4x, 2.6x, and 2.7x.)

Achieve up to 2.7x more throughput per core and up to 2.9x lower latency on OpenShift Container Platform 4.2 on LinuxONE III using z/VM versus a comparable x86 platform using KVM, when running 12 Acme Air benchmark instances on 3 worker nodes.

(Chart: latency improvement ratios of 2.4x, 2.6x, 2.7x, and 2.9x.)

DISCLAIMER: Performance results based on IBM internal tests running the Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) on OpenShift Container Platform (OCP) 4.2.19 on LinuxONE III using z/VM versus a comparable x86 platform using KVM. On both platforms, 12 Acme Air instances were running on 3 OCP worker nodes. The z/VM and KVM guests with the OCP master nodes were configured with 4 vCPUs and 16 GB memory each. The z/VM and KVM guests with the OCP worker nodes were configured with 16 vCPUs and 32 GB memory each. Results may vary. LinuxONE III configuration: the OCP proxy server ran in a native LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode). The OCP master and worker nodes ran on z/VM 7.1 in an LPAR with 30 dedicated cores, 160 GB memory, and DASD storage. x86 configuration: the OCP proxy server ran on 4 Intel® Xeon® Gold 6126 CPUs @ 2.60GHz with Hyperthreading turned on, 64 GB memory, RHEL 8.1. The OCP master and worker nodes ran on KVM on RHEL 8.1 on 30 Intel® Xeon® Gold 6140 CPUs @ 2.30GHz with Hyperthreading turned on, 160 GB memory, and RAID5 local SSD storage.

Acme Air Density on OpenShift Container Platform (OCP) 4.2 on LinuxONE III LT1 vs. x86 Skylake

Run up to 2.5x more Acme Air benchmark instances per core on OpenShift Container Platform 4.2 on LinuxONE III using z/VM versus a comparable x86 platform using KVM, each processing an identical transaction load.

(Diagram: JMeter workload drivers on x86 servers 1-3 drive 2K transactions/sec per instance through a proxy/load balancer with 4 cores and 64 GB memory on each side. On LinuxONE III, z/VM 7.1 in an LPAR with 30 cores and 160 GB memory runs guests 1-3 as OCP workers (16 vCPUs, 32 GB memory each, 5 Acme Air instances each) and guests 4-6 as OCP masters (4 vCPUs, 16 GB memory each), totaling 30.6K transactions/sec. The compared x86 platform, KVM on RHEL 8.1 with 30 cores and 160 GB memory, runs the same guest layout with 2 Acme Air instances per worker, totaling 12.4K transactions/sec.)

DISCLAIMER: Performance results based on IBM internal tests running the Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) on OpenShift Container Platform (OCP) 4.2.19 on LinuxONE III using z/VM versus a comparable x86 platform using KVM. The z/VM and KVM guests with the OCP master nodes were configured with 4 vCPUs and 16 GB memory each. The z/VM and KVM guests with the OCP worker nodes were configured with 16 vCPUs and 32 GB memory each. Results may vary. LinuxONE III configuration: the OCP proxy server ran in a native LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode). The OCP master and worker nodes ran on z/VM 7.1 in an LPAR with 30 dedicated cores, 160 GB memory, and DASD storage. x86 configuration: the OCP proxy server ran on 4 Intel® Xeon® Gold 6126 CPUs @ 2.60GHz with Hyperthreading turned on, 64 GB memory, RHEL 8.1. The OCP master and worker nodes ran on KVM on RHEL 8.1 on 30 Intel® Xeon® Gold 6140 CPUs @ 2.30GHz with Hyperthreading turned on, 160 GB memory, and RAID5 local SSD storage.

Acme Air Scaling on OpenShift Container Platform (OCP) 4.2 on LinuxONE III LT1

Scale out the Acme Air benchmark to 24 virtual CPUs with up to 88% scaling efficiency on an OpenShift Container Platform 4.2 worker node on LinuxONE III LT1 using z/VM.

(Diagram: a JMeter workload driver on an x86 server drives, via a proxy/load balancer LPAR (4 cores, 64 GB memory), one OCP worker guest (2-24 vCPUs, 64 GB memory, running 2-24 Acme Air instances) and three OCP master guests (4 vCPUs, 16 GB memory each) on z/VM 7.1 in an LPAR with 30 cores and 160 GB memory on LinuxONE III LT1.)

DISCLAIMER: Performance results based on IBM internal tests running the Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) on OpenShift Container Platform (OCP) 4.2.19 on LinuxONE III LT1 using z/VM. The z/VM guests with the OCP master nodes were configured with 4 vCPUs and 16 GB memory each. The z/VM guest with the OCP worker node was configured with 2-24 vCPUs and 64 GB memory. Per vCPU, one Acme Air instance was running on the OCP worker node. The Acme Air instances were driven remotely from JMeter 5.2.1. Results may vary. LinuxONE III LT1 configuration: the OCP proxy server ran in a native LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode). The OCP master and worker nodes ran on z/VM 7.1 in an LPAR with 30 dedicated cores, 160 GB memory, and DASD storage.
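Scaling efficiency here means throughput at N vCPUs relative to N times the single-vCPU throughput; a quick check of the arithmetic with illustrative numbers (not taken from the benchmark report):

```shell
# efficiency = T(N) / (N * T(1)); an 88% figure corresponds to, e.g.,
# 42240 tps at 24 vCPUs against 2000 tps at 1 vCPU (illustrative values)
t1=2000
t24=42240
awk -v t1="$t1" -v tn="$t24" -v n=24 \
    'BEGIN { printf "scaling efficiency: %.0f%%\n", 100 * tn / (n * t1) }'
# prints "scaling efficiency: 88%"
```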

What’s next?

What could be next*…?

Red Hat OpenShift Container Platform
• RHOCP release cadence with x86
• Red Hat OpenShift Service Mesh (Istio) to connect, secure, control, and observe services
• Red Hat OpenShift Pipelines and Serverless
• CodeReady Workspaces

Storage Support
• IBM Spectrum Scale
• Red Hat OpenShift Container Storage (OCS) based on Ceph, Rook, and NooBaa

* All statements regarding IBM's and Red Hat's future direction and intent are subject to change or withdrawal without notice at the sole discretion of IBM, and represent goals and objectives only.

Questions?

Thank you

Hendrik Brückner
[email protected]
Manager Linux on Z Development & Service
Red Hat Partner Engineer for RHEL and RHOCP on Z & LinuxONE

Wilhelm Mild
[email protected]
IBM Executive IT Architect
Integration Architectures for Container, Mobile, IBM Z, and Linux

© Copyright IBM Corporation 2020. All rights reserved. The information contained in these materials is provided for informational purposes only, and is provided AS IS without warranty of any kind, express or implied. Any statement of direction represents IBM’s current intent, is subject to change or withdrawal, and represents only goals and objectives. IBM, the IBM logo, and ibm.com are trademarks of IBM Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available at Copyright and trademark information.

More Information

IBM LinuxONE Community Cloud – Free access to virtual servers on LinuxONE https://developer.ibm.com/components/ibm-linuxone/gettingstarted/

Red Hat OCP portal: cloud.redhat.com

Install OCP on IBM Z
https://docs.openshift.com/container-platform/4.5/installing/installing_ibm_z/installing-ibm-z.html

Step-by-step sample installations and environment setup
https://www.openshift.com/blog/installing-ocp-in-a-mainframe-z-series
https://www.openshift.com/blog/red-hat-openshift-installation-process-experiences-on-ibm-z-linuxone

Ross Mauri's Blog http://www.ibm.com/blogs/systems/red-hat-openshift-now-available-ibm-z-linuxone

IBM Systems Magazine Article https://ibmsystemsmag.com/01/2020/cutting-edge-ibm-z-innovations

IDC Whitepaper https://www.ibm.com/it-infrastructure/linuxone/capabilities/linux-containers

OpenShift Container Platform (OCP) Acme Air Performance on OpenShift Container Platform 4.2 on LinuxONE III LT2 vs. x86 Skylake

[Chart: throughput-per-core advantage of LinuxONE III LT2 over x86, ranging from 1.9x to 2.4x]

Achieve up to 2.4x more throughput per core and up to 2.5x lower latency on OpenShift Container Platform 4.2 on LinuxONE III LT2 using z/VM versus a compared x86 platform using KVM, when running 12 Acme Air benchmark instances on 3 worker nodes

DISCLAIMER: Performance result is extrapolated based on a clock ratio of 0.8654 from IBM internal tests running the Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) on OpenShift Container Platform (OCP) 4.2.19 on LinuxONE III LT1 using z/VM versus a compared x86 platform using KVM. On both platforms, 12 Acme Air instances were running on 3 OCP Worker nodes. The z/VM and KVM guests with the OCP Master nodes were configured with 4 vCPUs and 16 GB memory each. The z/VM and KVM guests with the OCP Worker nodes were configured with 16 vCPUs and 32 GB memory each. Results may vary. LinuxONE III LT1 configuration: The OCP Proxy server ran in a native LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode). The OCP Master and Worker nodes ran on z/VM 7.1 in an LPAR with 30 dedicated cores, 160 GB memory, and DASD storage. x86 configuration: The OCP Proxy server ran on 4 Skylake Intel® Xeon® Gold CPUs @ 2.60GHz with Hyperthreading turned on, 64 GB memory, RHEL 8.1. The OCP Master and Worker nodes ran on KVM on RHEL 8.1 on 30 Skylake Intel® Xeon® Gold CPUs @ 2.30GHz with Hyperthreading turned on, 160 GB memory, and RAID5 local SSD storage.
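The per-core comparison behind the 2.4x headline reduces to simple arithmetic: total transactions per second divided by the cores driving them, on each platform (both Master/Worker LPARs used 30 cores in these tests). A sketch with hypothetical totals chosen only to illustrate the ratio, not the measured values:

```python
def throughput_per_core(total_tps: float, cores: int) -> float:
    """Throughput normalized by the number of cores serving it."""
    return total_tps / cores

# Hypothetical totals for illustration (not measured values):
linuxone = throughput_per_core(total_tps=28_800, cores=30)  # 960 tx/s per core
x86 = throughput_per_core(total_tps=12_000, cores=30)       # 400 tx/s per core
print(f"{linuxone / x86:.1f}x")  # → 2.4x
```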

OpenShift Container Platform (OCP) Acme Air Container Density on OpenShift Container Platform 4.2 on LinuxONE III LT2 versus x86 Skylake

Run up to 2x more Acme Air benchmark instances per core on OpenShift Container Platform 4.2 on LinuxONE III LT2 using z/VM versus a compared x86 platform using KVM, each processing an identical transaction load: 27.3k transactions/sec in total and 2.3k transactions/sec per instance on LinuxONE III LT2, versus 12.4k transactions/sec in total and 2.1k transactions/sec per instance on x86.

[Diagram: on each platform, JMeter workload drivers on x86 servers 1 - 3 drive Acme Air instances through a Proxy/Balancer (LinuxONE III LT2 LPAR or x86 server, 4 cores and 64 GB memory each); on LinuxONE III LT2, z/VM 7.1 in an LPAR (30 cores, 160 GB memory) hosts 3 OCP Worker guests (16 vCPUs, 32 GB memory, 4 Acme Air instances each) and 3 OCP Master guests (4 vCPUs, 16 GB memory each); on the compared x86 platform, KVM on RHEL 8.1 (30 cores, 160 GB memory) hosts 3 OCP Worker guests (16 vCPUs, 32 GB memory, 2 Acme Air instances each) and 3 OCP Master guests (4 vCPUs, 16 GB memory each)]

DISCLAIMER: Performance result is extrapolated based on a clock ratio of 0.8654 from IBM internal tests running the Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) on OpenShift Container Platform (OCP) 4.2.19 on LinuxONE III LT1 using z/VM versus a compared x86 platform using KVM. The z/VM and KVM guests with the OCP Master nodes were configured with 4 vCPUs and 16 GB memory each. The z/VM and KVM guests with the OCP Worker nodes were configured with 16 vCPUs and 32 GB memory each. Results may vary. LinuxONE III LT1 configuration: The OCP Proxy server ran in a native LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode). The OCP Master and Worker nodes ran on z/VM 7.1 in an LPAR with 30 dedicated cores, 160 GB memory, and DASD storage. x86 configuration: The OCP Proxy server ran on 4 Skylake Intel® Xeon® Gold CPUs @ 2.60GHz with Hyperthreading turned on, 64 GB memory, RHEL 8.1. The OCP Master and Worker nodes ran on KVM on RHEL 8.1 on 30 Skylake Intel® Xeon® Gold CPUs @ 2.30GHz with Hyperthreading turned on, 160 GB memory, and RAID5 local SSD storage.
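The slide's own totals back out the instance counts and the 2x density claim: total throughput divided by per-instance throughput gives the number of instances, and dividing by the 30 cores of each hypervisor LPAR gives instances per core. A quick check:

```python
def instances(total_tps: float, per_instance_tps: float) -> int:
    """Approximate instance count implied by total and per-instance throughput."""
    return round(total_tps / per_instance_tps)

linuxone = instances(27_300, 2_300)  # ≈ 12 instances
x86 = instances(12_400, 2_100)       # ≈ 6 instances

# Both hypervisor LPARs had 30 cores, so the per-core density differs by 2x:
ratio = (linuxone / 30) / (x86 / 30)
print(f"{ratio:.0f}x")  # → 2x
```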

OpenShift Container Platform (OCP) Acme Air Scaling on OpenShift Container Platform 4.2 on LinuxONE III LT2

Scale-out the Acme Air benchmark to 24 virtual CPUs with up to 88% scaling efficiency on an OpenShift Container Platform 4.2 worker node on LinuxONE III LT2 using z/VM

[Diagram: a JMeter workload driver on an x86 server drives 2 - 24 Acme Air instances on one OCP Worker guest (2 - 24 vCPUs, 64 GB memory) through a Proxy/Balancer in a separate LPAR (4 cores, 64 GB memory); three OCP Master guests (4 vCPUs, 16 GB memory each) run alongside on z/VM 7.1 in an LPAR (30 cores, 160 GB memory) on LinuxONE III LT2]

DISCLAIMER: Performance result is extrapolated based on a clock ratio of 0.8654 from IBM internal tests running the Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) on OpenShift Container Platform (OCP) 4.2.19 on LinuxONE III LT1 using z/VM. The z/VM guests with the OCP Master nodes were configured with 4 vCPUs and 16 GB memory each. The z/VM guest with the OCP Worker node was configured with 2 - 24 vCPUs and 64 GB memory. Per vCPU, one Acme Air instance was running on the OCP Worker node. The Acme Air instances were driven remotely from JMeter 5.2.1. Results may vary. LinuxONE III LT1 configuration: The OCP Proxy server ran in a native LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode). The OCP Master and Worker nodes ran on z/VM 7.1 in an LPAR with 30 dedicated cores, 160 GB memory, and DASD storage.

OpenShift Container Platform (OCP) Acme Air Performance on OpenShift Container Platform 4.2 on LinuxONE III LT1 vs. x86 Skylake

Benchmark Setup
– 3 OpenShift Container Platform (OCP) Master and 3 Worker nodes on LinuxONE III under z/VM versus on x86 under KVM
– Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) instances placed manually on the OCP Worker nodes such that each OCP Worker node ran the same number of instances
– Acme Air instances were driven remotely from 3 x86 servers running JMeter 5.2.1

System Stack
– LinuxONE III
  • LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode), running the OCP Proxy server
  • LPAR with 30 dedicated cores, 160 GB memory, DASD storage, running z/VM 7.1
    – 3 guests with 4 vCPUs, 16 GB memory, each running an OCP Master
    – 3 guests with 16 vCPUs, 32 GB memory, each running an OCP Worker
  • OpenShift Container Platform (OCP) 4.2.19
– x86
  • 4 Intel® Xeon® Gold 6126 CPUs @ 2.60GHz with Hyperthreading turned on, 64 GB memory, RHEL 8.1, running the OCP Proxy server
  • 30 Intel® Xeon® Gold 6140 CPUs @ 2.30GHz with Hyperthreading turned on, 160 GB memory, running KVM on RHEL 8.1
    – 3 guests with 4 vCPUs, 16 GB memory, each running an OCP Master
    – 3 guests with 16 vCPUs, 32 GB memory, each running an OCP Worker
  • OpenShift Container Platform (OCP) 4.2.19

[Diagram: JMeter workload drivers on x86 servers 1 - 3 drive the Acme Air instances through a Proxy/Balancer on each platform]

OpenShift Container Platform (OCP) Acme Air Performance on OpenShift Container Platform 4.2 on LinuxONE III LT2 vs. x86 Skylake

Benchmark Setup
– 3 OpenShift Container Platform (OCP) Master and 3 Worker nodes on LinuxONE III LT2 under z/VM versus on x86 under KVM
– Acme Air microservice benchmark (https://github.com/blueperf/acmeair-mainservice-java) instances placed manually on the OCP Worker nodes such that each OCP Worker node ran the same number of instances
– Acme Air instances were driven remotely from 3 x86 servers running JMeter 5.2.1

System Stack
– LinuxONE III LT2
  • LPAR with 4 dedicated cores, 64 GB memory, RHEL 8.1 (SMT mode), running the OCP Proxy server
  • LPAR with 30 dedicated cores, 160 GB memory, DASD storage, running z/VM 7.1
    – 3 guests with 4 vCPUs, 16 GB memory, each running an OCP Master
    – 3 guests with 16 vCPUs, 32 GB memory, each running an OCP Worker
  • OpenShift Container Platform (OCP) 4.2.19
– x86
  • 4 Skylake Intel® Xeon® Gold CPUs @ 2.60GHz with Hyperthreading turned on, 64 GB memory, RHEL 8.1, running the OCP Proxy server
  • 30 Skylake Intel® Xeon® Gold CPUs @ 2.30GHz with Hyperthreading turned on, 160 GB memory, running KVM on RHEL 8.1
    – 3 guests with 4 vCPUs, 16 GB memory, each running an OCP Master
    – 3 guests with 16 vCPUs, 32 GB memory, each running an OCP Worker
  • OpenShift Container Platform (OCP) 4.2.19

[Diagram: JMeter workload drivers on x86 servers 1 - 3 drive the Acme Air instances through a Proxy/Balancer on each platform]

Minimum System Requirements

Hardware
• 1 LPAR with 3 IFLs supporting SMT2
• 1 OSA and/or RoCE card

Operating System
• z/VM 7.1

Capacity
• 3 VMs for OCP Control Plane Nodes
• 2 VMs for OCP Compute Nodes
• 1 VM for temporary OCP Bootstrap Node

Memory
• 16 GB for OCP Control Plane Nodes
• 8 GB for OCP Compute Nodes
• 16 GB for OCP Bootstrap Node (temporary)

Disk storage
• FICON attached disk storage (DASDs): minidisks, fullpack minidisks, or dedicated DASDs
• FCP attached disk storage

Network
• Single z/VM virtual NIC in layer 2 mode, one of:
  • Direct-attached OSA, HiperSockets, or RoCE
  • z/VM VSwitch
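Tallying the minimum configuration above, a small sketch that sums the VM count and memory footprint, and the steady-state memory once the temporary bootstrap node is reclaimed after installation (values taken from the slide):

```python
# Minimum z/VM guest plan from the requirements above:
# (role, count, memory per VM in GB)
nodes = [
    ("control-plane", 3, 16),
    ("compute",       2, 8),
    ("bootstrap",     1, 16),  # temporary, removed after install
]

total_vms = sum(count for _, count, _ in nodes)
total_mem = sum(count * gb for _, count, gb in nodes)
steady_mem = total_mem - 16  # bootstrap memory reclaimed after install

print(total_vms, total_mem, steady_mem)  # → 6 80 64
```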
