
Linux on IBM Z: What’s New

Stefan Raspl, IBM Research & Development Germany

January 29, 2020


Agenda

● Linux on IBM Z Distributions
● IBM z15 and LinuxONE III
● Kernel & Other Packages
● Containers
● KVM

Supported Linux Distributions

See www.ibm.com/systems/z/os/linux/resources/testedplatforms.html for latest updates and details.

Linux on IBM Z Distributions: SUSE

● SUSE Linux Enterprise Server 11

– 03/2009 SLES11 GA: Kernel 2.6.27, GCC 4.3.3

– 07/2015 SLES11 SP4: Kernel 3.0, GCC 4.3.4

– EOS 31 Mar. 2019; LTSS: 31 Mar. 2022

● SUSE Linux Enterprise Server 12

– 10/2014 SLES12 GA: Kernel 3.12, GCC 4.8

– 12/2019 SLES12 SP5: Kernel 4.12, GCC 4.8

– EOS 31 Oct. 2024; LTSS: 31 Oct. 2027

● SUSE Linux Enterprise Server 15

– 07/2018 SLES 15 GA: Kernel 4.12, GCC 7.1 / 7.3

– 06/2019 SLES15 SP1: Kernel 4.12, GCC 7.3 / 8.2

– EOS 31 July 2028; LTSS: 31 July 2031

Linux on IBM Z Distributions: Red Hat

● Red Hat Enterprise Linux 5

– 03/2007 RHEL 5 GA: Kernel 2.6.18, GCC 4.1.0

– 09/2014 RHEL 5 Update 11

– EOS 31 Mar. 2017; ELS: 30 Nov. 2020

● Red Hat Enterprise Linux 6

– 11/2010 RHEL 6 GA: Kernel 2.6.32, GCC 4.4.0

– 06/2018 RHEL 6 Update 10

– EOS 30 Nov. 2020; ELS: 30 June 2024

● Red Hat Enterprise Linux 7

– 06/2014 RHEL 7 GA: Kernel 3.10, GCC 4.8

– 08/2019 RHEL 7 Update 7

– EOS 30 Jun. 2024; ELS: tbd

● Red Hat Enterprise Linux 8

– 05/2019 RHEL 8 GA: Kernel 4.18, GCC 8.2.1

– 11/2019 RHEL 8 Update 1

– EOS: May 2029; ELS: tbd

● For further details on the RHEL life cycle, see https://access.redhat.com/support/policy/updates/errata

Linux on IBM Z Distributions: Ubuntu

● Ubuntu 16.04 LTS (Xenial Xerus)

– 04/2016 Ubuntu 16.04 GA (LTS release): Kernel 4.4, GCC 5.3.0

– 02/2019 Ubuntu 16.04.6 LTS

– EOS: April 2021; ESM: April 2024

● Ubuntu 18.04 LTS (Bionic Beaver)

– 04/2018 Ubuntu 18.04 GA (LTS release): Kernel 4.15, GCC 7.2.0

– 08/2019 Ubuntu 18.04.3: Kernel 4.15/4.18, GCC 7.2.0

– EOS: April 2023; ESM: April 2028

● Lifecycle

– Regular releases every 6 months, supported for 9 months

– LTS releases every 2 years, supported for 5 years

– The LTS enablement stack provides newer kernels within LTS releases

– http://www.ubuntu.com/info/release-end-of-life

IBM z15 & LinuxONE III

IBM z15 Overview

● 19” industry standard form factor

● New on-chip compression acceleration

● 14 nm SOI technology

● 17 layers of metal, as on IBM z14

● 5.2 GHz

● 12 cores per CP chip

● Increased cache sizes: 2x L2 on-chip, 1.4x L4 (new)

● 14% single-thread performance improvement

● 40 TB max memory

● 190 usable cores

Balanced system design*

[Chart: raw I/O bandwidth*** (GBps), PCI* for 1-way, memory (TB), and n-way** across recent IBM Z generations, growing to 1152 GBps, a PCI of 2055, 40 TB memory, and 190 usable cores on z15.]

* PCI tables are NOT adequate for making comparisons of IBM Z processors; additional capacity planning is required.
** Number of PU cores for customer use.
*** Theoretical max. I/O bandwidth.

IBM Z – Processor Roadmap

[Diagram: processor generations z196 (45 nm, 9/2010), zEC12 (32 nm, 8/2012), z13 (22 nm, 1/2015), z14 (14 nm, 7/2017), and z15 (14 nm, 9/2019), with a CP chip floor plan. Generation highlights range from out-of-order execution, RAIM, water cooling, and enhanced cryptography (CPACF) in the earlier generations, through transactional memory, 2 GB page support, and the PCIe I/O fabric (zEC12), SIMD instructions and two-thread SMT (z13), and pause-less garbage collection for enterprise-scale Java plus Virtual Flash Memory and pervasive encryption (z14), to the z15 with an improved out-of-order architecture, improved and enlarged caches, low-latency I/O for acceleration of transaction processing for Db2 on z/OS, and the IBM Integrated Accelerator for zEDC (on-chip DEFLATE compression support).]

z15 Toleration Support: Linux Distros, z/VM and KVM

. Linux distributions
– Red Hat RHEL 8.0 (z stream if needed)
– Red Hat RHEL 7.6.z
– Red Hat RHEL 6.10.z
– SUSE SLES 15 SP1 (maintweb if needed)
– SUSE SLES 12 SP4 maintweb
– SUSE SLES 11 SP4 maintweb
– Ubuntu 18.04 LTS
– Ubuntu 16.04 LTS

. z/VM Hypervisor
– z/VM 7.1
– z/VM 6.4

. KVM Hypervisor
– Red Hat RHEL 7.6.z alt
– Red Hat RHEL 8.0 (z stream if needed)
– SUSE SLES 12 SP4 maintweb
– SUSE SLES 15 SP1 (maintweb if needed)
– Ubuntu 18.04 LTS
– Ubuntu 16.04 LTS

Install z15 with the Linux environment you use today!

IBM z15 Support: New Vector Instructions

● Reported with new feature flags in /proc/cpuinfo (a quick check follows below):
– vxp
– vxe2

● Examples for use of the new vector instructions:
– Vector alignment hints
– Vector byte and element swaps
– Vector substring search in strstr() and memmem()

● Exploited (among others) in:
– GCC 9.1
– glibc 2.30
– LLVM 9.0.0
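A quick check whether the running kernel reports the new facilities (output abbreviated and illustrative):

# grep features /proc/cpuinfo
features : esan3 zarch stfle msa ... vx vxd vxe vxe2 vxp ...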

IBM z15 Support: Deflate

● Data compression and decompression through a new instruction

● Reported with a new feature flag in /proc/cpuinfo: dflt

● Compression equivalent to gzip -1 (-1 is fastest, -9 slowest, default is -6)

● Can be exploited e.g. by zlib

● Compressing data with zlib on an IBM z15 with 4 processors is up to 42x faster than software compression

[Chart: compression time with 4 IFLs, minigzip -1 with software compression vs. minigzip -1 with the Integrated Accelerator for zEDC, for the source data files E.coli, bible.txt, world192.txt, and Canterbury.tar; speedups between 24x and 42x.]
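A quick way to verify the facility and gauge the effect; this assumes a zlib/minigzip build with DFLTCC support and the DFLTCC environment switch some distributions use to toggle it (both are assumptions, check your distribution):

# grep features /proc/cpuinfo | grep -o dflt
dflt
# time DFLTCC=0 minigzip -1 < testdata > /dev/null     (software deflate)
# time DFLTCC=1 minigzip -1 < testdata > /dev/null     (on-chip deflate)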

IBM z15 Support: Deflate (continued)

● Note: The sequence of compression and encryption is essential! Compress first, then encrypt; well-encrypted data is effectively incompressible.

IBM z15 Support: CPACF

● New Message Security Assist (MSA9) for Elliptic Curve Cryptography (ECC)

● Supports:
– message authentication
– generation of elliptic curve keys
– scalar multiplication

● Used with the SSL/TLS protocol:
– secures client-server network connections
– the handshake establishes the secure connection

● TLS v1.3 supports ECDH (key exchange) and ECDSA (signatures)

● Supported curves:
– ECDSA (sign/verify): P256, P384, P521, Ed25519, Ed448
– ECDH (key exchange): P256, P384, P521, X25519, X448

● ECC support is also available with the Crypto Express (CEX) CCA co-processor
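The throughput comparisons on the following charts are based on OpenSSL's built-in benchmark; the invocations look roughly as follows, assuming an OpenSSL 1.1.1 build that exploits CPACF:

# openssl speed -multi 8 ecdhp256      (ECDH key exchange, P-256)
# openssl speed -multi 8 ecdsap256     (ECDSA sign/verify, P-256)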

IBM z15 Support: CPACF (continued)

Throughput comparison, OpenSSL speed test for ECDH-ECDSA:

[Chart: OpenSSL 1.1.1c, ECDH operations per second (P256) for 1, 4, and 8 threads, z14 vs. z15: up to 20x more key exchange operations on a z15 with PU hardware support for ECC.]

IBM z15 Support: CPACF (continued)

Throughput comparison, OpenSSL speed test for ECDH-ECDSA:

[Chart: OpenSSL 1.1.1c, ECDSA sign operations per second (P256) for 1, 4, and 8 threads, z14 vs. z15: up to 38x more sign operations on a z15 with PU hardware support for ECC.]

IBM z15 Support: CPACF (continued)

Throughput comparison, OpenSSL speed test for ECDH-ECDSA:

[Chart: OpenSSL 1.1.1c, ECDSA verify operations per second (P256) for 1, 4, and 8 threads, z14 vs. z15: up to 10x more verify operations on a z15 with PU hardware support for ECC.]

IBM z15 Support: Secure Boot for SCSI IPL [19.10, 8.1]

. Part of the Pervasive Encryption effort

. Ensures that only code is loaded during IPL that is:
– signed by a trusted distribution vendor (currently: Red Hat, SUSE, or Canonical)
– unmodified

. Kernel image and zipl boot record must be signed

. The zipl tool creates signature entries for SCSI IPL

. A new switch on the HMC enables secure boot

. Firmware checks the signatures and stops the IPL on mismatch

. /sys/firmware/ipl/has_secure indicates support

. /sys/firmware/ipl/secure indicates an IPL using secure boot

. zipl option secure="auto/0/1" (see the sketch below):
– 0: disable secure boot
– 1: enforce secure boot
– auto: enable secure boot if the system supports it and image/stage3 are signed

. Support available in kernel 5.3
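A short sketch for checking and enabling secure boot from Linux; the zipl call assumes an s390-tools version that accepts the secure option on the command line:

# cat /sys/firmware/ipl/has_secure     (1: machine supports secure boot)
# cat /sys/firmware/ipl/secure         (1: the current IPL was performed securely)
# zipl --secure auto -V                (re-writes the boot record, using signatures if present)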

IBM z15 Support: I/O Features

● (New) Crypto Express7S (CEX7S)
– Toleration: treated as a CEX6S
– Supported by the latest releases of RHEL 7, RHEL 8, SLES 12, SLES 15, and Ubuntu 18.04

● FICON Express16SA
– Same performance as FICON Express16S+
– Exploited transparently, no distro support required

● RoCE Express2.1, 10 and 25 GbE
– (New) Now up to 16 features per system

● OSA-Express7S 25 GbE SR1.1
– (New) 10 GbE and 1 GbE features now available in addition to 25 GbE

Competitive Performance: PostgreSQL on z15 versus x86 Skylake

Run the pgBench 9.6.1 read-only benchmark on PostgreSQL 11.1 with up to 2.3x more throughput on 8 cores on a z15 LPAR versus a compared x86 platform.

[Chart: throughput ratios of 1.9x to 2.3x across core counts.]

DISCLAIMER: Performance results based on IBM internal tests running pgBench (read-only) 9.6.1 benchmark (32 threads, 288 clients) remotely against PostgreSQL 11.1. PostgreSQL database size was 100 GB. Results may vary. z15 configuration: LPAR with 8 dedicated IFLs, 64 GB memory, 40 GB DASD storage, database in 160 GB LUN on FlashSystem 900, SLES 12 SP4 (SMT mode) running PostgreSQL 11.1. x86 configuration: 8 Intel® Xeon® Gold 6140 CPU @ 2.30GHz with Hyperthreading turned on, 64 GB memory, database in 160 GB RAID5 local SSD storage, SLES12 SP4 running PostgreSQL 11.1.

Competitive Performance: PostgreSQL on z15 versus x86 Skylake (Setup)

Benchmark Setup:
– Ran the pgBench 9.6.1 workload driver remotely on an x86 blade server with 32 threads
  • read-only with 72-288 clients
  • write-only with 112-448 clients
– PostgreSQL database size 100 GB
– Used 8 GB shared buffers for PostgreSQL and turned on huge pages on both platforms

System Stack:
– z15:
  • LPAR with 2-8 dedicated IFLs, 64 GB memory, running SLES 12 SP4 with SMT enabled
  • PostgreSQL database on a 160 GB LUN on FlashSystem 900
  • PostgreSQL 11.1
– x86:
  • 2-8 Intel Xeon Gold 6140 CPU @ 2.30GHz with Hyperthreading turned on, 64 GB memory, running SLES 12 SP4
  • PostgreSQL database on 160 GB RAID5 local SSDs
  • PostgreSQL 11.1

[Diagram: pgBench on an x86 blade server driving PostgreSQL over a 10 Gbit/s network; the z15 LPAR is backed by FlashSystem 900, the x86 server by local SSDs.]

Competitive Performance: MongoDB on z15 versus x86 Skylake

Run the YCSB 0.15.0 read-only benchmark with up to 2.1x more throughput and the YCSB 0.15.0 write-heavy benchmark with up to 2x more throughput on MongoDB 4.0.6 on 2 cores on a z15 LPAR versus a compared x86 platform.

[Chart: throughput ratios of 1.7x to 2.1x across configurations.]

DISCLAIMER: Performance results based on IBM internal tests running YCSB 0.15.0 (read-only, write-heavy) from remote x86 servers on MongoDB Enterprise Release 4.0.6. MongoDB database size was 50 GB and record size 1 KB. Results may vary. z15 configuration: LPAR with 2 dedicated IFLs, 128 GB memory, 120 GB FlashSystem 900 storage, SLES 12 SP4 (SMT mode) running MongoDB, driven remotely by YCSB using 2 x86 servers with total 256 threads. x86 configuration: 2 Intel® Xeon® Gold 6140 CPU @ 2.30GHz with Hyperthreading turned on, 128 GB memory, 2 TB local RAID5 SSD storage, SLES12 SP4 running MongoDB, driven remotely by YCSB using 1 x86 server with total 256 threads.

Competitive Performance: MongoDB on z15 versus x86 Skylake (Setup)

Benchmark Setup:
– Ran the YCSB 0.15.0 benchmark driver remotely on x86 blade servers with 256 threads in total (4 drivers with 64 threads each)
  • read-only (100% read)
  • write-heavy (50% read, 50% update)
– MongoDB database size 50 GB with 1 KB records

System Stack:
– z15:
  • LPAR with 2-8 dedicated IFLs, 128 GB memory, running SLES 12 SP4 with SMT enabled
  • 120 GB LUN on FlashSystem 900
  • MongoDB 4.0.6 with journaling
– x86:
  • 2-8 Intel Xeon Gold 6140 CPU @ 2.30GHz with Hyperthreading turned on, 128 GB memory, SLES 12 SP4
  • 2 TB local RAID5 SSD storage
  • MongoDB 4.0.6 with journaling

[Diagram: four YCSB drivers with 64 threads each on two x86 blades, driving MongoDB over 10 Gbit/s links.]

Competitive Performance: WebSphere Application Server Liberty on z15 versus x86 Skylake

The DayTrader 7 benchmark on WebSphere Application Server Liberty delivers up to 2.6x more throughput on 2 cores on a z15 LPAR versus a compared x86 platform.

[Chart: throughput ratios of 2x to 2.6x across core counts.]

DISCLAIMER: Performance results based on IBM internal tests running DayTrader 7 web application benchmark on WebSphere Application Server Liberty (WAS Liberty) 19.0.0.3 with IBM Java 8.0.5.36 (SR5 FP36). Database Db2 LUW 11.1.1.4 located on the same system was used to persist application data. Database size was 4 GB. Results may vary.

Competitive Performance: WebSphere Application Server Liberty on z15 versus x86 Skylake (Setup)

Benchmark Setup:
– DayTrader 7 benchmark (15000 users, 10000 stocks)
– Ran the Apache JMeter 3.3 workload driver remotely from 2 x86 blade servers, each trading for 7500 users
– Db2 database size 4 GB

System Stack:
– z15:
  • LPAR with 2-16 dedicated IFLs, 64 GB memory, SLES 12 SP4 with SMT enabled
  • WAS Liberty 19.0.0.3 with IBM Java 8.0.5.36 (SR5 FP36) bound to half of the IFLs
  • Db2 11.1.1.4 bound to half of the IFLs
  • Db2 database on DS8K DASD storage
– x86:
  • 2-16 Intel Xeon Gold 6140 CPU @ 2.30GHz with Hyperthreading turned on, 64 GB memory, SLES 12 SP4
  • WAS Liberty 19.0.0.3 with IBM Java 8.0.5.36 (SR5 FP36) bound to half of the cores
  • Db2 11.1.1.4 bound to half of the cores
  • Db2 database on local SSD storage

[Diagram: Apache JMeter on two x86 blade servers driving DayTrader on WAS Liberty with Db2 over a 10 Gbit/s network, on the z15 LPAR and on the x86 server.]

Linux Workload Performance on IBM z15

Scale-out Performance:
● NGINX web server consolidation under z/VM on z15
● Consolidation of x86 servers under z/VM
● Scale-out with Docker under z/VM: NGINX container density versus x86 and versus z14
● Scale-out with Docker under KVM: NGINX KVM guest density versus x86 and versus z14
● NGINX z/VM guest density versus z14

Scale-out Performance: NGINX Web Server Consolidation under z/VM on z15

Each NGINX web server processed 618K HTTPS transactions per second. Execute up to 1 trillion HTTPS transactions per day on a single z15 server.

[Diagram: five z15 LPARs (36-39 IFLs, 256 GB memory each) under z/VM 7.1, each with 4 SLES 12 SP4 guests (16-20 vCPUs, 64 GB memory each) running NGINX; 20 x86 systems run the wrk2 workload driver, one for each z/VM guest.]

DISCLAIMER: Performance result is extrapolated from IBM internal tests running in a z15 LPAR with 36 or 39 dedicated IFLs and 256 GB memory, a z/VM 7.1 instance in SMT mode with 4 guests running SLES 12 SP4. With 36 IFLs each guest was configured with 18 vCPUs. With 39 IFLs 3 guests were configured with 20 vCPUs and 1 guest was configured with 18 vCPUs. Each guest was configured with 64 GB memory, had a direct-attached OSA-Express6S adapter, and was running a dockerized NGINX 1.15.9 web server. The guest images were located on a FICON-attached DS8886. Each NGINX server was driven remotely by a separate x86 blade server with 24 Intel Xeon E5-2697 v2 @ 2.7GHz cores and 256 GB memory, running the wrk2 4.0.0.0 benchmarking tool (https://github.com/giltene/wrk2) with 48 parallel threads and 1024 open HTTPS connections. The transferred web pages had a size of 644 bytes.

Scale-out Performance: Consolidation of x86 Servers Running NGINX under z/VM on z15

Consolidate 100 x86 servers, each processing HTTPS transactions with a CPU utilization of 23%, onto a single z15 server processing the entire HTTPS transaction load at a CPU utilization of 88%.

[Diagram: compared x86 platform (6 cores, 128 GB memory) serving 9 HTTPS transactions/sec at 23% CPU utilization, versus 100 z/VM guests (4 vCPUs, 32 GB memory each) across 5 LPARs (36 IFLs, 768 GB memory each) on z15 serving 100 x 9 HTTPS transactions/sec at 88% CPU utilization.]

DISCLAIMER: Performance result is extrapolated from IBM internal tests running the NGINX 1.15.9 web server under z/VM in a z15 LPAR versus on an x86 server. NGINX was driven remotely by an x86 blade server with 24 Intel Xeon E5-2697 v2 @ 2.7GHz cores and 256 GB memory, running the wrk2 4.0.0.0 benchmarking tool (https://github.com/giltene/wrk2) with a fixed transaction rate of 9 HTTPS transactions per second, 24 parallel threads, and 48 open HTTPS connections. The transferred web pages were 12 files of the Silesia Corpus (http://sun.aei.polsl.pl/~sdeor/index.php?page=silesia); data compression was enabled. Results may vary. z15 configuration: LPAR with 36 dedicated IFLs and 768 GB memory, running a z/VM 7.1 instance in SMT mode with 20 guests, each with 4 vCPUs and 32 GB memory, running a dockerized NGINX instance on Docker 18.09.6 on SLES 12 SP4. x86 configuration: 6 Xeon Gold 6126 CPUs @ 2.6GHz with Hyperthreading turned on, 128 GB memory, running a dockerized NGINX instance on Docker 18.09.6 on SLES 12 SP4.

Scale-out Performance: NGINX Container Density on z15 versus x86 Skylake

Run up to 2.3x more Docker containers per core on a z15 LPAR versus a compared bare-metal x86 platform, running an identical web server load: 10 HTTPS requests per second per dockerized NGINX web server at constant latency.

[Diagram: wrk2 workload drivers on x86 servers driving dockerized NGINX web servers 1-100 on a z15 LPAR (2 IFLs, 32 GB memory) versus 1-43 on the compared x86 platform (2 cores, 32 GB memory).]

DISCLAIMER: Performance results based on IBM internal tests running dockerized NGINX web servers in a z15 native LPAR compared to running them bare-metal on a compared x86 platform. Results may vary. z15 configuration: LPAR with 2 dedicated IFLs, 32 GB memory, 40 GB DASD storage, SLES 12 SP4 (SMT mode) running Docker 18.09.6 and NGINX 1.15.9. x86 configuration: 2 Intel Xeon Gold 6140 CPU @ 2.30 GHz with Hyperthreading turned on, 32 GB memory, 40 GB RAID5 local SSD storage, SLES 12 SP4 running Docker 18.09.6 and NGINX 1.15.9.

Scale-out Performance: NGINX Container Density on z15 versus z14

Run up to 37% more Docker containers per IFL on a z15 LPAR compared to a z14 LPAR, each running an identical NGINX web server load: 10 HTTPS requests per second per dockerized NGINX web server at constant latency.

[Diagram: wrk2 workload drivers on x86 servers driving dockerized NGINX web servers 1-100 on a z15 LPAR versus 1-73 on a z14 LPAR (2 IFLs, 32 GB memory each).]

DISCLAIMER: Performance results based on IBM internal tests running dockerized NGINX web servers in a z15 native LPAR compared to a z14 LPAR. NGINX was driven remotely by an x86 blade server with 24 Intel Xeon E5-2697 v2 @ 2.7GHz cores and 256 GB memory, running the wrk2 4.0.0.0 benchmarking tool (https://github.com/giltene/wrk2) with one thread and one HTTPS connection for each container. The wrk2 transaction rate was adjusted to 10 TPS for both setups. The web pages were created dynamically using PHP. Results may vary. z15 and z14 configuration: LPAR with 2 dedicated IFLs, 32 GB memory, 40 GB DASD storage, SLES 12 SP4 (SMT mode) running Docker 18.09.6 and NGINX 1.15.9.

Scale-out Performance: NGINX KVM Guest Density on z15 versus x86 Skylake

Run up to 2x more KVM guests per core on a z15 LPAR versus a compared x86 platform running identical web server loads: in total 555K HTTPS requests/sec on z15 versus 277K on x86, at 69K HTTPS requests/sec per KVM guest.

[Diagram: wrk2 workload drivers on x86 servers driving NGINX in KVM guests 1-8 (8 vCPUs, 10 GB memory each) under KVM 2.11.2 on a z15 LPAR (8 IFLs, 512 GB memory) versus guests 1-4 on the compared x86 platform (8 cores, 512 GB memory).]

DISCLAIMER: Performance results based on IBM internal tests running NGINX web servers on KVM in a z15 LPAR versus on a compared x86 platform. Each KVM guest was configured with 8 vCPUs and 10 GB memory and was running a dockerized NGINX 1.15.9 web server. z15 configuration: LPAR with 8 dedicated IFLs, 512 GB memory, 200 GB FlashSystem 900 storage, SLES 12 SP4 (SMT mode) with KVM 2.11.2. x86 configuration: 8 Intel Xeon Gold 6140 CPU @ 2.30GHz with Hyperthreading turned on, 512 GB memory, 435 GB RAID5 local SSD storage, SLES 12 SP4 with KVM 2.11.2.

Scale-out Performance: NGINX KVM Guest Density on z15 versus z14

Run up to 15% more KVM guests per core on a z15 LPAR compared to a z14 LPAR, each running an identical NGINX web server load: in total 557K HTTPS requests/sec on z15 versus 485K on z14, at 69K HTTPS requests/sec per KVM guest.

[Diagram: wrk2 workload drivers on x86 servers driving NGINX in KVM guests 1-8 on the z15 LPAR versus guests 1-7 on the z14 LPAR (8 vCPUs, 10 GB memory per guest; 8 IFLs, 256 GB memory per LPAR; KVM 2.11.2 on both).]

DISCLAIMER: Performance results based on IBM internal tests running NGINX web servers on KVM in a z15 LPAR versus a z14 LPAR. Each KVM guest was configured with 8 vCPUs and 10 GB memory and was running a dockerized NGINX 1.15.9 web server. Results may vary. z15 configuration: LPAR with 8 dedicated IFLs, 256 GB memory, 40 GB DASD storage, SLES 12 SP4 (SMT mode) with KVM 2.11.2. z14 configuration: LPAR with 8 dedicated IFLs, 256 GB memory, 40 GB DASD storage, SLES 12 SP4 (SMT mode) with KVM 2.11.2.

Scale-out Performance: NGINX z/VM Guest Density on z15 versus z14

Run up to 20% more z/VM guests per core on a z15 LPAR compared to a z14 LPAR, each running an identical NGINX web server load: in total 1078K HTTPS requests/sec on z15 versus 898K on z14, at 89K HTTPS requests/sec per z/VM guest.

[Diagram: wrk2 workload drivers on x86 servers driving NGINX in z/VM guests 1-12 on the z15 LPAR versus guests 1-10 on the z14 LPAR (4 vCPUs, 32 GB memory per guest; 12 IFLs, 768 GB memory per LPAR; z/VM 7.1 on both).]

DISCLAIMER: Performance results based on IBM internal tests running NGINX web servers on z/VM 7.1 in a z15 LPAR versus a z14 LPAR. Each z/VM guest was configured with 4 vCPUs and 32 GB memory and was running SLES 12 SP4 (SMT mode) with a dockerized NGINX 1.15.9 web server. Results may vary. z15 and z14 configuration: LPAR with 12 dedicated IFLs, 768 GB memory, running z/VM 7.1.

Kernel & Other Packages

Linux Kernel – Base IBM Z Support

● Virtually Mapped Kernel Stacks (kernel 4.20) [LPAR, z/VM, KVM]
– Allocate the kernel stack for tasks from the vmalloc space
– Use a guard page to trap kernel stack overflows instead of explicit check instructions
– Slightly faster kernel code and a ~200K size reduction of the kernel

The explicit stack check (tmll/je) disappears from every function prologue:

CONFIG_VMAP_STACK=y:
0000000000000000 <...>:
 000: c0 04 00 00 00 00   brcl 0,0
 006: eb bf f0 70 00 24   stmg %r11,%r15,112(%r15)
 00c: b9 04 00 ef         lgr %r14,%r15
 010: e3 f0 ff d0 ff 71   lay %r15,-48(%r15)
 016: e3 e0 f0 98 00 24   stg %r14,152(%r15)
 ...

vs. CONFIG_VMAP_STACK=n, CONFIG_CHECK_STACK=y:
0000000000000000 <...>:
 000: c0 04 00 00 00 00   brcl 0,0
 006: eb bf f0 70 00 24   stmg %r11,%r15,112(%r15)
 00c: a7 f1 3f c0         tmll %r15,16320
 010: b9 04 00 ef         lgr %r14,%r15
 014: a7 84 00 01         je 946
 018: e3 f0 ff d0 ff 71   lay %r15,-48(%r15)
 01e: e3 e0 f0 98 00 24   stg %r14,152(%r15)
 ...

Linux Kernel – Base IBM Z Support

● Kernel Address Space Layout Randomization (kernel 5.2) [LPAR, z/VM, KVM]
– Security improvement: makes the address of the kernel harder to predict

● CPU-MF Counters for z15 (kernel 5.3) [LPAR, z/VM, KVM]
– Adds Measurement Facility (MF) counters for ECC
– Access using the lscpumf command (a perf sketch follows below):
# lscpumf -c | fgrep ECC_
r50 ECC_FUNCTION_COUNT
r51 ECC_CYCLES_COUNT
r52 ECC_BLOCKED_FUNCTION_COUNT
r53 ECC_BLOCKED_CYCLES_COUNT
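With the counters in place, they can also be read via perf; a hypothetical invocation, assuming the counters are exposed through the cpum_cf PMU as on recent kernels:

# perf stat -e cpum_cf/ECC_FUNCTION_COUNT/,cpum_cf/ECC_CYCLES_COUNT/ openssl speed -seconds 1 ecdsap256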

Linux Kernel – Crypto Support

● zcrypt Multiple Device Nodes Support (kernel 4.20) [LPAR, z/VM, KVM]
– Adds new zcrypt device nodes with restrictions in regard to crypto cards, domains, and available IOCTLs
– The restricted zcrypt access is needed for container solutions like Docker

● Protected Key API for Random Keys (kernel 4.20) [LPAR, z/VM, KVM]
– Introduces new IOCTLs and sysfs attributes to generate random protected and random secure keys
– Useful to create encrypted paging devices with protected keys
– The path to the sysfs attribute can be added directly to /etc/crypttab (e.g. devices/virtual/misc/pkey/protkey/protkey_aes_256_xts; a sketch follows below)
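A minimal /etc/crypttab sketch for an encrypted swap device keyed from the pkey sysfs attribute; the device name is illustrative, and the cipher and key-size values follow IBM's protected-key (PAES) examples, so verify them against your distribution's documentation:

enc-swap  /dev/dasdc1  /sys/devices/virtual/misc/pkey/protkey/protkey_aes_256_xts  swap,cipher=paes-xts-plain64,size=1024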

Linux Kernel – Block Device Support

● Add zFCP Port Speed Capabilities (kernel 4.18) [LPAR, z/VM]
– Decodes and displays the channel link speed of the FCP Express 32S card
– The lszfcp -a command can be used to show the port speed

● Support DIF Independently of DIX for zFCP (kernel 5.0) [LPAR, z/VM]
– Configure DIF-only (Data Integrity Field) mode with the kernel parameters: zfcp.dif=1 zfcp.dix=0

● Thin Provisioning Base Support for DASD (kernel 5.3) [LPAR, z/VM]
– Use with DASD devices configured for thin provisioning on the storage server:
  ● Not all disk space is allocated in the storage server while the disk is empty
  ● Disk space gets allocated only when in use
– Use the dasdfmt options -M quick or --mode quick (see the sketch below):
  ● Formats only the first two tracks of the disk
  ● Significantly speeds up the formatting process
– Note: Slow write performance for the first write of each newly used track
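Two quick illustrations of the items above; the device nodes are illustrative:

# dasdfmt -b 4096 --mode quick /dev/dasdb     (formats only the first two tracks)
# lszfcp -a | grep -i speed                   (shows the decoded FCP port speed)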

Linux Kernel – Network Device Support

● Add qeth IPv6 RX/TX Checksum Offload Support (kernel 4.18) [LPAR, z/VM]
– Enables checksum offload for inbound and outbound IPv6 network traffic
– IPv6 checksum offload is available with OSA-Express4S cards

● Report 25Gbit Link Speed for qeth Devices (kernel 4.20) [LPAR, z/VM]
– Adds the link speed of OSA-Express7S cards to sysfs and ethtool (examples below)

● Improve qeth TSO Support (kernel 4.20) [LPAR, z/VM]
– Adds TCP segmentation offload for IPv6 on L3 devices, as well as TSO for L2 devices

● Improve Invalid Frame Handling (kernel 5.5) [LPAR, z/VM]
– Process valid frames within receive buffers containing invalid ones

● Support HiperSockets Multi-Write (kernel 5.5) [LPAR, z/VM]
– Send multiple frames with a single instruction
– Saves CPU cycles
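The offload and link-speed items above surface through standard ethtool; the interface name is illustrative:

# ethtool -k encbdd0 | grep -E 'checksum|segmentation'
# ethtool -K encbdd0 rx on tx on tso on
# ethtool encbdd0 | grep Speed     (reports 25000Mb/s on an OSA-Express7S 25 GbE port)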

Other Packages

● s390-tools v2.12 (12/2019) [LPAR, z/VM, KVM]
– Userspace tools for use with the Linux kernel and its device drivers on IBM Z
– Held in sync with the latest kernel releases
– v2.12 supports Linux kernel 5.4
– See the CHANGELOG for further details

● smc-tools v1.2.2 (10/2019) [LPAR, z/VM]
– Package with utilities in support of SMC-R and SMC-D (typical usage is sketched below)
– Latest changes:
  ● Support for the new API introduced with Linux kernel 5.1
  ● Added bash autocompletion support
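Typical usage of the shipped utilities; the server binary and port are illustrative:

# smcss -a                      (list SMC sockets, including TCP fallbacks)
# smc_run ./my_server 8080      (run an unmodified TCP application over SMC via preload)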

● qclib v2.0.1 (01/2020) [LPAR, z/VM, KVM]
– C library providing information on system, capacity, and virtualization layers
– Latest changes:
  ● Support for the zCX environment
  ● Attributes to query the model name in clear text

Containers

Container Tools on IBM Z

Container Tools Availability

. Docker is available and supported in:
– Ubuntu 16.04 and later
– RHEL 7.5 - 7.7 via the extras repository
– SLES 15 - SLES 15 SP1

. Podman is available and supported in:
– RHEL 7.5 and later
– SLES 15 SP1

. Docker is available as community edition for:
– Ubuntu 16.04 and later
– Fedora 28 and later
– Binaries at docker.com

Package versions (docker / podman):
– RHEL 7.5: 1.13 / 0.9.2
– RHEL 7.6: 1.13 / 1.4.4
– RHEL 7.7: 1.13 / 1.4.4
– RHEL 8: - / 1.0.0.2
– RHEL 8.1: - / 1.4.2
– SLES15: 17.09 / -
– SLES15 SP1: 18.09.1 / 1.0.1
– Ubuntu 16.04 LTS: 18.09.7 / -
– Ubuntu 18.04 LTS: 18.09.7 / -

Kata Containers – The Speed of Containers, the Security of VMs

• Run containers inside a virtual machine to improve isolation
• Released end of December 2017, IBM Z support since end of 2018
• Hosted by the OpenStack Foundation, licensed under the Apache 2.0 license
• Support for the main container engines: Docker, cri-containerd, CRI-O, and Podman
• Support for Kubernetes
• Available via Snap
• No modifications of existing tools
• Can be used in parallel to standard containers

[Diagram: Kubernetes drives a container engine in both cases; with the runc runtime, containers share the host kernel inside namespaces, while with the kata runtime each container runs inside its own KVM guest with its own guest kernel.]

Registering the runtime with Docker (/etc/docker/daemon.json):

{
  "runtimes": {
    "kata": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}

How to install container tools

• RHEL
– Docker (from the extras repository):
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# yum install docker
# systemctl start docker.service
# systemctl enable docker.service
– Podman:
# yum install podman

• SLES
– Requires registration of the containers module:
# SUSEConnect -p sle-module-containers/<version>/s390x
# zypper refresh
– Docker:
# zypper install docker
# systemctl start docker.service
# systemctl enable docker.service
– Podman:
# zypper install podman

• Ubuntu
– Docker:
# apt-get install docker.io
– Kata:
# apt-get install -y snapd snapcraft
# snap install kata-containers --classic
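A quick smoke test after installation; the alpine image resolves to its s390x variant via the multi-arch manifest on Docker Hub:

# podman run --rm docker.io/library/alpine uname -m
s390x

(docker run works the same way where Docker was installed instead.)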

Image registries and available images

Container image registries:
• https://hub.docker.com
• https://quay.io
• https://gcr.io

Available container images:
• Operating systems: Ubuntu, Alpine, Fedora, ClefOS
• Languages and runtimes: Golang, Python, Java, Perl, Ruby, PHP, Node.js, etc.
• Middleware and databases: IBM Db2, PostgreSQL, Redis, HAProxy, Apache Tomcat, MongoDB, Apache HTTP Server, GCC, etc.

Multi-arch container images

Docker 18.09+: docker buildx (experimental)

# docker buildx create --use --name <builder>
# docker buildx build --platform linux/amd64,linux/s390x --push -t <image> .
# docker buildx imagetools inspect <image>
# docker buildx stop <builder>
# docker buildx rm <builder>

Buildah 1.12.0+: buildah manifest

# buildah bud -t <image> .
# buildah manifest create <list>
# buildah manifest add <list> <image>
# buildah manifest push <list>
# buildah manifest push --all <list>
# buildah manifest inspect <list>
# buildah rmi <image>

See:
• https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images
• https://github.com/docker/buildx
• https://buildah.io/releases/2020/01/14/Buildah-version-v1.12.0.html

Container Orchestrators on Linux on IBM Z

Community editions
• Docker Swarm (from docker-ce); a minimal bootstrap sketch follows after this list
  See https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/
• Kubernetes
  ● Binaries released by the Kubernetes community
    See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    and https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
  ● Ubuntu Kubernetes Distro
    See https://ubuntu-on-big-iron.blogspot.com/2019/08/deploy-cdk-on-ubuntu-s390x.html

Enterprise products
• OpenShift (OKD) 3.11 from Sine Nomine
  See https://www.sinenomine.net/products/linux/OpenShift
• Ubuntu Kubernetes Distro (Charmed Kubernetes)
  See https://ubuntu-on-big-iron.blogspot.com/2019/08/deploy-cdk-on-ubuntu-s390x.html
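Bootstrapping a Docker Swarm follows the linked tutorial; a minimal two-node sketch with illustrative addresses:

# docker swarm init --advertise-addr 10.0.0.1
(then, on the second node, using the token printed by init)
# docker swarm join --token <token> 10.0.0.1:2377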

OpenShift Container Platform on IBM Z

OCP 4.2 initial release

• Minimum configuration: 1 LPAR
• z/VM hypervisor: OCP cluster nodes run as guest virtual machines
• LPAR/KVM support subject to future releases

See https://try.openshift.com/

KVM

KVM on IBM Z

KVM Availability

. KVM is available and supported in:
– SLES12 SP2 and later
– RHEL 7.5 via the kernel-alt packages
– Ubuntu 16.04 and later

. KVM is available in community distributions:
– Debian
– Fedora
– openSUSE

Package versions (kernel / QEMU / libvirt):
– RHEL 7.5-alt: 4.14 / 2.10 / 3.9
– RHEL 7.6-alt: 4.14 / 2.12 / 4.5
– RHEL 8: 4.18 / 2.12 / 4.5
– SLES12 SP2: 4.4 / 2.6.1 / 2.0
– SLES12 SP3: 4.12 / 2.11 / 4.0
– SLES12 SP4: 4.12 / 2.11 / 4.0
– SLES15 SP1: 4.12 / 3.1 / 5.0
– Ubuntu 16.04 LTS: 4.4 / 2.5 / 1.3.1
– Ubuntu 18.04 LTS: 4.15 / 2.11 / 4.0

How to get KVM

. RHEL
# yum install qemu-kvm libvirt virt-install virt-manager
# modprobe kvm      (once)

. SLES
# zypper install qemu-kvm libvirt virt-install virt-manager

. Ubuntu
# apt install qemu-kvm libvirt-daemon libvirt-clients virt-manager
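With the packages in place, a first guest can be created with virt-install; a sketch assuming an existing disk image (all names and paths illustrative):

# virt-install --name guest1 --memory 4096 --vcpus 2 \
    --disk /var/lib/libvirt/images/guest1.qcow2,bus=virtio \
    --network network=default --import --graphics none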

KVM Hardware Support: z15 Features

● Miscellaneous instructions
– New helper and general-purpose instructions

● New vector instructions (aka SIMD extensions)
– Improve decimal calculations and enable high-performance implementations of certain cryptographic operations

● Deflate conversion
– Provides acceleration for zlib compression and decompression

● CPACF updates
– New Message Security Assist MSA9, providing elliptic curve cryptography: message authentication, generation of elliptic curve keys, and scalar multiplication
– No host support required

KVM Hardware Support: z15 CPU Model

● z15 support is provided by the new CPU model gen15a, which enables all z15 features by default

● Choose among the following CPU models in the libvirt domain XML:
– Pre-defined model for z15:
  <cpu mode="custom"><model>gen15a</model></cpu>
– Migration-safe model (recommended), checks feature existence prior to a live migration:
  <cpu mode="host-model"/>
– Non-migration-safe model, does not check feature existence prior to a live migration:
  <cpu mode="host-passthrough"/>

● The z15 CPU model is readily available in RHEL 8.1, SLES 12 SP5, SLES 15 SP1 and Ubuntu 18.04; a quick availability check follows below
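Whether gen15a is known to the local stack can be checked directly:

# virsh domcapabilities --machine s390-ccw-virtio | grep -i gen15
# qemu-system-s390x -cpu help | grep -i gen15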

Hardware Passthrough Support

. Para-virtualized devices
– All devices available to the host can be passed to the guest
– E.g. virtio-net-ccw

. Hardware passthrough support
– Reflects special characteristics of the hardware to the guest
– Allows the use of special functions
– May influence performance

KVM Crypto Express Passthrough

. Pass crypto cards to KVM guests
– Only those that are assigned to the LPAR

. Only full pass-through of adapter/domain tuples
– Focus on secure key capabilities
– No sharing (i.e. no sharing of an adapter/domain tuple between multiple guests)
– Similar to LPAR/SE granularity

. Crypto Passthrough Interrupt Support (kernel 5.3)
– Receive crypto adapter interrupts in KVM guests immediately
– Improves performance

Available with: Linux kernel 4.20, QEMU 3.1, libvirt 4.9; RHEL 8.0, SLES 15 SP1 / SLES 12 SP5, Ubuntu 18.04. A setup sketch follows below.
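A rough setup sketch along the lines of the kernel's vfio-ap documentation; adapter 0x06 and domain 0x47 are illustrative:

# echo -0x06 > /sys/bus/ap/apmask     (remove adapter 06 from the host's default driver)
# echo -0x47 > /sys/bus/ap/aqmask     (remove domain 47 from the host's default driver)
# UUID=$(uuidgen)
# echo $UUID > /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-passthrough/create
# echo 0x06 > /sys/devices/vfio_ap/matrix/$UUID/assign_adapter
# echo 0x47 > /sys/devices/vfio_ap/matrix/$UUID/assign_domain

The resulting mediated device can then be referenced from the guest's libvirt definition.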

KVM Experimental CCW Passthrough

. Forward CCW devices directly to the guest

. For the time being only DASD devices
– IPL support is work in progress

. Devices based on QDIO are difficult
– Close integration into memory management is needed

Available with: Linux kernel 4.14, QEMU 2.11, libvirt 4.10; SLES 15 SP1, SLES 12 SP5

KVM PCI Passthrough

. PCI on Z
– RoCE Express: networking card with RDMA
– SMC-R using RoCE Express
– Upcoming: SMC-D using ISM virtual PCI devices

. Not paravirtualized
– Guest can make full use of the feature set of the card
– Resources cannot be shared

Available with: Linux kernel 4.14, QEMU 2.11, libvirt 4.10; RHEL 8.0, SLES 15 SP1, SLES 12 SP5, Ubuntu 19.04

KVM VNC/SPICE and Virtio-gpu/input

. Today and in the future, the guest console is provided via the SCLP ASCII console

. In addition to that, how about a graphical tty?

. Virtio-GPU
– Low-overhead virtual GPU
– Can be exported via VNC/SPICE
– Used as an additional TTY (not as console)

. Virtio-input mouse and keyboard

Available with: Linux kernel 4.17, QEMU 2.11, libvirt 4.2; SLES 15 SP1, Ubuntu 19.04. The corresponding virt-install flags are sketched below.
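The corresponding device flags for virt-install, added to an otherwise complete invocation (a sketch; remaining arguments omitted):

# virt-install ... --graphics vnc --video virtio --input keyboard,bus=virtio --input mouse,bus=virtio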

KVM Huge Page Support

. Use libhugetlbfs for guest memory backing

. Fewer TLB lookups and less KVM memory management (fault-in)
– Expected better VM performance

. EDAT-1 (1M huge pages), no THP support, no vSIE support
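A hypothetical host setup for 1M-backed guests, combining the pieces above (the page count is illustrative):

Kernel command line:   default_hugepagesz=1M hugepagesz=1M hugepages=4096
Guest definition:      <memoryBacking><hugepages/></memoryBacking> in the libvirt domain XML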

[Diagram: DAT translation with region tables and segment tables; a 4K page is mapped via an additional page table, while a 1M segment is mapped directly from the segment table.]

Available with: Linux kernel 4.19, QEMU 3.1

Staying Up-To-Date

Webcasts

● In-depth sessions on Linux on Z topics

● Provided by Linux on Z development team

● See https://developer.ibm.com/tv/linux-ibm-z/ for:
– announcements of future sessions, and
– recordings of past sessions

Staying Up-To-Date

Blogs

● Very latest news from the development team

– KVM on Z: http://kvmonz.blogspot.com/

– Linux on Z & containers: http://linux-on-z.blogspot.com/

● Focus primarily on upstream submissions, which will end up in Linux distributions later

● Also features in-depth articles on specific topics

● Provided by Linux on Z development team

References

Documentation
● Linux on Z and LinuxONE Knowledge Center
  https://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_main.html
● (Deprecated) Linux on IBM Z on developerWorks
  https://www.ibm.com/developerworks/linux/linux390/documentation_dev.html

Webcasts

● https://developer.ibm.com/tv/linux-ibm-z/

Blogs
● Primary places to get news and updates on everything Linux and KVM on Z:
– KVM on Z: http://kvmonz.blogspot.com/
– Linux on Z, including containers: http://linux-on-z.blogspot.com/

Tag Legend

● Supported distributions
– x SPy for SUSE SLES x Service Pack y, e.g. 12 SP3 for SLES12 SP3
– x.y for RHEL x Update y, e.g. 7.4 for RHEL 7.4
– x.y for Ubuntu x.y, e.g. 16.04 for Ubuntu 16.04 LTS

● Supported environments
– LPAR: usable for systems running in LPAR mode
– z/VM: usable for guests running on z/VM
– KVM: usable for guests running on KVM
