Lenovo® X6 Systems Solution™ for SAP HANA® Implementation Guide for System x® X6 Servers

Lenovo Development for SAP Solutions In cooperation with: SAP AG Created on 3rd July 2015 15:13 – Version 1.9.96-13 © Copyright Lenovo, 2015 Technical Documentation

X6 Systems Solution for SAP HANA Platform Edition

Dear Reader, please note that this guide covers the System X6 based servers for the SAP HANA Platform Edition (Type 6241, Models AC3/AC4/Hxx) based on the Intel® Xeon® IvyBridge or Haswell EX family of processors. Type 3837 System X6 based servers and the System eX5 based servers for the SAP HANA Platform Edition (models 7147-H** and 7143-H**) are not discussed in this manual. The Lenovo Systems X6 solution for SAP HANA Platform Edition is based on System X6 architecture building blocks that provide a highly scalable infrastructure for the SAP HANA Platform Edition appliance software. The Systems x3850 X6 and x3950 X6, together with software such as IBM General Parallel File System™ (GPFS), are used to run the SAP HANA Platform Edition appliance software. Lenovo has created orderable models upon which you may install and run the SAP HANA Platform Edition appliance software according to the sizing charts coordinated with SAP AG. For each workload type, special ordering options for the System x3850 X6 and System x3950 X6 Type 6241 Models AC3/AC4/Hxx have been approved by SAP and Lenovo to accommodate the requirements of the SAP HANA Platform Edition appliance software. The Lenovo – SAP HANA Development Team


Copyrights and Trademarks

© Copyright 2010-2015 Lenovo. Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult your local Lenovo representative for information on the products and services currently available in your area. Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any other product, program, or service. Lenovo may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: Lenovo (United States), Inc. 1009 Think Place - Building One Morrisville, NC 27560 U.S.A. Attention: Lenovo Director of Licensing

LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Neither this documentation nor any part of it may be copied or reproduced in any form or by any means or translated into another language, without the prior consent of Lenovo. This document could include technical inaccuracies or errors. The information contained in this document is subject to change without any notice. Lenovo reserves the right to make any such changes without obligation to notify any person of such revision or changes. Lenovo makes no commitment to keep the information contained herein up to date. Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk. Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide home pages. Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the supplier of those products.

Edition Notice: 3rd July 2015 This is the thirteenth published edition of this document. The online copy is the master.


Lenovo, the Lenovo logo, System x and For Those Who Do are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. Other product and service names might be trademarks of Lenovo or other companies. A current list of Lenovo trademarks is available on the web at: http://www.lenovo.com/legal/copytrade.html. IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in the United States and/or other countries. Adobe and PostScript are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Fusion-io is a registered trademark of Fusion-io in the United States. Intel, Intel Xeon, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. SAP HANA is a trademark of SAP Corporation in the United States, other countries, or both. Other company, product or service names may be trademarks or service marks of others.


Contents 1 Abstract 1 1.1 Preface & Scope ...... 2 1.2 Acknowledgements ...... 2 1.3 Feedback ...... 3 1.4 Disclaimer ...... 3 1.5 Support ...... 3

2 Introduction 6 2.1 Purpose ...... 6 2.2 Applicability ...... 6 2.2.1 SAP HANA Platform Edition Versions ...... 6 2.3 Exclusions and Exceptions ...... 6 2.4 Conventions ...... 6 2.4.1 Icons Used ...... 6 2.4.2 Code Snippets ...... 7

3 Solution Overview 7 3.1 The SAP HANA Appliance Software ...... 7 3.2 Definition of SAP HANA ...... 7

4 Hardware Configurations 9 4.1 SAP HANA Platform Edition T-Shirt Sizes ...... 10 4.2 Single Node versus Clustered Configuration ...... 10 4.2.1 Network Switch Options ...... 11 4.3 SAP HANA Optimized Hardware Configurations ...... 13 4.3.1 System x3850 X6 Single Node Configurations ...... 13 4.3.2 System x3950 X6 Single Node Configurations ...... 13 4.3.3 System x3850 X6 Single Node Four Socket Configurations with Storage Expansion 14 4.3.4 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations ...... 14 4.3.5 System x3850 X6 Cluster Node Configurations with Storage Expansion ...... 14 4.3.6 System x3950 X6 Cluster Node Configurations ...... 15 4.4 Card Placement ...... 15 4.4.1 Network Interface Cards ...... 15 4.4.2 Slots for additional Network Interface Cards ...... 15 4.4.3 RAID Adapter Cards ...... 16

5 Networking 21 5.1 Networking Requirements ...... 21 5.2 Jumbo Frames ...... 21 5.3 Network Configuration ...... 22 5.4 Network Switch Configuration For Clustered Installations ...... 23 5.5 Customer Site Networks ...... 24 5.6 Network Definitions ...... 24 5.6.1 Numbering conventions ...... 24 5.6.2 Internal Networks – Option 1 G8264 RackSwitch 10Gbit ...... 24 5.6.3 Internal Networks – Option 2 G8124 RackSwitch 10Gbit ...... 26 5.6.4 Internal Networks – Option 3 G8272 RackSwitch 10Gbit ...... 27 5.6.5 Internal Networks – Option 4 G8296 RackSwitch 10Gbit ...... 28 5.6.6 Administrative, SAP-Access and Backup Networks – Option G8052 RackSwitch 1Gbit ...... 29 5.6.7 Network Configurations in a Clustered Environment ...... 30 5.7 Setting up the Switches ...... 31


5.7.1 Basic Switch Configuration Setup ...... 31 5.7.2 Advanced Setup of the Switches ...... 32 5.7.3 Disable Spanning Tree Protocol ...... 33 5.7.4 Disable Default IP Address ...... 33 5.7.5 Enable L4Port Hash ...... 33 5.7.6 Disable Routing ...... 33 5.7.7 Add Networking ...... 33 5.7.8 VLAN configurations ...... 33 5.7.9 Save changes to switch FLASH memory ...... 36 5.8 Inter-Site Portchannel Configuration ...... 36 5.8.1 Static Trunk over one Inter-Site Link ...... 36 5.8.2 Portchannel over two Inter-Site Links ...... 37 5.8.3 Portchannel over four Inter-Site Links ...... 38 5.8.4 Save and Restore Switch Configuration ...... 38 5.9 Generation of Switch Configurations ...... 39 5.9.1 Script Usage ...... 39 5.9.2 Examples ...... 39 5.9.3 Input Values ...... 40

6 Guided Install of the Lenovo Solution 41 6.1 Preparation ...... 42 6.1.1 Firewall Preparations ...... 42 6.1.2 Lenovo Systems solution for SAP HANA Additional Software Stack ...... 42 6.1.3 Software, Firmware and Drivers ...... 43 6.1.4 Card Placement ...... 45 6.1.5 Hardware UEFI Configuration ...... 45 6.2 Phase 1 ...... 48 6.2.1 Storage Configuration – RAID Setup ...... 48 6.2.2 Mounting Installation Images using the IMM Virtual Media Center ...... 51 6.2.3 Starting the Automatic Installation Process ...... 52 6.3 Phase 2 – SLES for SAP ...... 53 6.4 Phase 2 – RHEL ...... 58 6.5 Interim Check ...... 60 6.5.1 Installation of Mandatory Packages ...... 61 6.5.2 Installation without Network Connectivity ...... 62 6.6 Phase 3 ...... 62 6.6.1 Verification of RAID Controller and HDD/SSD Firmware ...... 62 6.6.2 HANA Installation ...... 62 6.6.3 Single Node with HA Installation with Side-car Quorum Solution ...... 65

7 After Installation 66 7.1 Actions to ensure the correctness of the installation ...... 66 7.2 HANA Network Setup ...... 67

8 Disaster Recovery 68 8.1 Architecture ...... 68 8.1.1 Terminology ...... 68 8.1.2 Architectural overview ...... 69 8.1.3 Three site/Tiebreaker node architecture ...... 71 8.2 Mixing eX5/X6 Server in a DR Cluster ...... 71 8.3 Hardware Setup ...... 71 8.3.1 Site A and B ...... 71 8.3.2 Tiebreaker Site C (optional) ...... 71 8.3.3 Acquire TCP/IP addresses and host names ...... 72


8.3.4 Network switch setup (GPFS and SAP HANA network) ...... 72 8.3.5 Link between site A and B ...... 72 8.3.6 Network integration into customer infrastructure ...... 73 8.3.7 Setup network connection to tiebreaker node at site C (optional) ...... 73 8.4 Software Setup ...... 73 8.4.1 GPFS configuration prerequisites ...... 74 8.4.2 GPFS Server configuration ...... 76 8.4.3 GPFS Disk configuration ...... 77 8.4.4 Filesystem Creation ...... 78 8.4.5 SAP HANA appliance installation ...... 79 8.4.6 Tiebreaker node setup ...... 81 8.4.7 Verify Installation ...... 81 8.5 Extending a DR-Cluster ...... 83 8.6 Mixing eX5/X6 Server in a DR Cluster ...... 83 8.6.1 Hardware Setup ...... 83 8.6.2 GPFS Part 1 ...... 83 8.6.3 HANA Backup Node Installation ...... 85 8.6.4 GPFS Part 2 ...... 86 8.6.5 HANA ...... 86 8.7 Using Non Productive Instances on Inactive DR Site ...... 87 8.7.1 Architecture ...... 87 8.7.2 Setup ...... 88

9 Mixed eX5/X6 Environments 91 9.1 Mixed eX5/X6 HA Clusters ...... 91 9.1.1 Definition & Overview ...... 91 9.1.2 Prerequisites & Limitations ...... 91 9.1.3 New Installation ...... 92 9.1.4 Existing Cluster Extension/Node Replacement ...... 94 9.1.5 Deviating Operation Instructions ...... 94 9.2 Mixed eX5/X6 DR Clusters ...... 97 9.2.1 Definition & Overview ...... 97 9.2.2 Prerequisites & Limitations ...... 97 9.2.3 New Installation ...... 98 9.2.4 Existing Cluster Extension/Node Replacement ...... 99 9.2.5 Deviating Operation Instructions ...... 99

10 Special Single Node Installation Scenarios 103 10.1 Single Node with HA Installation with Side-car Quorum Solution ...... 103 10.1.1 Installation of SAP HANA appliance single node with HA ...... 104 10.1.2 Prepare quorum node ...... 105 10.1.3 Quorum Node Network Setup ...... 106 10.1.4 Adapt hosts file ...... 107 10.1.5 SSH configuration ...... 107 10.1.6 Quorum Node IBM GPFS setup ...... 108 10.1.7 Quorum Node IBM GPFS installation ...... 108 10.1.8 Add quorum node ...... 110 10.1.9 Create descriptor disk ...... 110 10.1.10 Add disk to file system ...... 110 10.1.11 Verify Cluster Setup ...... 110 10.1.12 Installation of SAP HANA ...... 111 10.2 Single Node with stretched HA Installation ...... 111 10.2.1 Installation and configuration of SLES and IBM GPFS ...... 112 10.2.2 Installation of SAP HANA ...... 113


10.3 Single Node with DR Installation ...... 113 10.3.1 Installation and configuration of SLES and IBM GPFS ...... 115 10.3.2 Optional: Expansion Storage Setup for Non-Production Instance ...... 115 10.4 Single Node with HA and DR Installation ...... 115 10.4.1 Installation and configuration of SLES and IBM GPFS ...... 116 10.4.2 Optional: Expansion Storage Setup for Non-Production Instance ...... 118 10.5 Single Node DR Installation with SAP HANA System Replication ...... 119 10.5.1 Installation and configuration of SLES and IBM GPFS ...... 120 10.5.2 Installation of SAP HANA ...... 121 10.5.3 Optional: Expansion Storage Setup for Non-Production Instance ...... 121 10.6 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication ...... 122 10.6.1 Installation and configuration of SLES and IBM GPFS ...... 124 10.6.2 Installation of SAP HANA ...... 125 10.7 Expansion Storage Setup for Non-productive SAP HANA Instance ...... 126

11 Virtualization 128 11.1 Getting Started ...... 128 11.1.1 Memory Overhead ...... 128 11.1.2 Configure UEFI ...... 129 11.1.3 Start Embedded VMware ESXi Hypervisor ...... 129 11.1.4 Configure Management Network of ESXi Hypervisor ...... 129 11.1.5 Enable SSH on VMware ESXi Hypervisor ...... 133 11.1.6 StorCLI on VMware ESXi 5.5 ...... 133 11.1.7 Setting up ESXi Storage in CLI ...... 135 11.1.8 setting up vswitches ...... 135 11.1.9 setting up nic bonding(teaming) ...... 136 11.1.10 Setting Storage for SLES and HANA ISO ...... 136 11.1.11 Restart VMware ESXi Hypervisor ...... 137 11.1.12 Installing VMware vSphere Client ...... 137 11.2 Configuring and Starting VMs with vSphere Client ...... 138 11.3 Operating System (SLES for SAP 11 SP3) Installation ...... 150 11.4 Operating System (Red Hat Enterprise Server 6.5 and 6.6) Installation ...... 150 11.4.1 Changes after Red Hat Installation ...... 151 11.5 Tuning of Operating System and VM ...... 152 11.5.1 Tuning of OS ...... 152 11.5.2 Tuning of ESXi and VM ...... 152

12 Upgrading the Hardware Configuration 154 12.1 Power Policy Configuration ...... 154 12.2 Reboot Behavior ...... 154 12.3 Adding storage ...... 156 12.3.1 Adding storage via EXP2524 ...... 156 12.3.2 Adding storage on second internal M5210 controller ...... 156 12.3.3 Configure RAID array(s) ...... 157 12.3.4 Deciding for a CacheCade RAID Level ...... 158 12.3.5 Configuring RAID array when CacheCade is not yet configured ...... 158 12.3.6 Configuring RAID array with existing CacheCade ...... 158 12.3.7 Changing the CacheCade RAID Level ...... 158 12.3.8 Configuring GPFS ...... 159 12.4 Adding memory ...... 160 12.5 Adding CPU Books ...... 161

13 Software Updates 162 13.1 Warning ...... 162


13.2 Update Variants ...... 162 13.2.1 General per node update procedure ...... 162 13.2.2 Disruptive Cluster Update ...... 164 13.2.3 Full Cluster Rolling Update ...... 164 13.3 RHEL versionlock ...... 164 13.4 Linux Kernel Update ...... 165 13.4.1 SLES Kernel Update Methods ...... 165 13.4.2 RHEL Kernel Update Methods ...... 166 13.4.3 Kernel Update Procedure ...... 166 13.5 Updating GPFS ...... 167 13.5.1 Disruptive GPFS Cluster Update ...... 168 13.6 Upgrading from GPFS 3.5 to 4.1 ...... 171 13.6.1 Disruptive Upgrade from GPFS 3.5 to 4.1 ...... 172 13.6.2 Rolling upgrade per node from GPFS 3.5 to 4.1 ...... 174 13.7 Update Mellanox Network Cards ...... 176 13.8 SAP HANA ...... 177 13.9 Upgrade VMware ESXi 5.5 to 5.5U2 ...... 177

14 Operating System Upgrade 178 14.1 Upgrade RHEL 6.5 to 6.6 ...... 178 14.2 Rolling Upgrade ...... 178 14.3 Upgrade Overview ...... 178 14.4 Prerequisites ...... 179 14.5 Shutting down services ...... 179 14.6 Upgrade of IBM GPFS ...... 179 14.7 Update Mellanox Drivers ...... 180 14.8 Upgrading Red Hat ...... 180 14.9 Mandatory Kernel Update ...... 181 14.10 Update of nss-softokn packages ...... 181 14.11 Recompile Linux Kernel Modules ...... 181 14.12 Adapting Configuration ...... 181 14.13 Start IBM GPFS and HANA ...... 182

15 System Check and Support 183 15.1 System Login ...... 183 15.2 Basic System Check ...... 183 15.3 System Support ...... 186 15.4 Additional Tools for System Checks ...... 187 15.4.1 Lenovo Advanced Settings Utility ...... 187 15.4.2 ServeRAID StorCLI Utility for Storage Management ...... 187 15.4.3 SSD Wear Gauge CLI utility ...... 187 15.5 Getting Support (IBM PMR, SAP OSS) ...... 188

16 Backup and Restore of the Primary Partition 189 16.1 Description ...... 189 16.1.1 Boot Loader ...... 190 16.1.2 Drive Partitions ...... 191 16.2 Prerequisites ...... 191 16.2.1 Correcting the backup fstab ...... 192 16.2.2 Add boot loader entry for backup partition ...... 193 16.3 Backup of the Linux operating system ...... 195 16.4 Restoring the operating system ...... 195

17 SAP HANA Backup and Recovery 197


17.1 Description ...... 197 17.2 Backup of SAP HANA ...... 197 17.3 Restore of SAP HANA ...... 201

18 Troubleshooting 204 18.1 Adding SAP HANA Worker/Standby Nodes in a Cluster ...... 204 18.2 GPFS mount points missing after Kernel Update ...... 204 18.3 Degrading disk I/O throughput ...... 204 18.4 SAP HANA will not install after a system board exchange ...... 205 18.5 Known Kernel Updates ...... 205 18.6 Important SAP Notes (SAP Service Marketplace ID required) ...... 205 18.6.1 SAP Note 1641148 HANA server hang caused by GPFS issue ...... 205

Appendices 207

A GPFS Disk Descriptor Files 207

B Topology Vectors (GPFS 3.5 failure groups) 207

C Quotas 209 C.1 Quota Calculation ...... 209 C.2 Quota Calculation Script ...... 209

D Performance Settings 211

E Lenovo X6 Server MTM List & Model Overview 214

F Frequently Asked Questions 216 F.1 FAQ #1: SAP HANA Memory Limits ...... 216 F.2 FAQ #2: GPFS parameter readReplicaPolicy ...... 216 F.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines ...... 216 F.4 FAQ #4: Overlapping NSDs ...... 217 F.5 FAQ #5: Missing RPMs ...... 217 F.6 FAQ #6: CPU Governor set to ondemand ...... 220 F.7 FAQ #7: No disk space left bug (Bug IV33610) ...... 220 F.8 FAQ #8: Setting C-States ...... 221 F.9 FAQ #9: ServeRAID M5120 RAID Adapter FW Issues ...... 221 F.9.1 Changing Queue Depth ...... 222 F.9.2 Use recommended Firmware version ...... 222 F.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO ...... 223 F.11 FAQ #11: GPFS NSD on Devices with GPT Labels ...... 223 F.12 FAQ #12: GPFS pagepool should be set to 4GB ...... 224 F.13 FAQ #13: Limit Page Cache Pool to 4GB (SAP Note #1557506) ...... 225 F.14 FAQ #14: restripeOnDiskFailure and start-disks-on-startup ...... 225

G References 226 G.1 Lenovo References ...... 226 G.2 IBM References ...... 226 G.3 SAP General Help (SAP Service Marketplace ID required) ...... 226 G.4 SAP Notes (SAP Service Marketplace ID required) ...... 227 G.5 Novell SUSE Linux Enterprise Server References ...... 228 G.6 Red Hat Enterprise Linux References (Red Hat account required) ...... 228

H Changelog 229


List of Figures 1 Current SAP HANA Appliance Scenarios ...... 8 2 Hardware Overview ...... 9 3 SAP HANA Multiple Single Node Example ...... 10 4 SAP HANA Clustered Example with Backup ...... 11 5 Workload Optimized System x3850 X6 2 Socket Rear View ...... 17 6 Workload Optimized System x3850 X6 4 Socket Rear View ...... 18 7 Workload Optimized System Storage Book. This contains slots 11, 12 and slots 43, 44 on x3950 X6 in an additional Storage Book ...... 18 8 Workload Optimized System x3950 X6 8 Socket Rear View ...... 20 9 G8264 RackSwitch front view ...... 25 10 G8124 RackSwitch front view ...... 26 11 G8272 RackSwitch front view ...... 27 12 G8296 RackSwitch front view ...... 29 13 G8052 RackSwitch front view ...... 30 14 Cluster Node Network Diagram ...... 31 15 Cluster Switch Networking Example ...... 32 16 License Agreement ...... 53 17 Hostname and Domain Name ...... 54 18 Network Configuration ...... 54 19 Cluster Node NIC Configuration dialog bond0 ...... 56 20 Clock and Time Zone ...... 57 21 Advanced NTP Configuration ...... 57 22 Password for the System Administrator ...... 58 23 Installation Mode Selection ...... 63 24 GPFS IP Configuration Dialog ...... 63 25 HANA Password Input Dialog ...... 64 26 DR Architectural Overview ...... 68 27 DR Data Distribution in a Four Node Cluster ...... 69 28 Logical DR Network Setup ...... 70 29 DR Networking View (with no client uplinks shown) ...... 70 30 SAP HANA DR using storage expansion - architectural overview ...... 88 31 Single Node with High Availability ...... 103 32 File System Layout - Single Node HA ...... 104 33 Network Switch Setup for Single Node with HA ...... 107 34 Single Node with stretched HA - Two Site Approach ...... 112 35 Single Node with stretched HA - Three Site Approach ...... 112 36 File System Layout - Single Node stretched HA ...... 113 37 Single Node with Disaster Recovery - Two Site Approach ...... 114 38 Single Node with Disaster Recovery - Three Site Approach ...... 114 39 File System Layout - Single Node with DR with Storage Expansion ...... 115 40 Single Node with HADR using IBM GPFS Storage Replication ...... 116 41 File System Layout - Single Node HADR ...... 117 42 File System Layout - Single Node HADR with Storage Expansion ...... 119 43 Single Node DR with SAP System Replication ...... 120 44 Single Node DR with SAP System Replication ...... 120 45 File System Layout of Single Node DR with SAP System Replication ...... 121 46 File System Layout of Single Node DR with SAP System Replication with Storage Expansion ...... 122 47 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication ...... 123 48 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication without remote site Switches ...... 124 49 File System of Single Node with HA and DR with System Replication ...... 125 50 File System of Single Node with HA and DR with System Replication and Storage Expansion ...... 126


51 login to ESXi ...... 129 52 configure management network ...... 130 53 display network adapters ...... 130 54 display network adapters 2 ...... 131 55 IP configuration ...... 131 56 Set IP,NETMASK,GW ...... 132 57 Set DNS and Hostname ...... 132 58 Set DNS suffix ...... 133 59 ESXi 5.x filesystems on a System x3850 X6. The VFAT filesystems belong to the USB device ...... 135 60 ESXi5.5 Storage Path ...... 137 61 ESXi 5.1 WEB Welcome ...... 138 62 Create new virtual machine ...... 139 63 Choose custom configuration ...... 139 64 Choose a name ...... 140 65 Choose disk storage for VM files ...... 140 66 Newest virtual machine hardware version ...... 141 67 Configure the use of more than 32 CPUs ...... 141 68 Choose Operating System ...... 142 69 Choose number of CPUs ...... 142 70 Choose Memory ...... 143 71 Choose Network Cards ...... 143 72 Choose SCSI controller ...... 144 73 Create new HANA datastore ...... 144 74 Choose datastore size ...... 145 75 Choose datastore ...... 145 76 Choose SCSI Node ...... 146 77 Add a new CD/DVD device ...... 146 78 Select ISO image ...... 147 79 Select IDE device 0:0 ...... 147 80 Finish creation of SLES ISO mount ...... 148 81 Upgrade virtual hardware ...... 148 82 Confirm upgrade ...... 149 83 Upgrade virtual hardware ...... 149 84 Changing the autoyast parameter for installation ...... 150 85 Adding kickstart parameter for install ...... 151 86 Overview of Backup/Restore Operations ...... 190 87 Sample GRUB boot loader screen ...... 196


List of Tables 1 Network Switch Options ...... 12 2 System x3850 X6 Single Node Configurations ...... 13 3 IBM System x3950 X6 Single Node Four Socket Configurations ...... 13 4 System x3950 X6 Single Node Eight Socket Configurations ...... 13 5 System x3850 X6 Single Node Four Socket Configurations with Storage Expansion ...... 14 6 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations ...... 14 7 System x3850 X6 Cluster Node Configurations with Storage Expansion ...... 14 8 System x3950 X6 Cluster Node Configurations ...... 15 9 Slots which may be used for additional NICs ...... 16 10 Card assignments for a two socket x3850 X6 ...... 16 11 Card assignments for a four socket x3850 X6 ...... 17 12 Network interface card assignments for an eight socket x3950 X6 ...... 19 13 Card placement for x3950 X6 four socket and eight socket ...... 19 14 Customer infrastructure addresses ...... 22 15 IP address configuration ...... 23 16 Numbering conventions ...... 24 17 G8264 RackSwitch port assignments ...... 25 18 G8124 RackSwitch port assignments ...... 27 19 G8272 RackSwitch port assignments ...... 28 20 G8296 RackSwitch port assignments ...... 29 21 G8052 RackSwitch port assignments ...... 30 22 Installation Process and Phases ...... 41 23 SAP HANA references ...... 42 24 DVD Part Numbers ...... 43 25 Supported Firmware, Software and Driver Levels ...... 44 26 Required Operation Modes UEFI settings ...... 46 27 Required Processors UEFI settings ...... 47 28 Required Power UEFI settings ...... 47 29 Required Memory UEFI settings ...... 47 30 Boot options and boot loaders used ...... 48 31 x3850 X6 RAID Controller Configuration ...... 49 32 x3950 X6 RAID Controller Configuration ...... 50 33 Partition Scheme for Single Nodes and Cluster Installations ...... 51 34 DVD/ISO Media Install Options ...... 52 35 Hostname Settings for DR ...... 72 36 Extra Network Settings for DR ...... 72 37 GPFS Settings for DR Cluster ...... 76 38 eX5 T-Shirt Size to X6 Model Mapping ...... 91 39 Stanza file for X6 servers in eX5 clusters ...... 93 40 Stanza file for X6 servers in eX5 clusters ...... 95 41 eX5 T-Shirt Size to X6 Model Mapping ...... 98 42 Stanza file for X6 servers in eX5 clusters ...... 100 43 Stanza file for X6 servers in eX5 clusters ...... 101 44 IBM System x3550 M4 GPFS quorum node ...... 105 45 Single Node with HA OS Partitioning ...... 106 46 Single Node with HA OS Networking Setup ...... 106 47 Single Node with HA Network Switch Definitions ...... 107 48 SAP HANA Virtual Machine Sizes by Lenovo ...... 128 49 RAID array and RAID controller overview ...... 155 50 x3850 X6 Memory DIMM Placement ...... 160 51 x3950 X6 Memory DIMM Placement ...... 160 52 Upgrade GPFS Portability Layer Checklist ...... 166


53 Upgrade GPFS Portability Layer Checklist ...... 168 54 GPFS Upgrade Checklist ...... 172 55 Required SAP HANA directories for restore ...... 200 56 Topology Vectors in an 8 node DR-cluster ...... 208 57 Lenovo MTM Mapping & Model Overview ...... 214 58 Lenovo MTM Mapping & Model Overview ...... 215 59 ServeRAID M5120 Firmware Issues ...... 222

List of Listings 1 SSH login screen ...... 183 2 Support script usage ...... 183 3 Support script output ...... 184 4 Example SUSE fstab entries ...... 191 5 Example Red Hat fstab entries ...... 192 6 Creating a copy of the motd file ...... 192 7 Example SLES primary fstab file ...... 192 8 Example SLES backup fstab file ...... 192 9 Example RHEL backup fstab file ...... 193 10 Changing files for backup partition ...... 193 11 Example UEFI Configuration for Primary Partition ...... 193 12 Example GRUB Configuration for Primary Partition ...... 194 13 Example GRUB Configuration for Backup Partition ...... 195 14 Example rsync command ...... 195 15 Example rsync command ...... 196 16 Example SLES primary fstab file ...... 196 17 Example RHEL primary fstab file ...... 196


List of Abbreviations

ASU Lenovo Advanced Settings Utility
BIOS Basic Input / Output System
DR Disaster Recovery (previously SAP Disaster Tolerance)
DT SAP Dynamic Tiering (not to be confused with Disaster Recovery (DR), previously Disaster Tolerance (DT))
ELILO EFI Linux Loader
IBM GPFS IBM General Parallel File System
GRUB Grand Unified Bootloader
GSS GPFS Storage Server
IMM Integrated Management Module
LILO Linux Loader
MTM Machine Type Model
NIC Network Interface Controller
OLAP On Line Analytical Processing
OLTP On Line Transaction Processing
OS Operating System
RHEL Red Hat Enterprise Linux
SAP HANA SAP HANA Platform Edition
SLES SUSE Linux Enterprise Server
SLES for SAP SUSE Linux Enterprise Server for SAP Applications
UEFI Unified Extensible Firmware Interface
UUID Universally Unique Identifier
VLAG Virtual Link Aggregation Group
VLAN Virtual Local Area Network


1 Abstract

This document provides general information specific to the Lenovo Systems Solution for SAP HANA Platform Edition (short: Lenovo Solution). This document assumes that the reader understands the basic structure and components of the SAP HANA Platform Edition (SAP HANA) software, has a solid understanding of Linux administration processes, and has been instructed in how to install the SAP HANA1 software on Lenovo Systems hardware. The Lenovo Solution is built with Lenovo Systems hardware based on the Intel Xeon architecture as building blocks for a scale-up or scale-out SAP HANA system. These provide a highly scalable infrastructure for SAP HANA. Lenovo Systems servers with local storage and Lenovo Systems Networking switches are used to run SAP HANA. Lenovo has created orderable models upon which you may install and run SAP HANA according to the sizing charts coordinated with SAP AG. For each workload type, special ordering options for the Lenovo System servers, storage and switches have been approved by SAP and Lenovo to accommodate the requirements of SAP HANA.
Attention
IMPORTANT: Please do not attempt to install a system without having been instructed about the content of this document.

Note
It is considered best practice to create backups regularly and, after a major failure, to recover the SAP HANA system from them, instead of relying on a fresh install with the help of this document. For details on Backup and Recovery please refer to the Lenovo Solution Backup & Restore Guide as well as the Lenovo Solution Hardware, Operating System & GPFS Operations Guide (SAP Note 1650046).
© Copyright 2014-2015 Lenovo. All Rights Reserved. Neither this documentation nor any part of it may be copied or reproduced in any form or by any means or translated into another language, without the prior consent of Lenovo. Lenovo makes no warranties or representations with respect to the content hereof and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. Lenovo assumes no responsibility for any errors that may appear in this document. The information contained in this document is subject to change without any notice. Lenovo reserves the right to make any such changes without obligation to notify any person of such revision or changes. Lenovo makes no commitment to keep the information contained herein up to date. Edition Notice: 3rd July 2015 This is the published edition of this document. The online copy is the master.

1SAP HANA Platform Edition


1.1 Preface & Scope

The objective of this paper is to document the installation and configuration of the SAP HANA Platform Edition (SAP HANA) on System x hardware using a managed setup rather than manually installing each node from scratch. The major products installed here are SAP HANA, IBM General Parallel File System (IBM GPFS) and the operating systems SUSE Linux Enterprise Server for SAP Applications (SLES for SAP), or Red Hat Enterprise Linux (RHEL). For instructions on how to administer SAP HANA Platform Edition (SAP HANA), please refer to the SAP HANA Technical Operations Manual2. Instructions on how to administer and maintain the other components delivered with the System x solution can be found in SAP Note 1650046 – Lenovo Systems Solution Hardware, Operating System & GPFS Operations Guide. The Lenovo Solution for SAP HANA Quick Start Guide provides an overview of the complete solution and instructions on how to find service and support for your Lenovo Solution.

1.2 Acknowledgements

The authors of this document are:
• Martin Bachmaier, Lenovo Development for SAP Solutions, Germany
• Florian Bausch, Lenovo Development for SAP Solutions, Germany
• Hans-Peter Droste, Lenovo Development for SAP Solutions, Germany
• Detlev Freund, Lenovo Development for SAP Solutions, Germany
• Patrick Hartman, Lenovo Development for SAP Solutions, Germany
• Guido Kampe, Lenovo Development for SAP Solutions, Germany
• Nils König, Lenovo Development for SAP Solutions, Germany
• Christoph Nelles, Lenovo Development for SAP Solutions, Germany
• Richard Ott, Lenovo Development for SAP Solutions, Germany
• Volker Pense, Lenovo Development for SAP Solutions, Germany
• Michael Reumann, Lenovo Development for SAP Solutions, Germany
The authors would like to thank the following Lenovo and IBM colleagues:
• Herbert Diether, Lenovo Development for SAP Solutions, Germany
• Oliver Rettig, Lenovo Development for SAP Solutions, Germany
• Keith Frisby, Lenovo Systems Lab Services, US
• Thorsten Nitsch, IBM GTS, Germany
• Alexander Trefs, Lenovo Technical Sales, Germany
And many people at SAP Development, Walldorf, Germany; specifically:
• Abdelkader Sellami, SAP HANA Support, Walldorf, Germany
• Adolf Brosig, SAP HANA Development, Walldorf, Germany
• Helmut Cossmann, SAP HANA Development, Walldorf, Germany
• Henning Sackewitz, SAP Development, Walldorf, Germany

2http://help.sap.com/hana_platform


• Michael Becker, SAP HANA Support Development, Walldorf, Germany
• Oliver Rebholz, SAP HANA Development, Walldorf, Germany

1.3 Feedback

We are interested in your comments and feedback. Please send them to [email protected]. The full guidebook can be downloaded, depending on its version, from the following community (SAP HANA Support Document section): SAP Solutions at Lenovo Community.

1.4 Disclaimer

This document is subject to change without notification and will not cover the issues encountered in every customer situation. It should be used only in conjunction with the official product literature. The information contained in this document has not been submitted to any formal test and is distributed AS IS. All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the specific Statement of Direction. Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in Lenovo product announcements. The information is presented here to communicate Lenovo’s current investment and development activities as a good faith effort to help with our customers’ future planning. This document is for educated service personnel only. If you are not familiar with the described system, we ask you to refrain from trying to apply what is described herein: you could void the preloaded system installation and the SAP certified configuration. This will void the warranty and support of said machine. Please contact [email protected] to get enrolled for education prior to installing a Lenovo Solution appliance. In case of issues with the SAP HANA appliance, the customer is asked to open an SAP Help Desk request (OSS ticket) first and foremost. Only by following this path can we ensure the proper configuration of the Lenovo Solution. If the customer opens a Lenovo support ticket for the system, he might be requested to perform system upgrades of firmware or software to the latest available levels, which might not be supported with the SAP HANA appliance. If identified as a hardware or file system issue, the ticket will be forwarded to the Lenovo support team and handled appropriately. Although this may be contrary to standard Lenovo Support processes, it is the approved and accepted support process for all SAP Appliances including the SAP HANA appliance.

1.5 Support

The System x SAP HANA development team provides new images for the SAP HANA appliance at regular intervals. These images have dependencies regarding the hardware, operating systems, and hardware drivers. The use of the latest image for maintenance and installation of the SAP HANA appliance is highly recommended. Whenever firmware level recommendations (which fix known firmware issues) for the Lenovo components of the SAP HANA appliance are given by the individual System x support representatives, it is the customers’ responsibility to upgrade (or downgrade) to the recommended levels as instructed by the System x support representatives. A list of the minimally required versions can be found in SAP Note 1880960 – Lenovo Systems Solution for SAP HANA Platform Edition FW/OS/Driver Maintenance.
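
To compare the levels currently installed on a node with those listed in the SAP Note, commands such as the following can be used. This is a minimal sketch for a SLES based appliance; on RHEL, /etc/redhat-release takes the place of /etc/SuSE-release:

1 # uname -r
2 # rpm -qa | grep -i gpfs
3 # cat /etc/SuSE-release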


Whenever operating system recommendations (fixing known operating system issues) for the SUSE Linux components of the SAP HANA appliance are given by SAP, SUSE, or IBM/Lenovo support representatives, it is the customers’ responsibility to upgrade (or downgrade) to the recommended levels as instructed by SAP through an explicit SAP Note or a Customer OSS Message. SAP describes their operational concept, including updating of the operating system components, in SAP Note 1599888 – SAP HANA: Operational Concept. If the Linux kernel is updated, you have to recompile the IBM GPFS software as well (see the sketch at the end of this section). Whenever other hardware or software recommendations (that fix known issues) for components of the SAP HANA appliance are given by the individual Lenovo support representatives, it is the customers’ responsibility to upgrade (or to downgrade) to the recommended levels as instructed by Lenovo support representatives. If software and documentation updates are available, you can download them from the respective Lenovo, IBM, SUSE or SAP website. To check for updates, go to the following websites. Follow the procedure in the included documentation to update the software.
• Firmware and drivers for System X6 Servers – You can obtain updates for System x3850/x3950 X6 servers on the IBM support website (Fix Central) at http://www.ibm.com/support/fixcentral using the ’Find product’ tab.
• IBM General Parallel File System (IBM GPFS3) and IBM Spectrum Scale updates – You can obtain updates for GPFS on the IBM support website for GPFS 3.5.0, GPFS 4.1.0 and IBM Spectrum Scale/GPFS 4.1.1.
• SUSE Linux Enterprise Server for SAP Applications 11 SP3 – You can download the installation package from the SUSE website at http://download.novell.com/Download?buildid=XL0RqEykZpc~
• SUSE Linux patches and updates – You can obtain the latest code updates for SUSE from the SUSE website at http://download.novell.com/patch/finder/
• Red Hat Enterprise Linux 6.5 and 6.6 – You can download the installation package from the Red Hat website at http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
• VMware ESX Server patches and updates – You can obtain the latest code updates for vSphere ESX server from the VMware website at http://www.vmware.com/support/
• SAP HANA appliance updates – You can obtain the latest code updates from SAP at the SAP Service Marketplace at http://service.sap.com/swdc
Lenovo recommends that customers follow the software upgrade recommendations set out by SAP in the SAP HANA Technical Operations Manual4 (TOM). It is important to understand that the corrections listed in this note are those known to be a solution to a definite problem when running the SAP HANA appliance on the System x solutions. This knowledge was derived from internal testing, or from customers who ran into a specific problem. In parallel, the organizations owning the individual products provide many more fixes that are unknown to the Lenovo-SAP team, yet are recommended to be applied nevertheless. In particular, there are fixes that IBM/Lenovo recommends to install that are not listed here.

3IBM General Parallel File System 4http://help.sap.com/hana/SAP_HANA_Technical_Operations_Manual_en.pdf


It is expected that you contact your IBM/Lenovo service contact to get a list of those fixes, as well as a reasonably current service level in general.
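
Recompiling IBM GPFS after a kernel update means rebuilding the GPFS portability layer. The following is a minimal sketch, assuming GPFS is installed in its default location under /usr/lpp/mmfs; consult the GPFS documentation for the exact steps for your release:

1 # cd /usr/lpp/mmfs/src
2 # make Autoconfig
3 # make World
4 # make InstallImages

On GPFS 4.1 and later releases that ship it, the single command /usr/lpp/mmfs/bin/mmbuildgpl can be used instead of the manual make steps.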


2 Introduction

2.1 Purpose

This document is intended to provide a single point of reference for techniques and product behaviors when dealing with SAP HANA.

2.2 Applicability

The techniques and product behaviours outlined in this document apply to:
• SAP HANA appliance Platform Edition v1.0
• SLES for SAP5 11 SP3
• RHEL6 6.5 and 6.6
• IBM GPFS 3.5 and 4.1
• Lenovo Systems solution for SAP HANA appliance based on the:
– System x3850/x3950 X6 Workload Optimized Server

2.2.1 SAP HANA Platform Edition Versions

In this document, we refer to several different versions of the Lenovo Solution guided installation software. The following numbering refers to the corresponding SAP HANA Platform Edition version:
1.7.x SAP HANA Platform Edition v1.0 SPS07 – first release on IBM/Lenovo Systems X6 hardware
1.8.x SAP HANA Platform Edition v1.0 SPS08
1.9.x SAP HANA Platform Edition v1.0 SPS09

2.3 Exclusions and Exceptions

The techniques and product behaviours outlined in this document may not be applicable to future releases.

2.4 Conventions

This guide uses several conventions to improve the reader’s experience and the ease of understanding.

2.4.1 Icons Used

The following information boxes indicate important information you should follow according to the level of importance. Attention ATTENTION – pay close attention to the instructions given

5SUSE Linux Enterprise Server for SAP Applications 6Red Hat Enterprise Linux


Warning WARNING – this is something to take into consideration

Note INFORMATION – extra information describing something in detail

2.4.2 Code Snippets

When reading code snippets you have to note the following: Lines of code that are too long to be shown in one line will be automatically broken. This line break is indicated by an arrow at the end of the first and an arrow at the start of the second line:

1 This is a code snippet that is too long to be printed in one single line, therefore ←- ,→you will see an automatic line break.

There are also line numbers at the left side of each code snippet to improve readability. Code examples that contain commands that have to be executed on a command line follow these rules:
• Lines beginning with a # indicate commands to be executed by the root user.
• Lines beginning with a $ indicate commands to be executed by an arbitrary user.
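
For illustration, a minimal example following these conventions (the commands themselves are arbitrary): the first line is executed by the root user, the second by an arbitrary user.

1 # zypper refresh
2 $ hostname --fqdn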

3 Solution Overview

This document provides general information specific to the Lenovo Solution. This document assumes that the reader understands the basic structure and components of the SAP HANA Platform Edition. SAP HANA should be installed on hardware that has been specifically certified for SAP HANA by SAP. This hardware may not be configured from individual parts; rather, it is to be ordered and delivered as a single unit using a Lenovo manufacturer type/model number specified later.

3.1 The SAP HANA Appliance Software

The Lenovo Solution is based on building blocks that provide a highly scalable infrastructure for SAP HANA: the System x architecture (x3850/x3950 X6) as well as software, such as IBM GPFS, that is used to run SAP HANA. Lenovo has created several system models upon which you may install and run SAP HANA according to the sizing charts coordinated with SAP. For each workload type, a special System x type/model has been approved by SAP and Lenovo to accommodate the requirements of the SAP HANA Platform Edition.

3.2 Definition of SAP HANA

The following figure shows the current SAP HANA scenarios that can be leveraged through the System x solution for the SAP HANA Platform Edition.


(Diagram: corporate BI scenario with SAP Business Warehouse on the SAP HANA DB appliance (1.0 SPS 03), local BI data marts on SAP HANA DB appliances (1.0 SPS 05), and SAP ERP (CRM, SRM, SCM) systems plus customer applications using SAP HANA.)

Figure 1: Current SAP HANA Appliance Scenarios


4 Hardware Configurations

The System X6 Workload Optimized servers for SAP HANA are based upon two building blocks that can be used to fulfill the hardware requirements for SAP HANA. The SAP HANA appliance software must be installed only on a certified and tested hardware configuration based on one of these two models. Lenovo provides a model/type number for four (4) socket and eight (8) socket systems that are to be set up for each model certified by SAP. A customer needs only to choose the model and the extra options to fulfill their requirements. Models created manually will be supported by neither Lenovo nor SAP due to the high-performance criteria set out by SAP during certification.

(a) System x3850 X6 (b) System Storage EXP2524 (c) System x3950 X6

Figure 2: Hardware Overview

System x3850 X6 Workload Optimized Server for SAP HANA (Figure 2a)
• 2–4 × Intel Xeon E7-8880v2 [7,8] or E7-8880v3 [9] family of processors
• 128GB–2048GB DDR3 memory
• Internal storage:
– 6 × 1.2TB 2.5" HDD for RAID1 and RAID5
– 2 × 400GB SSD for LSI CacheCade
• One (1) external storage unit (EXP2524) for systems > 512GB (stand-alone configurations) or ≥ 512GB (cluster configurations)
• 2 × Dual-Port 10GbE NICs
• 1 × Quad-Port 1GigE NIC
• IBM General Parallel File System
• Certified for SLES for SAP OS and SAP HANA appliance software
Optional System Storage EXP2524 (Figure 2b)
• Up to 20 × 1.2TB 2.5" HDD in RAID5 [10]
• Up to 4 × 400GB SSD for LSI CacheCade
System x3950 X6 Workload Optimized Server for SAP HANA (Figure 2c)
• 4–8 × Intel Xeon E7-8880v2/v3 and other family of processors (refer to System x3850 X6)
• 512GB–6TB DDR3 memory

[7] For improved performance, E7-8890v2 is supported as an optional feature.
[8] For customers who confirm that an upgrade to an 8 socket system will never be desired, the Intel processors E7-4880v2 or E7-4890v2 will also be supported as optional alternate features.
[9] E7-8890v3 (for improved performance) or E7-8880Lv3 (for improved efficiency) are supported as optional features.
[10] RAID6 optional


• Internal storage:
– 12 × 1.2TB 2.5" HDD for RAID1 and RAID5
– 4 × 400GB SSD for LSI CacheCade
• One (1) external storage unit (EXP2524) for systems ≥ 3TB (stand-alone configurations) or > 1024GB (cluster configurations)
• 2 × Dual-Port 10GbE NICs
• 1 × Quad-Port 1GigE NIC
• IBM General Parallel File System
• Certified for SLES for SAP OS and SAP HANA appliance software

4.1 SAP HANA Platform Edition T-Shirt Sizes

Lenovo and SAP have certified a set of configurations to be used with the SAP HANA Platform Edition that are based on the Intel Xeon IvyBridge EX E7-4880v2, E7-4890v2, E7-8880v2, E7-8890v2 or Intel Xeon Haswell EX E7-8880v3, E7-8880Lv3, E7-8890v3 processor families.

4.2 Single Node versus Clustered Configuration

The Systems X6 Solution servers can be configured in two ways: 1. As a single node configuration with separate, independent HANA installations (example: production, test, development). These servers all have individual GPFS clusters that are independent from each other. These should be installed as single servers.

(Diagram: three independent single node servers – Server 1 (Production), Server 2 (Test), Server 3 (Development) – each running its own SAP HANA database and GPFS file system on internal storage for separate SAP ERP systems and clients.)

Figure 3: SAP HANA Multiple Single Node Example


2. As a clustered configuration with a distributed HANA instance across servers. All servers (nodes) form one HANA cluster. All servers (nodes) form one GPFS cluster. These should be installed as clustered servers.

(Diagram: a three node SAP HANA cluster – master, worker, and standby node – serving SAP BW/ERP clients; the same nodes form one GPFS cluster (primary, secondary, and additional node) over internal storage holding SAP HANA data and log, with SAN storage attached for backup/recovery.)

Figure 4: SAP HANA Clustered Example with Backup

The terms scale-out and cluster are used interchangeably in this document. What is meant is the use of multiple single Lenovo workload optimized servers connected via one or more configuration specific network switches in such a way that all servers act as one single high-performance SAP HANA instance. These servers need to be configured differently from a single node system and are therefore defined here explicitly. Further documentation will differentiate between non-clustered (single or consolidated) and clustered installations.
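
Which of the two configurations a given server belongs to can be verified with the standard GPFS administration commands (a short sketch, assuming /usr/lpp/mmfs/bin is in the PATH): mmlscluster lists the nodes of the GPFS cluster the server is part of, and mmgetstate -a shows the GPFS daemon state on all of them. A single node configuration lists only the local server, a clustered configuration all nodes.

1 # mmlscluster
2 # mmgetstate -a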

4.2.1 Network Switch Options

For clustered configurations, extra hardware such as network switches and adapters needs to be purchased in addition to the clustered appliances. Currently, the supported network switches for the Lenovo Workload Optimized server in a clustered configuration are:


Network          Description                          Part Number
10Gb Ethernet    RackSwitch G8296 (Rear-to-Front)     7159GR6
                 RackSwitch G8296 (Front-to-Rear)     7159GF5
                 RackSwitch G8272 (Rear-to-Front)     7159CRW
                 RackSwitch G8272 (Front-to-Rear)     7159CFV
                 RackSwitch G8264 (Rear-to-Front)     7159G64
                 RackSwitch G8264 (Front-to-Rear)     715964F
                 RackSwitch G8124E (Rear-to-Front)    7159BR6
                 RackSwitch G8124E (Front-to-Rear)    7159BF7
1Gb Ethernet     RackSwitch G8052 (Rear-to-Front)     7159G52
                 RackSwitch G8052 (Front-to-Rear)     715952F

Table 1: Network Switch Options

Note These configurations may change over time, so please contact [email protected] for any update.


4.3 SAP HANA Optimized Hardware Configurations

SEO models exist for certain configurations; please see Appendix E: Lenovo X6 Server MTM List & Model Overview on page 214 for more details.

4.3.1 System x3850 X6 Single Node Configurations

SAP Models    128 | 256 | 384 | 512               256 | 512
Product       x3850 X6
Type/Model    6241-AC3
CPU           2 × Intel Xeon® E7-8880v2/v3        4 × Intel Xeon® E7-8880v2/v3
Memory        128GB | 256GB | 384GB | 512GB       256GB | 512GB
Disk          6 × 1.2TB HDD, 2 × 400GB SSD
Controller    1 × M5210
Disk Layout   3.6TB RAID5 for SAP HANA data/log
Network       2 × Dual-Port 10GbE, 1 × Quad-Port 1GigE

Table 2: System x3850 X6 Single Node Configurations

4.3.2 System x3950 X6 Single Node Configurations

SAP Models    256 | 512 | 768 | 1024              1536 | 2048
Product       x3950 X6
Type/Model    6241-AC4
CPU           4 × Intel Xeon® E7-8880v2/v3
Memory        256GB | 512GB | 768GB | 1024GB      1536GB | 2048GB
Disk          6 × 1.2TB HDD, 2 × 400GB SSD        12 × 1.2TB HDD, 4 × 400GB SSD
Controller    1 × M5210                           2 × M5210
Disk Layout   3.6TB RAID5 for SAP HANA data/log   9.6TB RAID5 for SAP HANA data/log
Network       2 × Dual-Port 10GbE, 1 × Quad-Port 1GigE

Table 3: IBM System x3950 X6 Single Node Four Socket Configurations

SAP Models    512 | 1024                          1536 | 2048
Product       x3950 X6
Type/Model    6241-AC4
CPU           8 × Intel Xeon® E7-8880v2/v3
Memory        512GB | 1TB                         1.5TB | 2TB
Disk          6 × 1.2TB HDD, 2 × 400GB SSD        12 × 1.2TB HDD, 4 × 400GB SSD
Controller    1 × M5210                           2 × M5210
Disk Layout   3.6TB RAID5 for SAP HANA data/log
Network       2 × Dual-Port 10GbE, 1 × Quad-Port 1GigE

Table 4: System x3950 X6 Single Node Eight Socket Configurations


4.3.3 System x3850 X6 Single Node Four Socket Configurations with Storage Expansion

SAP Models    768 | 1024 | 1536* | 2048*
Product       x3850 X6
Type/Model    6241-AC3
CPU           4 × Intel Xeon® E7-8880v2/v3
Memory        768GB | 1TB | 1.5TB | 2TB
Disk          15 × 1.2TB HDD & 4 × 400GB SSD
Controller    1 × M5210 & 1 × M5120/M5225
Disk Layout   13.2TB RAID5 for SAP HANA data/log
Network       2 × Dual-Port 10GbE, 1 × Quad-Port 1GigE

Table 5: System x3850 X6 Single Node Four Socket Configurations with Storage Expansion * For Suite on HANA only, not Datamart and BW

4.3.4 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations

SAP Models    3TB | 4TB                            6TB
Product       x3950 X6
Type/Model    6241-AC4
CPU           8 × Intel Xeon® E7-8880v2/v3
Memory        3TB | 4TB                            6TB
Disk          21 × 1.2TB HDD & 6 × 400GB SSD       30 × 1.2TB HDD & 8 × 400GB SSD
Controller    2 × M5210 & 1 × M5120/M5225
Disk Layout   19.2TB RAID5 for SAP HANA data/log   28.8TB RAID5 for SAP HANA data/log
Network       2 × Dual-Port 10GbE, 1 × Quad-Port 1GigE

Table 6: System x3950 X6 SAP ERP on SAP HANA Single Node Configurations

4.3.5 System x3850 X6 Cluster Node Configurations with Storage Expansion

SAP Models    256 | 512 | 1024
Product       x3850 X6
Type/Model    6241-AC3
CPU           2 × Intel Xeon® E7-8880v2 or 4 × Intel Xeon® E7-8880v2/v3
Memory        256GB | 512GB | 1TB
Disk          15 × 1.2TB HDD & 4 × 400GB SSD
Controller    1 × M5210 & 1 × M5120/M5225
Disk Layout   13.2TB RAID5 for SAP HANA data/log
Network       2 × Dual-Port 10GbE, 1 × Quad-Port 1GigE

Table 7: System x3850 X6 Cluster Node Configurations with Storage Expansion


4.3.6 System x3950 X6 Cluster Node Configurations

SAP Models    512 | 1024                           1024 | 2048
Product       x3950 X6
Type/Model    6241-AC4
CPU           4 × Intel Xeon® E7-8880v2/v3         8 × Intel Xeon® E7-8880v2/v3
Memory        512GB | 1TB                          1TB | 2TB
Disk          12 × 1.2TB HDD & 4 × 400GB SSD       21 × 1.2TB HDD & 6 × 400GB SSD
Controller    2 × M5210                            2 × M5210 & 1 × M5120/M5225
Disk Layout   9.6TB RAID5 for SAP HANA data/log    19.2TB RAID5 for SAP HANA data/log
Network       2 × Dual-Port 10GbE, 1 × Quad-Port 1GigE

Table 8: System x3950 X6 Cluster Node Configurations

4.4 Card Placement

Attention
You need to make sure that the cards are placed in the correct PCI slots. Please refer to the tables below for the slot in which each card belongs. This step must be done before the installation. Please be aware that your machine is supported by Lenovo only with the correct card layout.
Depending on whether you have a two, four or eight socket machine, there is a different card placement. Please refer to figure 5 and table 10 for two socket machines, figure 6 and table 11 on page 17 for four socket machines, and figure 8 and table 12 on page 19 for eight socket machines. Concerning the numbering of the slots, please note that PCI slots 11 and 12 are located in the Storage Book, see figure 7. A x3950 X6 machine has an additional Storage Book containing PCI slots 43 and 44. The Storage Books are accessible from the front.

4.4.1 Network Interface Cards

The x3850 X6 machine comes with two Mellanox ConnectX-3 10GbE adapters that each provide two 10GbE ports, or two Mellanox ConnectX-3 FDR IB VPI adapters that each provide two QSFP ports. With QSA adapters the QSFP ports support SFP+ transceivers for 10GbE connectivity. A quad-port Intel I-350 provides four 1GbE ports and is placed in slot 10. In an x3950 X6 an additional I-350 card can be placed in slot 42. An Intel I-340 PCI card is optionally available if more 1GbE ports are needed. Please see the tables and figures below for the slot assignment of each card, depending on your machine type and configuration.
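To verify from the operating system that the expected adapters are present after placement, a quick PCI listing can help. This is a minimal sketch using standard commands, not a mandated step; the exact output and the slot-to-bus mapping depend on the machine and configuration.

    # List the Ethernet and InfiniBand adapters the OS detects
    # (generic example; compare the result against the tables below)
    lspci | grep -Ei 'ethernet|mellanox|infiniband'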

4.4.2 Slots for additional Network Interface Cards

If the customer needs more network ports, the PCI slots shown in table 9: Slots which may be used for additional NICs on page 16 may be used for additional NICs.


Machine                   PCI Slots
x3850 X6 two sockets      9, 10
x3850 X6 four sockets     2, 3, 5, 6, 10
x3950 X6 four sockets     9, 10, 41, 42
x3950 X6 eight sockets    5, 6, 10, 37, 38, 42

Table 9: Slots which may be used for additional NICs

4.4.3 RAID Adapter Cards

The internal RAID adapter is a ServeRAID M5210, which resides in slot 12 in the Storage Book. In the x3950 X6, two internal RAID adapters are used, residing in slots 12 and 44. The first external RAID adapter (ServeRAID M5120 or M5225) in an x3850 X6 is placed in slot 8, the second in slot 7, and the third in slot 9. In an x3950 X6 machine, placement starts in slot 40, then 39, then 41, and finally 7 and 8; refer to table 13 for details.

Ethernet Card                               Port Label   Slot   Device
ServeRAID M5210 (internal)                  –            12     –
Intel I-350 1GbE quad port                  E/F/G/H      10     eth4/eth5/eth6/eth7
Intel I-340 1GbE quad port *                –            9      eth8/eth9/eth10/eth11
Mellanox ConnectX-3 (10GbE or FDR IB VPI)   A/B          8      eth0/eth1
Mellanox ConnectX-3 (10GbE or FDR IB VPI)   C/D          7      eth2/eth3
100MbE internal Ethernet Adapter for
System Management via the IMM               I            –      –

Table 10: Card assignments for a two socket x3850 X6
* This card is optional


Figure 5: Workload Optimized System x3850 X6 2 Socket Rear View

Ethernet Card                               Port Label   Slot   Device
ServeRAID M5210 (internal)                  –            12     –
Intel I-350 1GbE quad port                  E/F/G/H      10     eth4/eth5/eth6/eth7
ServeRAID M5120/M5225 (external) *          –            9      –
ServeRAID M5120/M5225 (external) *          –            8      –
ServeRAID M5120/M5225 (external) *          –            7      –
Intel I-340 1GbE quad port *                –            5      eth8/eth9/eth10/eth11
Mellanox ConnectX-3 (10GbE or FDR IB VPI)   C/D          4      eth2/eth3
Mellanox ConnectX-3 (10GbE or FDR IB VPI)   A/B          1      eth0/eth1
100MbE internal Ethernet Adapter for
System Management via the IMM               I            –      –

Table 11: Card assignments for a four socket x3850 X6
* These cards are only used in certain configurations; please refer to section 4.4.3 for details


Figure 6: Workload Optimized System x3850 X6 4 Socket Rear View

Figure 7: Workload Optimized System Storage Book, containing slots 11 and 12; on the x3950 X6 an additional Storage Book contains slots 43 and 44


Ethernet Card                               Port Label   Slot   Device
Intel I-350 1GbE quad port                  E/F/G/H      10     eth4/eth5/eth6/eth7
Mellanox ConnectX-3 (10GbE or FDR IB VPI)   C/D          36     eth2/eth3
Mellanox ConnectX-3 (10GbE or FDR IB VPI)   A/B          4      eth0/eth1
100MbE internal Ethernet Adapter for
System Management via the IMM               I            –      –
100MbE internal Ethernet Adapter for
System Management via the IMM               J            –      –
Intel I-350 1GbE quad port *                K/L/M/N      42     e.g. eth8/eth9/eth10/eth11
Intel I-340 1GbE quad port *                –            5      e.g. eth8/eth9/eth10/eth11

Table 12: Network interface card assignments for an eight socket x3950 X6
* These cards are optional; please refer to table 13 for details

The original table distinguishes the 4-processor configurations (512GB 4S, 1TB 4S) from the 8-processor configurations (1TB, 2TB, 4TB, 6TB*, 12TB*); entries marked S/C or C apply only to certain single-node or cluster configurations. Per slot, the cards are placed as follows:

Slot   Card(s)
4      MLNX (Mellanox ConnectX-3)
7      MLNX or M5120/M5225, depending on the configuration
8      M5120/M5225 in certain configurations
10     I350 (all configurations)
12     M5210 (all configurations)
36     MLNX; M5120/M5225 in certain configurations
39     MLNX or M5120/M5225, depending on the configuration
40     M5120/M5225
41     M5120/M5225 in certain configurations
42     I350 (all configurations)
44     M5210 (all configurations)

Table 13: Card placement for x3950 X6 four socket and eight socket


Figure 8: Workload Optimized System x3950 X6 8 Socket Rear View


5 Networking

5.1 Networking Requirements

The networking for the Lenovo Solution, the Integrated Management Module (IMM) and the corresponding switches should be set up and integrated into the customer network environment according to the customer's requirements and the recommendations from SAP. SAP currently recommends that individual workloads are separated by either physical or virtual LAN addresses or subnets. The individual workloads described by SAP are:

• SAP HANA internal communication via SAP HANA private networking
• Customer access to the SAP HANA appliance via:
  – SAP Landscape Transformation Replication (LT)
  – Sybase Replication (SR)
  – SAP Business Objects Data Services (DS)
  – Business Objects XI, Microsoft Excel, etc.
  – Server data management tools for:
    ∗ System/DB backup and restore operations
  – Logical server application management (can be partially accomplished via the Integrated Management Module):
    ∗ SSH access, VNC access, SAP Support access

We strongly recommend that the following SAP workloads are given dedicated and distinct subnets using separate Ethernet adapters (NICs); otherwise the network setup becomes more complicated:

• SAP HANA client access
• Server data management
• Server application management

In addition to the SAP workloads, the Lenovo Solution defines two additional workloads:

• IBM clustered file system communication for GPFS
• Physical server management via the Integrated Management Module
  – Hardware support, console web access and SSH access

It is necessary to separate the IBM GPFS and SAP HANA internal networks from all other networks as well as from each other. Servers configured in a clustered scenario require two dedicated high speed NICs (e.g. 10GbE) with separate physical private LANs for the internal communication of GPFS and SAP HANA. In addition, external networks, e.g. for SAP Client/BW and SAP management communication, should be separated as well. If not, SAP HANA performance may be compromised and the system is supported by neither SAP nor Lenovo.

5.2 Jumbo Frames

The usage of so-called jumbo frames may be activated for the HANA and GPFS networks. Jumbo frames are Ethernet frames with a Maximum Transmission Unit (MTU) of up to 9000 bytes. The standard MTU is 1500 bytes.


The advantage of jumbo frames is less overhead for headers and checksum computation. This can lead to better network performance on the HANA and GPFS networks.

Attention Jumbo frames can only be used if all network components (for example networking adapters and switches) that have to process these jumbo frames support their usage. If erroneously activated, jumbo frames cause the loss of network connectivity.

The switches G8264, G8272, G8296 and G8124E are certified for usage in the Lenovo Solution appliance with jumbo frames. In a standard cluster setup jumbo frames can be activated. In DR11 or High Availability setups the HANA and GPFS networks may communicate via non-Lenovo customer switches that cannot handle jumbo frames; therefore it is recommended not to use jumbo frames in these setups. To change this behaviour, you have to change the MTU size. This can be done as follows:

• SUSE: in the YaST module for networking: General tab in the configuration of the network device/bond
• Red Hat: changing the MTU size in the file /etc/sysconfig/network-scripts/ifcfg-* of the interface/bond
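As an illustration, the following sketch shows how the MTU could be reset to the standard 1500 bytes for bond1 on Red Hat; the interface name is an example and the same value can be set via YaST on SUSE.

    # Example (Red Hat): set the MTU of bond1 back to the standard 1500 bytes.
    # The interface name (bond1) is an example; adjust it to your configuration.
    sed -i 's/^MTU=9000/MTU=1500/' /etc/sysconfig/network-scripts/ifcfg-bond1
    # Restart the interface so the new MTU takes effect
    ifdown bond1 && ifup bond1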

Warning Jumbo frames are activated during the installation phase for bond0 and bond1. You may have to deactivate the usage of jumbo frames in certain scenarios.

5.3 Network Configuration

Before you configure the server and install the Lenovo Solution, please gather the following network information from your network administrator where indicated with the b symbol. Please use only IPv4 addresses.

Note In case the customer plans to install a single node configuration but would like to scale it out to a cluster by adding more servers, plan the network configuration for the GPFS and HANA networks as if the cluster already existed, in order to simplify a later scale-out.

IP Address               b
Default Network Prefix   b
Default Netmask          b
Default Gateway          b
Primary DNS IP           b
Secondary DNS IP         b
Domain Search            b
NTP Server               b

Table 14: Customer infrastructure addresses

11 Disaster Recovery (previously SAP Disaster Tolerance)


Server Node 01 (Worker/Stand-By/Single)
  Port Label: any A / C (mandatory)
    Network: IBM GPFS Private Network (predefined)
    IP Address: Single: 127.0.1.1 (default); Cluster: 192.168.10.101 (example) b
    Hostname: gpfsnode01 (recommended) b
    Netmask: 255.255.255.0 (recommended)
    Gateway: None (recommended)
  Port Label: any B / D (mandatory)
    Network: SAP HANA Private Network (predefined)
    IP Address: Single: 127.0.2.1 (default); Cluster: 192.168.20.101 (example) b
    Hostname: hananode01 (recommended) b
    Netmask: 255.255.255.0 (recommended)
    Gateway: None (recommended)
  Port Label: any of the remaining NIC ports
    Network: Customer Network
    IP Address: b   Hostname: b   Netmask: b   Gateway: b
  Port Label: I
    Network: IMM
    IP Address: b   Hostname: b   Netmask: b   Gateway: b

Server Node 02 (Worker/Stand-By)
  Port Label: any A / C (mandatory)
    Network: IBM GPFS Private Network (predefined)
    IP Address: Single: 127.0.1.1 (default); Cluster: 192.168.10.102 (example)
    Hostname: gpfsnode02 (recommended)
    Netmask: 255.255.255.0 (recommended)
    Gateway: None (recommended)
  Port Label: any B / D (mandatory)
    Network: SAP HANA Private Network (predefined)
    IP Address: Single: 127.0.2.1 (default); Cluster: 192.168.20.102 (example)
    Hostname: hananode02 (recommended)
    Netmask: 255.255.255.0 (recommended)
    Gateway: None (recommended)

  ... and so on for all other nodes

Table 15: IP address configuration

5.4 Network Switch Configuration For Clustered Installations

In a clustered configuration with high availability, the internal networks of the appliance for GPFS and HANA are set up with redundant links. These connect to redundant G8264, G8272, G8296 or G8124E 10GigE switches. Both switches are connected with a minimum of two ISL ports; it is recommended to use the 40GbE ports for the ISLs. On the host side the two corresponding ports of each network are configured as Linux bond devices. The data replication connection to the primary data source can also be set up in a redundant fashion and connects directly to the appliance-internal 10GigE HANA network. The details of this setup depend strongly on the customer's network infrastructure and need to be planned accordingly. Details on the exact configuration can be found in chapter 5.6.7: Network Configurations in a Clustered Environment on page 30.

Warning When connecting the data replication network directly to the internal 10GigE network, an ACL needs to be configured on the uplink port to isolate the internal networks (e.g. 127.0.n.24) from the customer network.

If a network adapter or one of the switches fails, the SAP HANA network and the GPFS network are taken over by the remaining switch and network adapter. It is recommended to establish redundant network connections for the other networks (e.g. client network) as well. This setup is similar to the internal networks and requires two identical 1GigE or 10GigE switches


(e.g. G8052 1GigE or G8264 10GigE). As long as there is one redundant path to each server, the remaining appliance and data management networks can be implemented with a single link. Each of the networks will then connect to one of the two switches. To implement network redundancy on the switch level, a Virtual Link Aggregation Group (VLAG) needs to be created on the two network switches. A VLAG requires a dedicated inter-switch link (ISL) for synchronization. More details can be found in chapter 5.6.7: Network Configurations in a Clustered Environment on page 30. Note For more details on VLAGs please obtain the Application Guide corresponding to the RackSwitch model and N/OS you have installed and consult the chapter "Virtual Link Aggregation Groups" (e.g. "RackSwitch G8272 Application Guide").
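To verify on the host side that both links of a bond are up after cabling, a quick check of the kernel's bonding status can be used; a minimal sketch, assuming the bond0/bond1 naming used throughout this guide:

    # Show the state of the GPFS and HANA bonds and their slave interfaces
    cat /proc/net/bonding/bond0
    cat /proc/net/bonding/bond1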

5.5 Customer Site Networks

We allow the customer to define and use their own networks and connect them to the dedicated customer network NICs using their own switch infrastructure. Please ensure the proper IP address setup on the Lenovo Solution server. This guide does not go into detail regarding the customer's switch configuration, nor regarding its configuration in the cluster.

5.6 Network Definitions

5.6.1 Numbering conventions

Network        IP-Interf.   VLAN      LACP-Key     VLAG-Key   Tier-ID   Subnet
MGMT (G8264)   128          4095*     -            -          -         -
MGMT (G8272)   128          4095*     -            -          -         -
MGMT (G8296)   128          4095*     -            -          -         -
MGMT (G8124)   128          4095*     -            -          -         -
MGMT (G8052)   128          4092**    -            -          -         -
ISL            -            4094      VLAN+1000    LACP Key   10        -
GPFS           -            100(++)   port#+1000   LACP-Key   -         192.168.10.0/24
HANA           -            200(++)   port#+1000   LACP-Key   -         192.168.20.0/24
IMM (BMC)      -            300(++)   port#+1000   LACP-Key   -         192.168.30.0/24

* VLAN 4095 is internally assigned to the management port(s) and cannot be changed.
** VLAN 4092 is a suggestion for the management VLAN.

Table 16: Numbering conventions

Note The "(++)" in the table above indicates that +1 should be added for every new network in case of multiple GPFS, HANA or IMM LANs.

5.6.2 Internal Networks – Option 1 G8264 RackSwitch 10Gbit

This option is defined to use the G8264 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and SAP HANA. This allows up to 24 Lenovo Solution servers (or 26 servers with "40G -> 4x 10G" breakout cable on ports 9 or 13) to be connected. The setup is as follows:


[Figure: Two G8264 switches. On each switch, the odd ports 17, 19, 21, ..., 63 carry the GPFS network and the even ports 18, 20, 22, ..., 64 carry the HANA network; ports 1 and 5 of both switches form the bonded 40Gb Inter-Switch Link (ISL), and the MGMT port connects to the management network.]

Figure 9: G8264 RackSwitch front view

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should be used consistently as the internal (private) network within this guide.

Switch    Port   VLAN   IP Address       Hostname     Server NIC
g8264-1   MGMT   4095   n/a
g8264-1   17     100    192.168.10.101   gpfsnode01   bond0
g8264-1   18     200    192.168.20.101   hananode01   bond1
g8264-1   19     100    192.168.10.102   gpfsnode02   bond0
g8264-1   20     200    192.168.20.102   hananode02   bond1
...
g8264-1   63     100    192.168.10.124   gpfsnode24   bond0
g8264-1   64     200    192.168.20.124   hananode24   bond1
g8264-2   MGMT   4095   n/a
g8264-2   17     100    192.168.10.101   gpfsnode01   bond0
g8264-2   18     200    192.168.20.101   hananode01   bond1
g8264-2   19     100    192.168.10.102   gpfsnode02   bond0
g8264-2   20     200    192.168.20.102   hananode02   bond1
...
g8264-2   63     100    192.168.10.124   gpfsnode24   bond0
g8264-2   64     200    192.168.20.124   hananode24   bond1

Table 17: G8264 RackSwitch port assignments

Note There is no public network attached to these switches.


5.6.3 Internal Networks – Option 2 G8124 RackSwitch 10Gbit

This option is defined to use the G8124 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and SAP HANA. This allows up to 7 Lenovo Solution servers to be connected. The setup is as follows:

[Figure: Two G8124 switches. On each switch, the odd ports 1, 3, 5, ..., 13 carry the GPFS network and the even ports 2, 4, 6, ..., 14 carry the HANA network; ports 23 and 24 of both switches form the bonded 10Gb Inter-Switch Link (ISL), and the MGMT port connects to the management network.]

Figure 10: G8124 RackSwitch front view

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should be used consistently as the internal (private) network within this guide.


Switch    Port     VLAN   IP Address       Hostname     Server NIC
g8124-1   MGMT-b   4095   n/a
g8124-1   1        100    192.168.10.101   gpfsnode01   bond0
g8124-1   2        200    192.168.20.101   hananode01   bond1
g8124-1   3        100    192.168.10.102   gpfsnode02   bond0
g8124-1   4        200    192.168.20.102   hananode02   bond1
g8124-1   5        100    192.168.10.103   gpfsnode03   bond0
g8124-1   6        200    192.168.20.103   hananode03   bond1
...
g8124-1   13       100    192.168.10.107   gpfsnode07   bond0
g8124-1   14       200    192.168.20.107   hananode07   bond1
g8124-2   MGMT-b   4095   n/a
g8124-2   1        100    192.168.10.101   gpfsnode01   bond0
g8124-2   2        200    192.168.20.101   hananode01   bond1
g8124-2   3        100    192.168.10.102   gpfsnode02   bond0
g8124-2   4        200    192.168.20.102   hananode02   bond1
g8124-2   5        100    192.168.10.103   gpfsnode03   bond0
g8124-2   6        200    192.168.20.103   hananode03   bond1
...
g8124-2   13       100    192.168.10.107   gpfsnode07   bond0
g8124-2   14       200    192.168.20.107   hananode07   bond1

Table 18: G8124 RackSwitch port assignments

5.6.4 Internal Networks – Option 3 G8272 RackSwitch 10Gbit

This option is defined to use the G8272 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and SAP HANA. This allows up to 24 Lenovo Solution servers (or 32 servers with "40G -> 4x 10G" breakout cables on ports 49, 50, 51 or 52) to be connected. The setup is as follows:

[Figure: Two G8272 switches. On each switch, the odd ports 1, 3, 5, ..., 47 carry the GPFS network and the even ports 2, 4, 6, ..., 48 carry the HANA network; ports 53 and 54 of both switches form the bonded 40Gb Inter-Switch Link (ISL), and the MGMT port connects to the management network.]

Figure 11: G8272 RackSwitch front view


This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should be used consistently as the internal (private) network within this guide.

Switch    Port   VLAN   IP Address       Hostname     Server NIC
g8272-1   MGMT   4095   n/a
g8272-1   1      100    192.168.10.101   gpfsnode01   bond0
g8272-1   2      200    192.168.20.101   hananode01   bond1
g8272-1   3      100    192.168.10.102   gpfsnode02   bond0
g8272-1   4      200    192.168.20.102   hananode02   bond1
...
g8272-1   47     100    192.168.10.124   gpfsnode24   bond0
g8272-1   48     200    192.168.20.124   hananode24   bond1
g8272-2   MGMT   4095   n/a
g8272-2   1      100    192.168.10.101   gpfsnode01   bond0
g8272-2   2      200    192.168.20.101   hananode01   bond1
g8272-2   3      100    192.168.10.102   gpfsnode02   bond0
g8272-2   4      200    192.168.20.102   hananode02   bond1
...
g8272-2   47     100    192.168.10.124   gpfsnode24   bond0
g8272-2   48     200    192.168.20.124   hananode24   bond1

Table 19: G8272 RackSwitch port assignments

Note There is no public network attached to these switches.

5.6.5 Internal Networks – Option 4 G8296 RackSwitch 10Gbit

This option is defined to use the G8296 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and SAP HANA. This allows up to 43 Lenovo Solution servers (or 47 servers with "40G -> 4x 10G" breakout cables on ports 87 and 88) to be connected. The setup is as follows:

[Figure: Two G8296 switches. On each switch, the odd ports 1, 3, ..., 47 and 49, 51, ..., 85 carry the GPFS network and the even ports 2, 4, ..., 48 and 50, 52, ..., 86 carry the HANA network; ports 95 and 96 of both switches form the bonded 40Gb Inter-Switch Link (ISL), and the MGMT port connects to the management network.]


Figure 12: G8296 RackSwitch front view

This guide defines the IBM GPFS network to be used as 192.168.10.0/24 and the SAP HANA network to be used as 192.168.20.0/24. If the customer wants to use a different IP range he may do so, but it should be used consistently as the internal (private) network within this guide.

Switch    Port   VLAN   IP Address       Hostname     Server NIC
g8296-1   MGMT   4095   n/a
g8296-1   1      100    192.168.10.101   gpfsnode01   bond0
g8296-1   2      200    192.168.20.101   hananode01   bond1
g8296-1   3      100    192.168.10.102   gpfsnode02   bond0
g8296-1   4      200    192.168.20.102   hananode02   bond1
...
g8296-1   85     100    192.168.10.143   gpfsnode43   bond0
g8296-1   86     200    192.168.20.143   hananode43   bond1
g8296-2   MGMT   4095   n/a
g8296-2   1      100    192.168.10.101   gpfsnode01   bond0
g8296-2   2      200    192.168.20.101   hananode01   bond1
g8296-2   3      100    192.168.10.102   gpfsnode02   bond0
g8296-2   4      200    192.168.20.102   hananode02   bond1
...
g8296-2   85     100    192.168.10.143   gpfsnode43   bond0
g8296-2   86     200    192.168.20.143   hananode43   bond1

Table 20: G8296 RackSwitch port assignments

Note There is no public network attached to these switches.

5.6.6 Administrative, SAP-Access and Backup Networks – Option G8052 RackSwitch 1Gbit

The G8052 RackSwitch 1Gbit Ethernet switch is mainly used for the administrative networks. It can also be used for SAP-Access, backup or other client-specific networks. These networks are both public and private and need to be carefully separated with VLANs. The landscape is as follows:


[Figure: Two G8052 switches. On each switch, the odd ports 1, 3, 5, ..., 47 carry the IMM network and the even ports 2, 4, 6, ..., 48 carry further administrative networks; ports 49 and 50 of both switches form the bonded 1Gb Inter-Switch Link (ISL), and ports 51 and 52 are used for management.]

Figure 13: G8052 RackSwitch front view

This guide defines the Integrated Management Module (IMM) network to be 192.168.30.0/24. If the customer wants to use a different IP range for the Integrated Management Module (IMM) he may do so, but it should be used consistently within this guide.

Switch    Port   VLAN   IP Address       Hostname              Server NIC
g8052-1   52     4092   n/a
g8052-1   1      300    192.168.30.101   cust-imm01.site.net   sys-mgmt
g8052-1   3      300    192.168.30.102   cust-imm02.site.net   sys-mgmt
...
g8052-1   47     300    192.168.30.124   cust-imm24.site.net   sys-mgmt
g8052-2   52     4092   n/a
g8052-2   1      300    192.168.30.125   cust-imm25.site.net   sys-mgmt
g8052-2   3      300    192.168.30.126   cust-imm26.site.net   sys-mgmt
...
g8052-2   47     300    192.168.30.148   cust-imm48.site.net   sys-mgmt

Table 21: G8052 RackSwitch port assignments

5.6.7 Network Configurations in a Clustered Environment

The networking in the clustered environment is the cornerstone of the Lenovo Solution. It is therefore important to ensure that the network (switches, wires, etc.) has been set up before starting the installation of the servers. Figure 14 shows one example of how to connect the customer's network infrastructure to the clustered environment. Please read section 5.7: Setting up the Switches on page 31 for the RackSwitch setup.


[Figure: Cluster node network diagram. Each node (Node1 ... NodeN) connects its HANA bond (10GbE interfaces 6 and 8) and its GPFS bond (10GbE interfaces 7 and 9) to both appliance-internal switches, which are joined by bonded inter-switch links. The IMM ports and the remaining 1GbE/10GbE interfaces connect to customer switches of choice for SAP client access, system management and SAP Business Suite connectivity.]

Figure 14: Cluster Node Network Diagram

5.7 Setting up the Switches

5.7.1 Basic Switch Configuration Setup

5.7.1.1 Configuring SSH/SCP Features on the Switch

SSH and SCP features are disabled by default. To change the SSH/SCP settings, use the following procedure. Connect to the switch via a serial console and execute the following commands:

RS 8XXX> enable
RS 8XXX# configure terminal
RS 8XXX(config)# ssh enable
RS 8XXX(config)# ssh scp-enable
RS 8XXX(config)# interface ip 128
RS 8XXX(config-ip-if)# ip address <management IP address>
RS 8XXX(config-ip-if)# enable
RS 8XXX(config-ip-if)# exit

Example: Configuring the gateway:

RS 8XXX(config)# ip gateway 4 address <gateway IP address>
RS 8XXX(config)# ip gateway 4 enable

Save changes to switch FLASH memory:

RS 8XXX# copy running-config startup-config
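Once SSH is enabled and the management interface is configured, reachability can be verified from any host on the management network. This is a minimal sketch, assuming the default admin account and the management IP assigned above:

    # Verify that the switch answers via SSH on its management IP
    ssh admin@<switch management IP>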


5.7.1.2 Simple Network Management Protocol Version 3

SNMP version 3 (SNMPv3) is an enhanced version of the Simple Network Management Protocol, approved by the Internet Engineering Steering Group in March 2002. SNMPv3 contains additional security and authentication features that provide data origin authentication, data integrity checks, timeliness indicators and encryption to protect against threats such as masquerade, modification of information, message stream modification and disclosure. SNMPv3 allows clients to query the MIBs securely. SNMPv3 configuration is managed using the following command path menu:

RS 8XXX(config)# snmp-server ?

The default configuration of N/OS has two SNMPv3 users. Both of the following users have access to all the MIBs supported by the switch:

• User name adminmd5 (password adminmd5); authentication used is MD5
• User name adminsha (password adminsha); authentication used is SHA

You can try to connect to the switch using the following command:

# snmpwalk -v 3 -c Public -u adminmd5 -a md5 -A adminmd5 -x des -X adminmd5 -l authPriv <switch IP> sysDescr.0

5.7.2 Advanced Setup of the Switches

For every switch in the cluster do the following. It is mandatory to set up a Virtual Link Aggregation Group (VLAG) between the switches as well as a Virtual Local Area Network (VLAN) for each private network. The following illustration shows the setup for an M-sized cluster using the G8264 RackSwitches.

[Figure: Two G8264 switches (MGMT 1: 192.168.255.253/24, MGMT 2: 192.168.255.252/24, both on VLAN 4095) joined by an ISL on VLAN 4094 with Tier-ID 10 (ports 1 and 5). Each node connects bond0 (GPFS) to an odd port (e.g. 17, 29) and bond1 (HANA) to an even port (e.g. 18, 30) on both switches.]

Figure 15: Cluster Switch Networking Example

Note Please make sure that you pick the same port of each of the two Mellanox adapters for each of the internal networks. This reduces complexity.


Note The management IP addresses are examples and need to be customized according to the customer’s network. These instructions are for RackSwitch N/OS Version 8.2. Newer versions may have different commands. Please check the RackSwitch Industry-Standard CLI Reference for the version of the CLI that correlates to the switch N/OS version.

5.7.3 Disable Spanning Tree Protocol

RS 8XXX (config)# spanning-tree mode disable
RS 8XXX (config)# no spanning-tree stg-auto

Note Spanning-Tree is disabled globally with "spanning-tree mode disable". The setting "no spanning-tree stg-auto" prevents the switch from automatically creating STG groups when defining VLANs.

5.7.4 Disable Default IP Address

RS 8XXX (config)# no system default-ip data

5.7.5 Enable L4Port Hash

RS 82XX (config)# portchannel thash l4port

5.7.6 Disable Routing

RS 8XXX (config)# no ip routing

5.7.7 Add Networking

For each subnetwork, you should create the following VLANs and Trunk VLAG configurations as described.

5.7.8 VLAN configurations

5.7.8.1 IBM GPFS Storage Network

• Create IP interface for the GPFS storage network

# Define Switch 1,2
RS 8XXX (config)# vlan 100
RS 8XXX (config)# interface ip 10
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.249 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.10.248 255.255.255.0
RS 8XXX (config-ip-if)# vlan 100
RS 8XXX (config-ip-if)# enable


RS 8XXX (config-ip-if)# exit

• Define LACP Trunk for each VLAN

# Define on Switches 1,2
# RS 8264 ports 9-63, odd (bottom) ports
# RS 8272 ports 1-47, odd (bottom) ports
# RS 8296 ports 1-78, odd ports
# RS 8124 ports 1-21, odd ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 100
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key <1000+port#>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit

Repeat this for every port that needs to be configured.
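Because the same stanza has to be repeated for every server-facing port, a small helper that prints the commands for all GPFS ports can save typing. This is a minimal sketch, assuming the G8264 odd-port layout from table 17 (ports 17-63); its output is meant to be pasted into the switch console, and the port range must be adjusted for other switch models.

    #!/bin/bash
    # Print the GPFS port configuration stanza for the odd ports 17..63
    # of a G8264 (adjust the range for other switch models).
    for port in $(seq 17 2 63); do
        echo "interface port ${port}"
        echo " switchport access vlan 100"
        echo " lacp mode active"
        echo " lacp key $((1000 + port))"
        echo " bpdu-guard"
        echo " spanning-tree portfast"
        echo " exit"
    done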

5.7.8.2 SAP HANA Network

• Create IP interface for the HANA network

# Define Switch 1,2
RS 8XXX (config)# vlan 200
RS 8XXX (config)# interface ip 20
# next line for the 1st switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.249 255.255.255.0
# next line for the 2nd switch:
RS 8XXX (config-ip-if)# ip address 192.168.20.248 255.255.255.0
RS 8XXX (config-ip-if)# vlan 200
RS 8XXX (config-ip-if)# enable
RS 8XXX (config-ip-if)# exit

• Define LACP Trunk for each VLAN

# Define on Switches 1,2
# RS 8264 ports 10-64, even (top) ports
# RS 8272 ports 2-48, even (top) ports
# RS 8296 ports 2-88, even ports
# RS 8124 ports 2-22, even ports
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport access vlan 200
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key <1000+port#>
RS 8XXX (config-if)# bpdu-guard
RS 8XXX (config-if)# spanning-tree portfast
RS 8XXX (config-if)# exit

Repeat this for every port that needs to be configured.

5.7.8.3 Integrated Management Module (IMM) Network

• Create IP interface for the IMM network

# Define Switch 1,2
RS 8052 (config)# vlan 300


RS 8052 (config)# interface ip 30
# next line for the 1st switch:
RS 8052 (config-ip-if)# ip address 192.168.30.249 255.255.255.0
# next line for the 2nd switch:
RS 8052 (config-ip-if)# ip address 192.168.30.248 255.255.255.0
RS 8052 (config-ip-if)# vlan 300
RS 8052 (config-ip-if)# enable
RS 8052 (config-ip-if)# exit

• Set access VLAN for switchports

# Define Switch 1,2
# RS 8052 ports 1-47
RS 8052 (config)# interface port <port>
RS 8052 (config-if)# switchport access vlan 300
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit

# RS 8052 port 48 as management port
RS 8052 (config)# interface port 48
RS 8052 (config-if)# description MGMTPort
RS 8052 (config-if)# switchport access vlan 4092
RS 8052 (config-if)# bpdu-guard
RS 8052 (config-if)# exit

5.7.8.4 Enabling VLAG Setup

• Create trunk (dynamic or static) used as ISL

# one of the next five lines is valid according to the switch type
RS 8264 (config)# interface port 1,5
RS 8272 (config)# interface port 53,54
RS 8296 (config)# interface port 95,96
RS 8124 (config)# interface port 23,24
RS 8052 (config)# interface port 49,50
RS 8XXX (config-if)# switchport mode trunk
# next line defines the VLANs needed on the ISL on the HANA/GPFS-switches
RS 82XX (config-if)# switchport trunk allowed vlan add 4094,[HANA VLAN(S),GPFS VLAN(S)]
# next line defines the VLANs needed for the ISL on the IMM-switches
RS 8052 (config-if)# switchport trunk allowed vlan add 4094,[IMM VLAN(S)]
RS 8XXX (config-if)# lacp mode active
RS 8XXX (config-if)# lacp key 5094
RS 8XXX (config-if)# enable
RS 8XXX (config-if)# exit
RS 8XXX (config)# vlag enable

• Define VLAG peer relationship for each VLAN

# Define Switch 1
RS 8XXX (config)# vlag tier-id 10
RS 8XXX (config)# vlag hlthchk peer-ip <management IP of switch 2>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <port> in the configured GPFS/HANA ports
RS 8XXX (config)# vlag adminkey <1000+port#> enable

# Define Switch 2
RS 8XXX (config)# vlag tier-id 10


RS 8XXX (config)# vlag hlthchk peer-ip <management IP of switch 1>
RS 8XXX (config)# vlag isl adminkey 5094
# For each <port> in the configured GPFS/HANA ports
RS 8XXX (config)# vlag adminkey <1000+port#> enable

5.7.9 Save changes to switch FLASH memory

RS 8XXX# copy running-config startup-config

5.8 Inter-Site Portchannel Configuration

In a stretched HA or DR scenario an inter-site port channel needs to be configured. The inter-site port channel configuration depends on the customer premise equipment and infrastructure. This chapter describes several options for implementing this configuration. The following examples are based on the G8264 port layout. For the other supported RackSwitch types the following ports should be used:

• G8124 solution: depending on the connection type, switch port 22, or ports 21-22 respectively
• G8272 solution: depending on the connection type, switch port 48, or ports 47-48 respectively
• G8296 solution: depending on the connection type, switch port 86, or ports 85-86 respectively

If the port channel configuration is needed for a stretched HA setup, the HANA and the GPFS VLANs have to be enabled on the trunk interfaces. If the port channel trunk is for a DR setup, only the GPFS VLANs have to be enabled on the trunk interfaces.

5.8.1 Static Trunk over one Inter-Site Link

If there is just one single site-interconnect available, as shown in the drawing below, the following configuration has to be applied to the switches to establish a static inter-site connection.

[Figure: Single inter-site link. Each site has a pair of G8264 switches (1a/1b and 2a/2b) joined by a local ISL on ports 1 and 5, with GPFS on the odd ports 17-63 and HANA on the even ports 18-64; a single link connects switch 1a to switch 2a between the sites.]

• Switchport Portchannel Configuration

# Define Switch 1a,2a
# RS 8264 port 64
# RS 8272 port 48
# RS 8296 port 86
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk


# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution. Only the GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit

5.8.2 Portchannel over two Inter-Site Links

If there are two site-interconnect fibres, as shown in the drawing below, each of them should be connected to a different switch, instead of connecting both to just one switch of the pair. The following configuration has to be applied to the switches to establish one logical static inter-site connection over the two cables.

[Figure: Redundant inter-site links, one per switch: switch 1a connects to switch 2a and switch 1b connects to switch 2b, while each site keeps its local ISL between the a and b switches.]

• Switchport Portchannel Configuration

# Define Switch 1a,2a,1b,2b
# RS 8264 port 64
# RS 8272 port 48
# RS 8296 port 86
# RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution. Only the GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable


RS 8XXX (config)# vlag portchannel 63 enable

5.8.3 Portchannel over four Inter-Site Links

If there are four site-interconnect fibres, as shown in the drawing below, two of them should be connected on port 63 and port 64 of each switch. The following configuration has to be applied to the switches to establish one logical static inter-site connection over the four cables.

[Figure: Port channel over four inter-site links, two on each switch: ports 63 and 64 of switches 1a and 1b connect to ports 63 and 64 of switches 2a and 2b, while each site keeps its local ISL between the a and b switches.]

• Switchport Portchannel Configuration

# Define Switch 1a,1b,2a,2b
# RS 8264 ports 63,64
# RS 8272 ports 47,48
# RS 8296 ports 85,86
# RS 8124 ports 21,22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution. Only the GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <first port>
RS 8XXX (config)# portchannel 63 port <second port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable

5.8.4 Save and Restore Switch Configuration

5.8.4.1 Save Switch Configuration Locally

Execute:


# scp admin@<switch IP>:getcfg .

5.8.4.2 Restore Switch Configuration

Execute:

# scp getcfg admin@<switch IP>:putcfg
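To keep dated backups of both switches of a pair, the two copy commands can be wrapped in a small script. This is a minimal sketch, assuming the example management IPs used in this chapter and working SSH access to the switches:

    #!/bin/bash
    # Fetch the running configuration of both cluster switches
    # and store it with a timestamp (example IPs from this guide).
    DATE=$(date +%Y%m%d)
    for ip in 192.168.255.253 192.168.255.252; do
        scp "admin@${ip}:getcfg" "switch-${ip}-${DATE}.cfg"
    done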

5.9 Generation of Switch Configurations

The script SwitchAutoConfig.sh can be used to create basic configurations for the switch models G8124 and G8264. We recommend copying and pasting the created configuration into the serial console of the switches. SwitchAutoConfig.sh can be found in /opt/lenovo/saphana/bin/. As a prerequisite for SwitchAutoConfig.sh, the switches must have the basic configuration applied as described in chapter 5.7.1: Basic Switch Configuration Setup on page 31, and be reachable via SSH over the network.

5.9.1 Script Usage

./SwitchAutoConfig.sh -h

usage: ./SwitchAutoConfig.sh [-c type] [-d type] styletypes=[G8264|G8052|G8124]

-c   just creates switch configurations for the chosen switch type
-d   creates and also deploys the switch configurations for the chosen switch type

Example: SwitchAutoConfig.sh -d G8264

Note The current version of the script does not support the automated creation of G8272 and G8296 RackSwitch configurations. To obtain such configuration files, generate the configuration for the G8264 RackSwitch (SwitchAutoConfig.sh -c G8264) and adapt the port numbers according to table 19: G8272 RackSwitch port assignments on page 28 or table 20: G8296 RackSwitch port assignments on page 29. For the same reason it is not possible to use the -d option for these models.

5.9.2 Examples

The following command will create the configurations for a G8264 switch pair. You will be asked to enter configuration details like IP addresses.

./SwitchAutoConfig.sh -c G8264

The following command will create and deploy the configurations for a G8264 switch pair. After the configuration part you have to enter the SSH password of the switches, twice per switch: the first time you enter the SSH password, the script checks the firmware version of the switches; the second time, the password is needed for the deployment process.

./SwitchAutoConfig.sh -d G8264


Attention Please be very careful if you create the configuration for a switch connected to the customer network. In this case make sure that the switch is disconnected during the setup. Only bring up the connection to the customer network once the configuration is complete and matches the customer requirements. After the configuration deployment the switches should be checked manually. Afterwards the configuration can be saved as described in chapter 5.7.9: Save changes to switch FLASH memory on page 36.

5.9.3 Input Values

All the default values are based on the Networking Guide standards, but can be changed if needed. Most input values, like hostnames or IP addresses, need to be provided by the customer. A port channel is only needed in case of a DR or HA cluster. If a port channel should be configured, the script will ask for the type of port channel that has to be configured; there are two port channel options, HA or DR. The GPFS, HANA, xCat and IMM VLAN IPs are IPs that reside within those VLANs; their purpose is to be able to ping server addresses within these VLANs from the switch. For the G8052 the script will ask for a MGMT port, because the G8052 has no dedicated management port.


6 Guided Install of the Lenovo Solution

This section describes the installation and configuration of HANA on SUSE Linux Enterprise Server for SAP Applications 11 SP3 and HANA on Red Hat Enterprise Linux 6.6. Subsections that only apply to one of these operating systems are marked accordingly. This section applies starting from non-OS component DVD version 1.9.96-13. The software installation and configuration is executed at the customer site. This includes networking customization, IBM GPFS cluster setup and SAP HANA installation. It does not include the connection and replication to SAP Business Suite back-end systems (such as ERP or BW).

Attention Please read SAP Note 2001528– Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11, and SAP Note 2159166– SAP HANA SPS 09 Database Revision 96 to learn about known issues and recommendations by SAP.

Note It is highly recommended to check the system setup and software versions of installed compo- nents after the complete installation process. See section 15.2: Basic System Check on page 183 how to achieve this.

Phase   Actions
BoMC    Firmware upgrades (recommended); reboot
1       OS installation; reboot
2       OS and network configuration; reboot; RAID and GPFS configuration & installation
3       HANA configuration & installation

Table 22: Installation Process and Phases

Guided Installation Instructions for Single Node Installations:
1. Certified hardware ordered and available: Chapter 4: Hardware Configurations on page 9
2. Preparation: Chapter 6.1: Preparation on page 42
3. Acquiring TCP/IP addresses and host names: Section 5.3: Network Configuration on page 22
4. Phase 1: Installation of the operating system: Section 6.2: Phase 1 on page 48
5. Phase 2: OS configuration: Section 6.3: Phase 2 – SLES for SAP on page 53, or 6.4: Phase 2 – RHEL on page 58
6. Interim system check: Section 6.5: Interim Check on page 60
7. Phase 3: Installing IBM GPFS, SAP HANA and final configuration: Section 6.6: Phase 3 on page 62
8. Final system check: Chapter 15: System Check and Support on page 183


Guided Installation Instructions for Clustered Nodes Installations:
1. Certified hardware ordered and available: Chapter 4: Hardware Configurations on page 9
2. Preparation: Chapter 6.1: Preparation on page 42
3. Acquiring TCP/IP addresses and host names: Section 5.3: Network Configuration on page 22
4. Cluster network switch setup: Section 5.6.7: Network Configurations in a Clustered Environment on page 30
5. Phase 1: Installation of the operating system: Section 6.2: Phase 1 on page 48
6. Phase 2: OS configuration: Section 6.3: Phase 2 – SLES for SAP on page 53, or 6.4: Phase 2 – RHEL on page 58
7. Interim system check: Section 6.5: Interim Check on page 60
8. Phase 3: Installing IBM GPFS, SAP HANA and final configuration: Section 6.6: Phase 3 on page 62
9. Final system check: Chapter 15: System Check and Support on page 183

6.1 Preparation

As you might not be able to access online documentation at the customer site, please make yourself familiar with the following links and downloads before arriving, so that you do not arrive without information that is useful. Please note that these documents in turn might reference other documentation not mentioned here, so you may need to obtain this as well. We highly recommend the SAP HANA Installation Guides as well as the SAP HANA TOC Manual.

Experience SAP HANA           http://experiencesaphana.com
SAP Service Marketplace       https://service.sap.com/hana*
SAP Help Portal – SAP HANA    http://help.sap.com/hana_appliance
SAP HANA 1.0: Central Note    https://service.sap.com/sap/support/notes/1514967*
SAP HANA Sizing Guide         https://service.sap.com/sap/support/notes/1514966*
Release Restrictions Note     https://service.sap.com/sap/support/notes/1513496*

Table 23: SAP HANA references
* SAP Service Marketplace ID required

Depending on the customer’s operation guidelines it might be necessary to prepare the customer infras- tructure beforehand so that the HANA appliance can be integrated in a smooth and timely manner. What follows are a few tips we have collected while talking with SAP.

6.1.1 Firewall Preparations

If the customer has firewalls running between the HANA appliance and the connected components (ERP, clients, backup & restore servers, etc.), make sure that the appropriate network ports are opened. For details on the relevant ports please refer to the SAP HANA security guide at http://help.sap.com/hana_appliance → Security.
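As an illustration of how such a firewall rule can be verified from a client network, a simple reachability test may be used. This is a minimal sketch; the port number 30015 is the well-known SQL port for a HANA instance number 00 (pattern 3<instance>15), and the target hostname is a placeholder.

    # Check whether the HANA SQL port (here 30015 for instance 00)
    # is reachable through the firewall from a client machine
    nc -z -v <HANA server> 30015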

6.1.2 Lenovo Systems solution for SAP HANA Additional Software Stack

The customer needs to have the "Non OS content for Lenovo Systems solution for SAP HANA appliance additional software stack" before the service person arrives. A DVD should have arrived with every


system. For legal reasons it is not possible to download the DVD from the Internet. In case a customer has lost the DVD, or did not receive one, it needs to be ordered directly from Lenovo. In order to do this, please direct the customer to contact Lenovo support and provide the part number (p/n) of the latest version from the table below. The other numbers are listed for reference.

P/N       Description                            Remarks          Supported OS
00MV674   SAP HANA FRU Pkg v. 1.9.96-13 for X6   latest version   SLES for SAP 11 SP3, RHEL 6.6

Previous versions are not covered by this document.

Table 24: DVD Part Numbers

6.1.3 Software, Firmware and Drivers

The System x server's software, firmware and driver versions should either be at the exact level given here, or can be above it where indicated. For details please refer to table 25. The versions listed in that table have been certified with SAP. The table indicates whether an upgrade to a higher version is supported without consultation of Lenovo/SAP, whether an update requires a statement from Lenovo or SAP before upgrading, or whether a firmware level has been declared static, meaning an upgrade to a higher version is not supported. In general you should use BoMC12 to apply the newest firmware versions before starting the OS installation, unless there are restrictions for certain firmware packages in table 25: Supported Firmware, Software and Driver Levels on page 44. If unsure, you should first contact SAP Support (via the SAP OSS System) with a direct question regarding the latest drivers and their support.

Attention Mandatory kernel update after installation on SLES for SAP 11 SP3 to kernel version 3.0.101-0.47.52, or higher.

Attention Mandatory kernel update after installation on RHEL 6.6 to 2.6.32-504.16.2.el6, or higher. See SAP Note 2136965– SAP HANA DB: Recommended OS settings for RHEL 6.6.

Attention Mandatory update of the GCC runtime environment for SAP HANA SPS08 (Revision 80) or higher. See SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11 for details.

Attention Mandatory update of the GNU C Library is required after installation when installing SAP HANA Database revision 80 or higher. See SAP Note 1888072– SAP HANA DB: Indexserver crash in __strcmp_sse42 for details.
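To quickly confirm that the mandatory minimum levels from the notes above are met on an installed system, the versions can be queried as follows. This is a minimal sketch using standard commands, with the package names as given in table 25:

    # Kernel level (compare against 3.0.101-0.47.52 on SLES 11 SP3
    # or 2.6.32-504.16.2.el6 on RHEL 6.6)
    uname -r
    # GNU C Library and GCC runtime levels
    rpm -q glibc
    rpm -q gcc47-runtime      # SLES
    rpm -q compat-sap-c++     # RHEL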

12 https://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-BOMC


SLES – OS Software and Drivers
Component                                            Version
SLES for SAP Applications 11 SP3 kernel              3.0.101-0.47.52* or higher
The GNU C Library (glibc)                            2.11.3-17.56.2 or higher
GCC runtime environment (gcc47-runtime)              4.7.2_20130108-0.17.2 or higher
SLES for SAP Applications 11 SP3 software/drivers    Updates within SP3 as allowed by SAP

RHEL – OS Software and Drivers
Component                                            Version
RHEL 6.6 kernel                                      2.6.32-504.16.2.el6*
GCC runtime environment (compat-sap-c++)             4.7.2-10
Network Security Services (nss-softokn-freebl)       3.14.3-22
RHEL 6.6 software and drivers                        Updates within RHEL 6.6 as allowed by SAP

Misc. Software, Firmware and Drivers
Component                                            Version
IBM General Parallel File System (GPFS)              Recommended: 4.1.0-8 or higher
ServeRAID M5120 Controller Firmware                  FW Package Build: 23.33.0-0018,
(for external expansion unit)                        FW Version: 3.450.55-4187
ServeRAID M5225 Controller Firmware                  FW Package Build: 24.2.1-0052,
(for external expansion unit)                        FW Version: 4.220.120-3749
ServeRAID M5210 Controller Firmware                  FW Package Build: 24.7.0-0052,
(for internal disks)                                 FW Version: 4.270.00-4288

System x3850 X6 Specific Firmware
Component                                            Version
Integrated Management Module (IMM)                   TCOO08Z
UEFI (FW/BIOS) Flash                                 A9E122XUS
DSA                                                  DSALA65

System x3950 X6 Specific Firmware
Component                                            Version
Integrated Management Module (IMM)                   TCOO08Z
UEFI (FW/BIOS) Flash                                 A9E122XUS
DSA                                                  DSALA65

Lenovo Networking Operating System
Component                                            Version
Lenovo RackSwitch G8052                              N/OS 8.3.1.0 or higher
Lenovo RackSwitch G8124                              N/OS 8.2.1.0 or higher
Lenovo RackSwitch G8264                              N/OS 8.2.1.0 or higher
Lenovo RackSwitch G8272                              N/OS 8.2.2.0 or higher
Lenovo RackSwitch G8296                              N/OS 8.2.1.0 or higher

Table 25: Supported Firmware, Software and Driver Levels
* An update of the kernel requires recompiling the GPFS drivers; see the Lenovo Operations Guide for SAP HANA appliance for further details.

Note UEFI and IMM firmware levels should always be updated in parallel to avoid possible con- tention problems between the two.

Note When installing or performing upgrades, the operator should be prepared to expect multiple reboots. Please refer to chapter 12.2: Reboot Behavior on page 154.


Warning Do not downgrade existing firmware levels unless otherwise explicitly recommended to do so by Lenovo.

6.1.4 Card Placement

Attention You may need to change the card placement. The machine coming from the factory may have a different card layout than required. Please refer to section 4.4: Card Placement on page 15 for the assignment of cards to slots. This step must be done before the installation. Please be aware that your machine is supported by Lenovo only with the correct card layout.

6.1.5 Hardware UEFI Configuration

These steps are necessary before the operating system can be installed. When the system comes from Lenovo, it should already be set to the settings listed, but after UEFI firmware updates it may happen that these parameters are reset. Follow the next instructions on how to configure the server's UEFI parameters correctly for use with the SAP HANA appliance. In this step please also check the power policy settings as described in chapter 12.1: Power Policy Configuration on page 154.

6.1.5.1 Obtaining web interface access for IMM

To access the web interface of the IMM and use the remote presence feature, you need the IP address of the IMM. You can modify the IMM IP address through the UEFI Setup utility. To locate or change the IP address, complete the following steps:
1. Turn on the server.
2. When the prompt Setup is displayed, press F1.
3. From the setup utility main menu, select System Settings → Integrated Management Module → Network Configuration.
4. Obtain or change the network settings (IP address, host name, subnet mask, gateway).
5. Save the network settings and confirm to restart the IMM.
6. Press Esc to get back to the main menu.

6.1.5.2 Feature on Demand Activation

To be able to configure the RAID adapters correctly, some Feature on Demand (FoD) keys need to be activated. It is possible that they are already activated when shipped.
• ServeRAID M5100/M5200 Series Performance Key for Lenovo System x
• ServeRAID M5100/M5200 Series SSD Caching Enabler for Lenovo System x
• (optional, only if RAID6 is required by the customer) ServeRAID M5100/M5200 Series RAID 6 Upgrade for Lenovo System x (RAID6 can only be configured on external M5120/M5225 RAID adapters.)
The necessary documentation was shipped with the servers to the customer. You can activate the FoDs via the IMM: after the login, go to IMM Management → Activation Key Management.

Note We recommend that the customer keeps a backup of the Feature on Demand keys.


6.1.5.3 Disable GPT Recovery

1. Select System Settings → Recovery & RAS → Disk GPT Recovery → Disk GPT Recovery.
2. Choose None.
3. Press Esc three times.
4. Select Save Settings and press Enter.

6.1.5.4 General Performance-optimized Settings for SAP HANA

1. In the UEFI/BIOS select Load Default Settings.
2. Select System Settings → Operating Modes.
3. Choose Operating Mode → Custom Mode.
4. Choose C1 Enhance Mode → Disable.
5. Choose Power/Performance Bias → Platform Controlled.
6. Choose Platform Controlled Type → Maximum Performance.
7. Press Esc twice.
8. Select Save Settings and press Enter.
9. Select System Settings → Power.
10. Choose Workload Configuration → I/O sensitive.
11. Press Esc twice.
12. Select Save Settings and press Enter.

Please check and set the settings in UEFI according to the following tables.

Note Please be aware that not every setting is available on every platform.

Section Operation Modes

Setting                     Value                ASU tool setting
Choose Operating Mode       Custom Mode          OperatingModes.ChooseOperatingMode
Memory Speed                Max Performance      Memory.MemorySpeed
Memory Power Management     Automatic            Memory.MemoryPowerManagement
Proc Performance States     Enable               Processors.ProcessorPerformanceStates
C1 Enhance Mode             Disable              Processors.C1EnhancedMode
QPI Link Frequency          Max Performance      Processors.QPILinkFrequency
Turbo Mode                  Enable               Processors.TurboMode
CPU States                  Enable               Processors.C-States
Package ACPI C-State Limit  ACPI C3              Processors.PackageACPIC-StateLimit
Power/Performance Bias      Platform Controlled  Power.PowerPerformanceBias
Platform Controlled Type    Max Performance      Power.PlatformControlledType

Table 26: Required Operation Modes UEFI settings

Section Processors


Setting                          Value            ASU tool setting
Turbo Mode                       Enable           Processors.TurboMode
Processor Performance States     Enable           Processors.ProcessorPerformanceStates
C-States                         Enable           Processors.C-States
Package ACPI C-State Limit       ACPI C3          Processors.PackageACPIC-StateLimit
C1 Enhanced Mode                 Disable          Processors.C1EnhancedMode
Hyper Threading                  Enable           Processors.Hyper-Threading
Execute Disable Bit              Enable           Processors.ExecuteDisableBit
Intel Virtualization Technology  Enable           Processors.IntelVirtualizationTechnology
Enable SMX                       Disable          Processors.EnableSMX
Hardware Prefetcher              Enable           Processors.HardwarePrefetcher
Adjacent Cache Prefetch          Enable           Processors.AdjacentCachePrefetch
DCU Streamer Prefetcher          Enable           Processors.DCUStreamerPrefetcher
DCU IP Prefetcher                Enable           Processors.DCUIPPrefetcher
Direct Cache Access (DCA)        Enable           Processors.DirectCacheAccessDCA
Cores in CPU Package             All              Processors.CoresinCPUPackage
QPI Link Frequency               Max Performance  Processors.QPILinkFrequency
Energy Efficient Turbo           Enable           Processors.EnergyEfficientTurbo
Uncore Frequency Scaling         Enable           Processors.UncoreFrequencyScaling
MWAIT/MMONITOR                   Enable           Processors.MWAITMMONITOR

Table 27: Required Processors UEFI settings

Section Power

Setting                        Value                ASU tool setting
Active Energy Manager Capping  Disable              Power.ActiveEnergyManager
Power/Performance Bias         Platform Controlled  Power.PowerPerformanceBias
Platform Controlled Type       Max Performance      Power.PlatformControlledType
Workload Configuration         I/O sensitive        Power.WorkloadConfiguration
10Gb Mezz Card Standby Power   Disable              Power.10GbMezzCardStandbyPower

Table 28: Required Power UEFI settings

Section Memory

Setting                  Value            ASU tool setting
Memory Mode              Independent      Memory.MemoryMode
Memory Speed             Max Performance  Memory.MemorySpeed
Memory Power Management  Automatic        Memory.MemoryPowerManagement
Socket Interleave        NUMA             Memory.SocketInterleave
Memory Data Scrambling   Enable           Memory.MemoryDataScrambling
Patrol Scrub             Enable           Memory.PatrolScrub
Mirroring                Disable          Memory.Mirroring
Sparing                  Disable          Memory.Sparing
Rank Margining Test      Disable          Memory.RankMarginingTest

Table 29: Required Memory UEFI settings
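The ASU tool setting names listed in tables 26 to 29 can also be applied from the running operating system instead of the UEFI menus. The following is a minimal sketch, assuming the Lenovo Advanced Settings Utility binary (asu64) is installed on the server and run locally; the exact invocation may differ between ASU versions.

# Show the current value of one setting (names as listed in the tables above)
asu64 show Processors.C1EnhancedMode

# Set the required values; repeat for each setting in the tables
asu64 set Processors.C1EnhancedMode Disable
asu64 set Power.WorkloadConfiguration "I/O sensitive"
asu64 set Memory.MemorySpeed "Max Performance"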


6.1.5.5 Boot Order

Starting with release 1.8.80-10, the installer supports installation in UEFI mode only. For the boot loaders used, see table 30.

Note: When you reinstall a system but have changed the Legacy/UEFI mode, make sure the partition table is cleared, either by wiping it or by recreating the RAID1 VD for the OS.

Type            SLES 11 SP3  RHEL 6.6
Supported from  1.7.70-8     1.9.96-13
Boot loader     ELILO        Grub

Table 30: Boot options and boot loaders used

If you install in UEFI mode, you do not have to change the boot order at all. The default boot order is: CD/DVD Rom, Hard Disk 0, PXE Network. After a successful installation there will be a new entry on top of the list for the newly installed operating system.

Attention: You must not activate UEFI Secure Boot (it is disabled by default), because the installation of GPFS and other software add-ons will fail.

6.2 Phase 1

The Lenovo Systems Solution for SAP HANA appliance is ready for an installation with the factory-provided image.

6.2.1 Storage Configuration – RAID Setup

The RAID configuration of all RAID5 and RAID6 arrays is executed by the automated installer starting with release 1.8.80-10. The only manual step the person performing the installation has to carry out is to configure the RAID1 for the OS. The following tables are meant as an overview and a reference in case the automated RAID configuration does not work properly. Tables 31: x3850 X6 RAID Controller Configuration on page 49 and 32: x3950 X6 RAID Controller Configuration on page 50 describe possible configurations of the RAID controllers. There are different possible setups for the RAID controllers with different numbers of SSDs and HDDs:
• M5210 (on x3950 X6: first internal)
  – 2 SSDs + 6 HDDs: 1 × RAID1 for OS, 1 × RAID5 for GPFS
• M5210 (only x3950 X6, second internal)
  – 2 SSDs + 6 HDDs: 1 × RAID5 for GPFS
• M5120/M5225
  – 2 SSDs + 9 HDDs: 1 × RAID5 for GPFS
  – 2 SSDs + 10 HDDs: 1 × RAID6 for GPFS
  – 2 SSDs + 18 HDDs: 2 × RAID5 for GPFS
  – 2 SSDs + 20 HDDs: 2 × RAID6 for GPFS


  – Optionally: +2 SSDs¹³

Controller   Models                  VD ID  Type  Phys. Drives  Config                       Comment
M5210        all                     0      HDD   2             RAID1                        VD for OS
M5210        all                     1      HDD   4             RAID5 (3+p)                  GPFS, CacheCade enabled
M5210        all                     2      SSD   2             RAID0†                       CacheCade of VD1
M5120/M5225  Single node: ≥ 768GB,   0*     HDD   9 or 10       RAID5 (8+p) or RAID6 (8+2p)  GPFS, CacheCade enabled
             Cluster: ≥ 512GB
M5120/M5225  (as above)              1      SSD   2             RAID0                        CacheCade of VD0

Table 31: x3850 X6 RAID Controller Configuration. * There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to the controller. † RAID1 for all CacheCade arrays is possible, but may require additional hardware. See section CacheCade RAID1 Configuration in the Operations Guide for X6 based models for more details.

¹³ Optionally: +2 SSDs for CacheCade RAID1. For details on hardware configuration and setup see the Operations Guide for X6 based models, section CacheCade RAID1 Configuration.


Controller   Models                   VD ID  Type  Phys. Drives  Config                       Comment
1st M5210    all                      0      HDD   2             RAID1                        VD for OS
1st M5210    all                      1      HDD   4             RAID5 (3+p)                  GPFS, CacheCade enabled
1st M5210    all                      2      SSD   2             RAID0†                       CacheCade for VD1
2nd M5210    Single node: ≥ 768GB,    0      HDD   6             RAID5 (5+p)                  GPFS, CacheCade enabled
             Cluster: ≥ 512GB
2nd M5210    (as above)               1      SSD   2             RAID0                        CacheCade for VD0
1st M5120/   Single node: ≥ 3072GB,   0*     HDD   9 or 10       RAID5 (8+p) or RAID6 (8+2p)  GPFS, CacheCade enabled
M5225        Cluster: ≥ 2048GB
1st M5120/   Single node: ≥ 6144GB,   1*     HDD   9 or 10       RAID5 (8+p) or RAID6 (8+2p)  GPFS, CacheCade enabled
M5225        Cluster: ≥ 3072GB
1st M5120/   (as above)               1/2**  SSD   2/4*          RAID0                        CacheCade for VD0&1
M5225
2nd M5120/   Single node: ≥ 12288GB,  0*     HDD   9 or 10       RAID5 (8+p) or RAID6 (8+2p)  GPFS, CacheCade enabled
M5225        Cluster: ≥ 4096GB
2nd M5120/   Single node: ≥ 12288GB,  1*     HDD   9 or 10       RAID5 (8+p) or RAID6 (8+2p)  GPFS, CacheCade enabled
M5225        Cluster: ≥ 6144GB
2nd M5120/   (as above)               1/2*   SSD   2             RAID0                        CacheCade for VD0&1
M5225
3rd M5120/   Single node: ≥ 12288GB,  0*     HDD   9 or 10       RAID5 (8+p) or RAID6 (8+2p)  GPFS, CacheCade enabled
M5225        Cluster: ≥ 6144GB
3rd M5120/   (as above)               1      SSD   2             RAID0                        CacheCade for VD0
M5225

Table 32: x3950 X6 RAID Controller Configuration. * There are different possible configurations for this VD depending on the number of SSDs/HDDs connected to the controller. ** This number will depend on the availability of VD1. † RAID1 for all CacheCade arrays is possible, but may require additional hardware. See section CacheCade RAID1 Configuration in the Operations Guide for X6 based models for more details.


Partition Device Name      Partition #*                  Size   File system  Mount Point
/dev/sda                   1                             148MB  vfat         /boot/efi
/dev/sda                   2                             64GB   ext3/4       /
/dev/sda                   3                             32GB   swap         (none)
/dev/sda                   4                             148MB  vfat         /var/backup/boot/efi
/dev/sda                   5                             64GB   ext3/4       /var/backup
/dev/sd[b-z] (sapmntdata)  unpartitioned (whole device)  100%   GPFS         /sapmnt

Table 33: Partition Scheme for Single Node and Cluster Installations. * The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.
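Once the installation has run through, the resulting layout can be compared against this table. A small sketch, using the device names from the table above (actual device names may differ on a given system):

# List partitions, sizes, file system types and mount points of the OS device
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda

# The GPFS devices (/dev/sdb and up) should show no partitions
lsblk /dev/sdb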

Warning: At this point, only the RAID1 for the OS is configured. The other RAID arrays are created automatically in phase 3 of the setup.

6.2.1.1 Starting the MegaRAID Configuration Tool

1. In the UEFI main menu select System Settings > Storage.
2. Select the internal RAID controller. If your server has two M5210 controllers, only configure the first controller as described here. You can determine the first internal controller by the smaller bus number on the right side of the "Storage" view.
3. Select Main Menu > Configuration Management.
4. (If shown) Select Manage Foreign Configuration > Clear Foreign Configuration.
5. Select Clear Configuration and confirm.
6. Select Create Profile Based Virtual Drive. (If this is not possible, press Esc and select Configuration Management again.)
7. Select Generic RAID 1.
8. The RAID1 must be configured on HDDs.
9. Select Save Configuration and confirm.
10. Leave the controller configuration.

6.2.2 Mounting Installation Images using the IMM Virtual Media Center

Using the IMM, the machine can be booted into the installation media. Directions on how to use the IMM can be found in the Lenovo server installation guidelines for the System x model purchased. The server software installation process varies slightly depending on how the mounted software images are attached to the server. This section describes the different image mounting methods and the available options to install the images for each method. See table 34: DVD/ISO Media Install Options on page 52. Installations via USB drives are supported. Besides the DVDs of the operating system media kit, two Lenovo DVDs are shipped. The "Lenovo Installation" DVD (Lenovo non-OS components) contains all files that are needed for a successful installation of the appliance. The "Additional Products" DVD contains additional files for SAP HANA that are not required for a successful installation. If you want to have these files automatically transferred to


the server(s) during installation, you must use option 1 in table 34. We recommend not mounting this DVD. When installing RHEL, an additional RHEL for HANA DVD is shipped containing necessary compatibility RPMs. When installing SLES for SAP, an additional SLES DVD is shipped containing necessary compatibility RPMs.

Option  DVD/ISO Media                            Order in Virtual Media Manager  USB Stick
1       SLES for SAP / RHEL                      1st
        Lenovo non-OS Components                 2nd
        RHEL for HANA or Compat. files for SLES  3rd
        Additional Products                      4th (optional)
2       SLES for SAP                             1st
        Lenovo non-OS Components                 –                               ✓
        RHEL for HANA or Compat. files for SLES  2nd

Table 34: DVD/ISO Media Install Options

6.2.3 Starting the Automatic Installation Process

• SLES, UEFI Mode: After you mount the software images for the execution of the phase one install, restart the system and wait until the black boot-option screen from SUSE is displayed.
  – In the boot-option screen, use the arrow keys to select Installation and press e.
  – Go to the line linuxefi /boot/x86_64/loader/linux.
  – Options 1 and 2 in table 34: Append autoyast=usb:///. Press F10.
• RHEL, UEFI Mode: After you mount the software images for the execution of the phase one install, restart the system and press any key as soon as the RHEL boot loader starts, to enter the boot options menu.
  – In the boot-option screen, use the arrow keys to select Red Hat Enterprise Linux 6.6 and press e.
  – Press e again to edit the kernel parameters.
    1. Option 1 in table 34: Append ks=cdrom:/ks.cfg.
    2. Option 2 in table 34: Append ks=hd:sdb:/ks.cfg.
  – Press Enter and then b.
• SLES and RHEL: The media will automatically install the SLES for SAP or RHEL operating system. The installer will copy the extra software necessary for the SAP HANA product (GPFS and other software add-ons). The machine will be properly partitioned, installed and initially configured. You do not need to touch the system at this point. After the system reboots, phase two of the installation will begin.

Note: Continue with section 6.3: Phase 2 – SLES for SAP on page 53, or 6.4: Phase 2 – RHEL on page 58.


6.3 Phase 2 – SLES for SAP

Warning: If you had to restart the server in one of the next steps and you see this screen again, change into a console or open a terminal and execute service openibd start. If you do not do this, you will not be able to configure the network correctly in later steps. To open a console, press Ctrl + Alt + Shift + X, enter the command, and then enter exit to close the console.

1. At the welcome screen select Next.
2. Ensure that the customer accepts the SUSE(R) Linux Enterprise Server for SAP Applications 11 SP3 – SUSE Software License Agreement. Select Next.

Figure 16: License Agreement

3. On the next screen enter keyboard preferences. Select Next.
4. Assign the server's host and domain name according to the customer's wishes. Select Next.


Figure 17: Hostname and Domain Name

5. The networking adapters need to be configured for the customer's network landscape. Depending on the customer's network infrastructure, the other Ethernet adapters need to be modified according to table 15: IP address configuration on page 23. This is left to the customer and service personnel to properly define in advance.

Figure 18: Network Configuration


(a) Click on the green highlighted and underlined Network Interfaces.
(b) There are two bonded devices configured for the Mellanox adapters. These are used by default for the IBM GPFS and SAP HANA private networks and should not be changed. These are private networks and do not need to be connected to the customer's network landscape.
(c) Single node: If the customer wishes to use the 10Gb adapters for client access, you need to change the adapter used for each of these bonded adapters. It is not necessary in a single node installation to use two adapters, only that one adapter is assigned the correct private networking host names and IP addresses.
Note: In case the customer plans to scale out the single node installation to a cluster by adding more servers: Plan the network configuration for the GPFS and HANA networks as if the cluster were already present, to simplify a later scale out.
(d) Single node without Mellanox cards: If the machine is configured without a Mellanox card, bond0 and bond1 will be empty (i.e. no slave interfaces), but still be present.
• Delete bond0 and bond1: Select the interface and click Delete.
• Select an interface that will be configured (e.g. for external communication) and click Edit.
• In the Address tab select Add.
• As "Alias Name" enter "GPFS", as "IP Address" enter "127.0.1.1", as "Netmask" enter "/24". Click OK.
• In the Address tab select Add.
• As "Alias Name" enter "HANA", as "IP Address" enter "127.0.2.1", as "Netmask" enter "/24". Click OK.
• Click Next.
Ports of NICs that are placed in the server as a replacement for the Mellanox cards will be named starting from eth100.
(e) Cluster node: It is important to modify the host name/IP address pair gpfsnodeNN / 127.0.1.1 (e.g. 192.168.10.101/24) in order to properly auto-configure the private network. It is equally important to modify the host name/IP address pair hananodeNN / 127.0.2.1 (e.g. 192.168.20.101/24). In both cases follow the advice given by the customer in table 15: IP address configuration on page 23.
Warning: If these values are not changed, the installation will fail at a later point. Please see figure 19 on page 56. Change the values in the marked black box to reasonable values, e.g. 192.168.10.101/24 gpfsnode01 for bond0 and 192.168.20.101/24 hananode01 for bond1. Do not use the preset values in the fields IP address and hostname.


Figure 19: Cluster Node NIC Configuration dialog bond0

(f) Under the tabs Hostname/DNS and Routing confirm host name, domain name, name servers, search domain(s) and routing information and add any missing information. Select Next .

Warning You have to assign the correct IP and the fully qualified domain name of the server to the interface that will be connected to the customer’s network.

6. On the next screen enter clock and time zone information. Select Next .


Figure 20: Clock and Time Zone

7. A network time protocol (NTP) server should be configured. It is mandatory to configure it on cluster nodes and highly recommended to configure it on single node installations. Select OK .

Figure 21: Advanced NTP Configuration

8. Enter the root password. Select Next .


Figure 22: Password for the System Administrator

9. Register the SLES system using the supplied envelope in the customer's delivery.
10. On the "Installation Completed" screen press Finish.
11. Follow the instructions in section 6.5: Interim Check on page 60.

Warning: Mandatory kernel update on SLES for SAP 11 SP3: At the time this document was created, kernel version 3.0.101-0.47.52 was mandatory for SLES for SAP 11 SP3. Please consult SAP whether a higher version is now recommended. Please see 13.4: Linux Kernel Update on page 165 for the steps that need to be performed.

6.4 Phase 2 – RHEL

1. At the "Welcome" screen click Forward.
2. Ensure that the customer accepts the license agreements for RHEL. Select Forward.
3. Skip the software registration, since this is not possible at this time due to the missing network configuration. Do not forget to register the system after the successful installation.
4. Configure the keyboard layout and select Forward.
5. Enter a root password and select Forward.
6. If the customer wants, you can create a further (non-root) user on this machine. Then select Forward.
7. In the time configuration, select "Synchronize date and time over the network". Configure the time servers; if there is no Internet access, remove the default time servers. For cluster installations the configuration of an NTP server is mandatory. For single node installations it is highly recommended.


8. Select the timezone tab and select the correct timezone. Select Forward.
9. Deselect "Enable kdump?". Select Finish, then select No.
10. Log in as root user.
11. Configure /etc/hosts: Add a line for gpfsnodeXX and hananodeXX (where XX is the node number, e.g. 01) and a line for the external IP and hostname, for example:

192.168.10.110 gpfsnode10
192.168.20.110 hananode10
10.10.10.10    myhananode10.domainname myhananode10
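These entries can be checked immediately. A small sketch, using the host names from the example above:

# Confirm that the names resolve to the configured addresses
getent hosts gpfsnode10 hananode10
ping -c 1 gpfsnode10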

12. Execute system-config-network and select DNS configuration. Do not use the Device configuration option.
• As "Hostname" enter the fully qualified domain name.
• Enter the DNS servers.
• As "DNS search path" enter the domain.
13. Edit the configuration file of the network device for the external communication, e.g. ifcfg-eth4, in /etc/sysconfig/network-scripts/. (Do not change the settings for eth0-3; they are the slaves of bond0 and bond1.) Make sure that the file contains the line ONBOOT=yes and that the line HWADDR= is deleted. At the end the file should look like this:

DEVICE=eth[X]
TYPE=Ethernet
UUID=[UUID]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPV6INIT=no
IPADDR=[IP address]
NETMASK=[netmask]
GATEWAY=[gateway]

The networking adapters need to be configured for the customer's network landscape. Depending on the customer's network infrastructure, the other Ethernet adapters need to be modified according to table 15: IP address configuration on page 23. This is left to the customer and service personnel to properly define in advance.
(a) There are two bonded devices configured for the Mellanox adapters. These are used by default for the IBM GPFS and SAP HANA private networks and should not be changed. These are private networks and do not need to be connected to the customer's network landscape.
(b) Single node: If the customer wishes to use the 10Gb adapters for client access, you need to change the adapter used for each of these bonded adapters. It is not necessary in a single node installation to use two adapters, only that one adapter is assigned the correct private networking host names and IP addresses. Configure the interfaces via the files ifcfg-bond0 and ifcfg-bond1 in the directory /etc/sysconfig/network-scripts/.
Note: In case the customer plans to scale out the single node installation to a cluster by adding more servers: Plan the network configuration for the GPFS and HANA networks as if the cluster were already present, to simplify a later scale out.


(c) Single node without Mellanox cards: If the machine is configured without a Mellanox card, bond0 and bond1 will be empty (i.e. no slave interfaces), but still be present. There is no need to change the IP addresses of these bonded interfaces; they can remain 127.0.1.1 and 127.0.2.1. Ports of NICs that are placed in the server as a replacement for the Mellanox cards will be named starting from eth100.
(d) Cluster node: It is important to modify the host name/IP address pair gpfsnodeNN / 127.0.1.1 (e.g. 192.168.10.101/24) in order to properly auto-configure the private network. It is equally important to modify the host name/IP address pair hananodeNN / 127.0.2.1 (e.g. 192.168.20.101/24). In both cases follow the advice given by the customer in table 15: IP address configuration on page 23. Configure the interfaces via the files ifcfg-bond0 and ifcfg-bond1 in the directory /etc/sysconfig/network-scripts/.
Warning: If these values are not changed, the installation will fail at a later point. Please see figure 19 on page 56. Change the values in the marked black box to reasonable values, e.g. 192.168.10.101/24 gpfsnode01 for bond0 and 192.168.20.101/24 hananode01 for bond1. Do not use the preset values in the fields IP address and hostname.
Execute

service network restart

to load the new network configuration. Try to connect to the machine via SSH to ensure network connectivity.
14. Reboot the server.

Attention: Mandatory kernel update after installation on RHEL 6.6: At the time this document was created, kernel version 2.6.32-504.16.2.el6 or higher was mandatory for use with SAP HANA. See SAP Note 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6. Please see 13.4: Linux Kernel Update on page 165 for the steps that need to be performed.

6.5 Interim Check

Before starting phase three, it is good practice to ensure that you can access all machines on the network and that each node is ready to install and configure the SAP HANA appliance software. You can use the following commands to determine whether each system is ready for the cluster install. On every node run the following commands and check that the results are consistent with the cluster you are about to install:
1. Review the physical partitions (sdX):

# cat /proc/partitions | awk '{ print $4 }' | sort

2. This command must properly show the node itself (not every node):

# cat /etc/hosts | grep gpfsnode

3. This command must properly show the node itself (not every node):


# cat /etc/hosts | grep hananode

4. The following command lists all reachable servers in both internal networks. Ensure all servers are reachable. Except for the server's own adapter, MAC addresses are shown and can be used to verify that the right servers were found, and not other servers in the same network reachable through other network connections:

# nmap -sP 192.168.10.0/24 192.168.20.0/24

5. Ensure the IBM GPFS private network is set up correctly:

# cat /proc/net/bonding/bond0

6. Ensure the SAP HANA private network is set up correctly:

# cat /proc/net/bonding/bond1

7. Check the time settings and NTP:

# ntpq -p
# date

If any of these values are not as expected, you should correct them and repeat this test before starting with phase three.
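To avoid repeating the checks by hand on every node, they can be driven from one node over SSH. A sketch, assuming a four node cluster (gpfsnode01 to gpfsnode04) and working password-less root SSH between the nodes:

# Run a subset of the interim checks on every node in one pass
for node in gpfsnode0{1..4}; do
    echo "=== $node ==="
    ssh root@$node 'date; ntpq -p | head -n 5; grep -i "mii status" /proc/net/bonding/bond0'
done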

6.5.1 Installation of Mandatory Packages

Attention The following steps are mandatory for a successful installation of the appliance. Due to legal restrictions these steps are not automatically executed by the installation program.

6.5.1.1 SLES for SAP 11 SP3

Install the updates for libgcc_s1 and libstdc++6 shipped on the extra DVD delivered with the appliance.

unzip [mount point of DVD]/gcc47-runtime.zip -d /tmp
zypper install /tmp/libgcc_s1-4.7.2_20130108-0.17.2.x86_64.rpm /tmp/libstdc++6-4.7.2_20130108-0.17.2.x86_64.rpm

6.5.1.2 RHEL 6.6

For RHEL the compatibility pack is shipped on an additional DVD.

yum -y install [mount point of RHEL for HANA DVD]/Packages/compat-sap*.rpm

Install the update for nss-softokn-freebl from the official repositories:

yum update nss-softokn-freebl

Or if the customer downloaded the RPMs:

yum install nss-softokn-freebl-3.14.3-22.el6_6.x86_64.rpm

See also SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11.
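To verify that the packages actually landed, query the RPM database. A small sketch, using the package names from above:

# SLES for SAP: confirm the updated runtime libraries
rpm -q libgcc_s1 libstdc++6

# RHEL: confirm the compatibility packages and the nss update
rpm -qa 'compat-sap*'
rpm -q nss-softokn-freebl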


6.5.2 Installation without Network Connectivity

Attention: Phase three needs uplink network connectivity and working DNS resolution to execute properly. If there is no connectivity to the customer's network and DNS server, use this workaround: Add the external host names specified in step 9 of phase 2 (dialog "SAP HANA Configuration", see screenshot above) to the /etc/hosts file on all nodes so that every node can resolve the external host names of the other nodes. Test this by pinging the external host name of every node from every node before continuing with the next phase.
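This ping test can be scripted. A sketch with hypothetical external host names; replace them with the names actually entered in the dialog:

# Run on every node: each external host name must answer
for host in hana01-ext hana02-ext hana03-ext hana04-ext; do
    ping -c 1 -W 2 $host > /dev/null && echo "$host ok" || echo "$host NOT reachable"
done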

6.6 Phase 3

6.6.1 Verification of RAID Controller and HDD/SSD Firmware

Ensure that the RAID controllers, the HDDs and the SSDs run with the latest firmware. If you used BoMC¹⁴ in an earlier step to install all available firmware updates on this server, skip this step.

Note: Firmware bugs in older firmware versions may lead to decreased performance or even data loss.

6.6.2 HANA Installation

Attention: The SAP HANA installation packages are copied to the node in this step. Make sure that the Lenovo non-OS components DVD is still mounted via IMM (or USB thumb drive), or the installation will fail.

Phase three starts after the machine has rebooted and you have ascertained that all networking is working. Either from the console or from an SSH connection, you may call the Lenovo SAP HANA appliance configuration tool. It is recommended to call the configuration tool on the first node, but it can be started on any node of the cluster.

Attention: In case you are connecting via SSH from a machine that is not set to the English language, you must set the LANG environment variable to "C" beforehand. If not, the SAP HANA database installation may break while trying to determine the hardware requirements.

# export LANG=C

Download the latest hardware check script from SAP Note 1658845 – Recently certified SAP HANA hardware/OS not recognized. Copy the ZIP file to the server to /root/HanaHwCheck.zip. The automated (Lenovo) installer will update the HANA hardware check script automatically if it finds this file at this location.

Attention: Not providing the most recent HANA hardware check script may cause the HANA installation to fail.

¹⁴ https://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-BOMC


6.6.2.1 Single Node Installation

Execute the following command as root user:

# saphana-setup-saphana.sh

1. Read the License Agreement. Use the arrow keys to select Accept and press Enter.
2. Check that the appliance was detected correctly and confirm with Enter.
3. Select Single Node and confirm.

Figure 23: Installation Mode Selection

4. Accept the external hostname or set the correct value. Select OK.
5. Choose whether you want the RAID arrays configured automatically. We recommend choosing Yes. Only choose No if the RAID was already configured before.
6. Make sure that gpfsnode01 is assigned the correct IP. Press Enter to select OK.

Figure 24: GPFS IP Configuration Dialog

7. Repeat for hananode01.
8. Confirm the shared filesystem mountpoint for HANA, or enter a customized value. HANA will be installed below this path. You can choose any other absolute, not yet existing path, such as /hana.
Note: Currently the default and recommended value is /sapmnt. Nowadays SAP recommends using /hana, and this may become the default path in future releases. Both paths are supported, but for new installations in legacy environments /sapmnt is strongly recommended. The IBM GPFS internal name of this filesystem will still be sapmntdata in any case.
9. Enter a SID. Select OK.
10. Enter an Instance Number. Select OK.
11. Confirm the User ID of the HANA user, or enter a customized value. Select OK.


12. Confirm the Group ID of the HANA user, or enter a customized value. Select OK.
13. Enter the SAP HANA password. Select OK.

Figure 25: HANA Password Input Dialog

14. Confirm the password. Select OK.
15. Read the GPFS license agreement and accept it with "1".

6.6.2.2 Cluster Installation

Execute the following command as root user on every node in the cluster:

# saphana-setup-saphana.sh

1. Read the License Agreement. Use the arrow keys to select Accept and press Enter.
2. Check that the appliance was detected correctly and confirm with Enter.
3. Select Cluster (Worker) and confirm.
4. Accept the external hostname or set the correct value. Select OK.
5. Choose whether you want the RAID arrays configured automatically. We recommend choosing Yes. Only choose No if the RAID was already configured before.
6. Read the GPFS license agreement and accept it with "1".
Execute the following command as root user only on the first node in the cluster, after the previous step has been completed for every node in the cluster:

# saphana-setup-saphana.sh

1. Read the License Agreement. Use the arrow keys to select Accept and press Enter.
2. Check that the appliance was detected correctly and confirm with Enter.
3. Select Cluster (Master) and confirm.
4. Enter the number of nodes in the cluster. Select OK.
5. Enter the number of standby nodes in the cluster. Select OK.
6. Make sure that the gpfsnode entries are assigned the correct IPs. Press Enter to select OK.
7. Repeat for the hananode entries.
8. Confirm the shared filesystem mountpoint for HANA, or enter a customized value. You can choose any other absolute, not yet existing path, such as /hana.


Note: Currently the default and recommended value is /sapmnt. Nowadays SAP recommends using /hana, and this may become the default path in future releases. Both paths are supported, but for new installations in legacy environments /sapmnt is strongly recommended. The IBM GPFS internal name of this filesystem will still be sapmntdata in any case.
9. Enter a SID. Select OK.
10. Enter an Instance Number. Select OK.
11. Confirm the User ID of the HANA user, or enter a customized value. Select OK.
12. Confirm the Group ID of the HANA user, or enter a customized value. Select OK.
13. Enter the SAP HANA password. Select OK.
14. Confirm the password. Select OK.

Note: Follow the instructions in section 7: After Installation on page 66. Please also review SAP Note 1906381 – Network setup for external communication for an overview of how HANA can connect to the client network.

6.6.3 Single Node with HA Installation with Side-car Quorum Solution

Adding a second node for high availability is described in section 10.1: Single Node with HA Installation with Side-car Quorum Solution on page 103. Please refer to that section when installing a simple single node HA solution.


7 After Installation

After the installation of the Lenovo Solution you have to take several actions to ensure that the installation is correct.

7.1 Actions to ensure the correctness of the installation

• First, execute a system check (see section 15.2: Basic System Check on page 183) with the latest version of the check script. Follow the instructions given by the check script to prevent unwanted behaviour of the appliance.
Warning: Update the kernel and IBM GPFS to the suggested levels. Earlier versions of GPFS and the kernel have known bugs that may cause the appliance to stop working.

Attention: Do not change the SSH configuration for the root user (e.g. by disallowing SSH logins). SSH is required for IBM GPFS and is configured accordingly.
• On x3850 X6 and x3950 X6 servers you can create a symbolic link from /sapmnt/ to /sapmnt/shared/ to simulate the GPFS filesystem layout of eX5 based appliances, if you use scripts or other tools that have this path hard-coded:

ln -s /sapmnt/shared/ /sapmnt/

• Install the SAP Solution Manager Diagnostics Agent (SMD). If the customer plans to integrate the new HANA server(s) into an existing SAP management infrastructure (SAP Solution Manager, System Landscape Directory), the SMD must be installed in preparation. The SAP Solution Manager Diagnostics Agent can be installed via the SAP HANA Lifecycle Manager (HLM). To install the SMD via the HANA Lifecycle Manager, open a browser, navigate to https://<hostname>:1129/lmsl/HLM/<SID>/ui?sid=<SID>, choose Add Solution Manager Diagnostics Agent (SMD) and follow the instructions on screen. Skip the registration forms for the Solution Manager and the System Landscape Directory if you do not wish to register the HANA installation at this time. For other means to use the HLM, or if the HLM is not accessible, please refer to the SAP HANA Update and Configuration Guide¹⁵. The installation of the SAP Solution Manager Diagnostics Agent is documented in the chapter Adding a Solution Manager Diagnostics Agent on an SAP HANA System in the aforementioned guide.
• Check that the HANA log mode is configured correctly. If the log mode is wrong, the appliance will experience an out-of-space condition on the IBM GPFS (/sapmnt/). See SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery (No. 26: What general configuration options do I have for saving the log area?).
• Make sure that the backup paths are configured correctly. They are only allowed to point to the GPFS filesystem if it is used as a staging area for a third-party backup solution. Permanent backups on the GPFS are unsupported.
• Check whether the SAP Host Agent is running on every server. If not, you can either reboot every server in the cluster or start it by executing on every server:

service sapinit start

¹⁵ Obtainable from http://help.sap.com/hana_appliance


7.2 HANA Network Setup

There are several options for setting up HANA's connection to the client network. This depends highly on the setup of the customer's network. SAP Note 1906381 – Network setup for external communication gives a good overview of the possibilities.


8 Disaster Recovery

The scope of this section is to provide a guide for the Lenovo Disaster Recovery (previously SAP Disaster Tolerance) solution for SAP HANA. The solution is implemented in two physically independent locations, with one location used as the production site and the second serving as the backup or disaster site. A third, optional location is possible for the tie-breaking (quorum) feature of GPFS. The goal of DR is to enable a secondary data center to take over production services with exactly the same set of data as stored in the primary site's data center. Synchronous data replication between the primary and secondary site ensures zero data loss (RPO=0). This protects a data center against events like power outage, fire, flood, or hurricane. The time required to recover the services (RTO) differs for each installation, depending on the exact client implementation.

8.1 Architecture

This section briefly explains the architecture of the Lenovo DR solution for SAP HANA and provides examples of how it can be installed in a standard two-tier or three-tier data center environment.

[Figure: sites A and B, each holding the file system sapmntdata, linked by synchronous replication; optional quorum node at site C]

Figure 26: DR Architectural Overview

8.1.1 Terminology

The terms site A, primary site, and active site are used interchangeably in this document to refer to the location where the productive SAP HANA HA system is initially set up and used. Similarly, site B, backup site, and passive site all refer to the second location to which the productive SAP HANA HA system is copied in the case of a disaster. After a failover the naming of these two sites may be swapped, depending on whether the customer wants to switch back as soon as possible or keep using the former backup site as the primary site. Site C refers to the quorum or tiebreaker site. SAP also uses the terms Disaster Recovery (DR) and Disaster Tolerant (DT) interchangeably. We will try to be consistent and use DR in this document.


8.1.2 Architectural overview

The Lenovo DR solution for SAP HANA can be thought of as two standard Lenovo HA clusters in two different sites combined into one large cluster. Each site can be planned as a standard Lenovo HA cluster with the same hardware requirements as the standard solution. Currently, the only architectural requirement is that both sites have the same number of server nodes and that each site has the same number of network switches as the existing Lenovo HA cluster offering. The idea of the Lenovo DR solution for SAP HANA is to have one stretched IBM GPFS cluster spanning both sites and providing one file system for SAP HANA. There are two separate SAP HANA clusters at the two sites that can access data in this single shared file system. Synchronous data replication built into the file system ensures that at any given point in time the exact same data exists in both data centers. Figure 27: DR Data Distribution in a Four Node Cluster on page 69 shows the high-level architecture.

Warning: As of December 2012, SAP has published an end-to-end value of 320µs latency between the synchronous sites of a DR cluster. It is known by both SAP and Lenovo that this number by itself is not enough to determine whether the SAP HANA database can recover from a disaster. Latency can be split into many different categories, such as network latency or application latency, each of which has its own values necessary for a proper DR setup. It also depends on whether you run On Line Analytical Processing (OLAP) or On Line Transaction Processing (OLTP) workloads. Currently SAP is considering this value on a case-by-case basis, and it is important that you discuss these values with your customer and the SAP consultant on site.

The Lenovo DR solution for SAP HANA works with a total of three data copies. The first copy is kept local to the writing node. The second copy is stored on any other node except the writing node, and the third copy is always stored on a node at the remote site. Depending on the file size and actual disk space usage of a certain node, GPFS tends to either cluster blocks on a node or stripe them across multiple nodes. The same applies to distribution over disks within a node.

[Figure: eight nodes (node1 to node4 at site A, node5 to node8 at site B) in failure groups 1,0,x / 2,0,x / 1,1,x / 2,1,x; the first and second data replicas are distributed within the local site and the third replica is replicated synchronously to the remote site]

Figure 27: DR Data Distribution in a Four Node Cluster


The details of the network setup are not strictly defined. It is up to the project team to develop a solution that is suitable for the customer's existing network infrastructure. This must be discussed well in advance together with the customer's networking team. The basic requirement is to have at least two sites; a third network site is needed if a so-called tiebreaker node will be part of the Disaster Tolerance architecture. Each site will use a standard HA setup with its own dedicated GPFS and SAP HANA network. This can be provided by using the standard IBM RackSwitch G8264 10 Gbps Ethernet switches, which are part of the standard SAP HANA HA offering of Lenovo. The standard network requirements of an HA solution regarding the customer's uplink connectivity also apply to DR. For the tiebreaker node at site C, there are no special network requirements as there is only one server. For the connectivity between the two main sites, at least one dedicated optical fibre connection end-to-end between both sites is recommended. A routed or non-dedicated connection may be used, but no guarantees about performance or operation can be made. Using redundant optical fibres end-to-end may improve performance and reliability. The project team is responsible for working out a solution with respect to the customer's infrastructure and requirements. A dedicated Ethernet network needs to be provided for the GPFS network. For the configuration of the inter-site portchannel see section 5.8: Inter-Site Portchannel Configuration on page 36.

Figure 28: Logical DR Network Setup

Figure 27 on page 69 shows a scenario with four nodes on each site. Only the HANA internal network and the GPFS network are shown, with no uplinks connecting the HANA cluster to the client network. In a solution with a quorum site, the tiebreaker node must be reachable from within the internal GPFS IPv4 network; each node must be able to reach the tiebreaker node and vice versa. There are no other special requirements on this connection; neither bandwidth nor latency guarantees are needed. It is acceptable to use a routed connection through the customer's internal network as long as it is reliable.

[Figure: four IBM RackSwitch G8264 switches, two per site carrying the HANA internal and GPFS networks, paired per site with 40 Gbit ISLs and connected between sites with 10 Gbit links; 4 ports from each node: 2x GPFS, 2x HANA internal]

Figure 29: DR Networking View (with no client uplinks shown)


8.1.3 Three site/Tiebreaker node architecture

If the customer decides to use a tiebreaker node in a third site, an additional server with an appropriate GPFS license is required. Although any server can be used, we recommend the Side-Car Quorum Node x3550 M3/M4 defined in section 10.1.2: Prepare quorum node on page 105. This definition includes the necessary licenses and services required for the tiebreaker node. This node is optional but recommended for increased reliability and simplicity in the case of a disaster. The solution has been tested in setups with and without this additional node. The rationale for this node is the split-brain scenario, where the connection between the two main sites is lost. The tiebreaker node helps decide which site is the active site and thus prevents the primary site from going down for data integrity reasons. Additionally, this server eases some operational procedures by reducing both the time needed for recovery and the likelihood of operating errors. This document describes the use of the tiebreaker node and explains the deviations when it is not used.

8.2 Mixing eX5/X6 Server in a DR Cluster

Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 97. Information given there takes precedence over the instructions below.

8.3 Hardware Setup

This section describes how to physically install the System x machines and how to prepare UEFI for HANA. It also provides information about how the network has to be set up.

8.3.1 Site A and B

The hardware setup of the nodes at each site has to be performed as described in section 6: Guided Install of the Lenovo Solution on page 41. The following list summarises these steps. • Ensure certified hardware is available and connected to power • Verify firmware levels. They must be identical on all nodes • Modify / Check UEFI settings. They must be identical on all nodes • Configure storage (RAID setup)

8.3.2 Tiebreaker Site C (optional)

It is recommended to set up the tiebreaker node according to the description in section 10.1.2: Prepare quorum node on page 105. The tiebreaker node must have a small partition (50 MB is sufficient) to hold a replica of the GPFS file system descriptors. It will not contain any data or metadata. The node must be able to reach all other nodes at both site A and site B of the GPFS cluster. The partition can reside on a logical volume (LVM) if desired. However, GPFS must be able to recognize the partition, so, when using LVM, the name /dev/dm-X must be used instead of the logical volume name. Performance is not critical for this partition.
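A minimal sketch for creating such a partition, assuming the spare device on the quorum node is /dev/sdb (this rewrites the device's partition table, so adjust the device name to the actual hardware before running it):

# Create a new GPT label and a ~50 MB partition for the GPFS descriptor replica
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 51MiB
parted -s /dev/sdb print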


8.3.3 Acquire TCP/IP addresses and host names

Refer to section 5.3: Network Configuration on page 22 which contains a template that can be used to gather all the required networking parameters. Ideally, this is done before the installation starts at the customer location.

8.3.3.1 Tiebreaker node

The following parameters must be available for the installation of the cluster:

Parameter                    Value
Hostname
IP address for Hostname
IP address for GPFS Network

Table 35: Hostname Settings for DR

In case of a new installation these additional parameters are required. See Table 36 on page 72

Parameter          Value
Netmask
Default gateway
Additional routes
DNS server
NTP server

Table 36: Extra Network Settings for DR

The tiebreaker node must be able to reach all cluster nodes at both sites via the IP addresses and hostnames used for GPFS (gpfsnodeXX), which the GPFS cluster uses to communicate internally. Conversely, the cluster nodes must reach the tiebreaker node with the same host name and IP address. This can be achieved, for example, via routing, tunneling, a VPN connection, or through a dedicated physical network.

8.3.4 Network switch setup (GPFS and SAP HANA network)

The setup of the switches used for the GPFS and SAP HANA network is described in section 5.4: Network Switch Configuration For Clustered Installations on page 23. For the link between the switches on both sites refer to the next sections.

8.3.5 Link between site A and B

The GPFS network will be stretched over sites A and B, while the SAP HANA network must not be. This means that the GPFS network on both sites will be one subnet and each node can reach all other nodes at both sites, whereas the SAP HANA networks at sites A and B are isolated from each other. The GPFS networks of both sites should be connected with at least a dedicated 10 Gbit connection. A routed network is not recommended as it may have severe impact on the synchronous replication of the data. The SAP HANA network is separated between the sites because SAP HANA is operated in a cold standby mode. For this reason, both sites will use the same hostnames and IP addresses for SAP HANA. This requires a strict isolation of these two networks.


8.3.6 Network integration into customer infrastructure

The network connections in the customer network for SAP HANA access, management, backup and other connections depend very much on the customer's network and requirements. General guidance can be found in section 6: Guided Install of the Lenovo Solution on page 41.

8.3.7 Setup network connection to tiebreaker node at site C (optional)

The tiebreaker node at site C needs to be integrated into the GPFS cluster as well. Every node in the cluster must be able to contact the tiebreaker node and vice versa. The setup depends on the configuration of the tiebreaker node (one or more network interfaces), the subnet used for GPFS traffic (private or public) and other parameters. It is up to the project team to come to an agreed solution with the customer. Possible setups include a multi-homed tiebreaker node or static host routes when private address ranges are used. VPN, NAT or router capabilities are further options. The following is an example for a setup with a GPFS subnet of 192.168.10.x and a tiebreaker node with one network adapter and a public IP address in a 10.x.x.x range:
1. On the tiebreaker node add the GPFS address as an alias to the NIC attached to the public network, e.g.

# ifconfig eth0:1 192.168.10.199 netmask 255.255.255.0

To make this permanent, add an entry like this to the respective ifcfg-ethX file in /etc/sysconfig/network

IPADDR_1='192.168.10.199/24'

2. Add host routes on every node in the GPFS cluster to this IP alias.

# route add -host 192.168.10.199 gw <gateway>

3. Add host routes on the tiebreaker node for every node in the cluster.

# route add -host 192.168.10.101 gw <gateway>
# route add -host 192.168.10.102 gw <gateway>
...
# route add -host 192.168.10.10X gw <gateway>

4. Verify that the newly created alias is reachable throughout the cluster and all nodes can be pinged from the tiebreaker node via the internal GPFS network addresses.
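A sketch of this verification from the tiebreaker node, assuming the eight cluster nodes use 192.168.10.101 to 192.168.10.108 as in the examples below:

# Every cluster node must answer on the GPFS network
for ip in 192.168.10.10{1..8}; do
    ping -c 1 -W 2 $ip > /dev/null && echo "$ip ok" || echo "$ip UNREACHABLE"
done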

8.4 Software Setup

Note The base installation changed with the advent of the new text based installer which also allows the installation on Red Hat Enterprise Linux. This replaces the manual installation described here in earlier releases.


Note: Starting with appliance version 1.9.96-13, the mount point for the GPFS file system sapmntdata is user-configurable during installation. SAP HANA will also be installed into this path. Lenovo currently recommends using /sapmnt, while SAP promotes /hana. The following commands and code snippets use /sapmnt; for any other path please replace /sapmnt with the chosen path.

Install all standard DR servers as described in section 6: Guided Install of the Lenovo Solution on page 41. In phase 3 choose the role Cluster Node (Worker) for all servers. Please note that during the interim check in section 6.5: Interim Check on page 60 each site is expected to see only the site-local nodes in the HANA network test. For the optional quorum node, please follow the instructions given in section 10.1.2: Prepare quorum node on page 105 and following to install the base operating system and software.

8.4.1 GPFS configuration prerequisites

Create /etc/hosts entries for GPFS

To ensure communication over the correct network interfaces, define the host entries manually on each node (including the tiebreaker node if available) for the GPFS and SAP HANA networks. Ensure that the entry for the local machine is always the first entry in the list; this is required by the installer scripts. Do not copy this file from one node to another, as that will break other installation scripts. Each node in the cluster (except the tiebreaker node) has the following two names associated with it:

192.168.10.1XX gpfsnodeXX
192.168.20.1XX hananodeXX

The tiebreaker node only has a gpfsnode name as it is used solely for GPFS communication

192.168.10.1XX gpfsnodeXX

The GPFS network spans both sites, which means in an example with four nodes per site you have gpfsnode01 up to gpfsnode08 (gpfsnode01-04 at site A, gpfsnode05-08 at site B). The SAP HANA network is restricted to one site, which in turn means you should use each hananodeXX entry twice (once per site). This effectively couples any active SAP HANA node to a backup node on the second site. In the example with four nodes on each site you have hananode01 to hananode04 at site A and hananode01 to hananode04 at site B.

8.4.1.1 Example two sites with four nodes each

...
# Second node on first site:
192.168.10.102 gpfsnode02
192.168.20.102 hananode02
192.168.10.101 gpfsnode01
192.168.20.101 hananode01
192.168.10.103 gpfsnode03
192.168.20.103 hananode03
192.168.10.104 gpfsnode04
192.168.20.104 hananode04
...
# Second node on second site (physically the sixth node)
192.168.10.106 gpfsnode06
192.168.20.102 hananode02
192.168.10.105 gpfsnode05
192.168.20.101 hananode01
192.168.10.107 gpfsnode07
192.168.20.103 hananode03
192.168.10.108 gpfsnode08
192.168.20.104 hananode04
...

The optional tiebreaker node only has GPFS addresses. This has two consequences: the tiebreaker node only has gpfsnodeXX entries in its /etc/hosts file for all nodes, and all other nodes have no hananodeXX entry for this special node. In our example above, a tiebreaker node would be allocated gpfsnode99. After editing the /etc/hosts entries it is a good idea to verify network connectivity. To do so, execute the following command to list all nodes of the DR cluster attached to the GPFS network:

# nmap -sP 192.168.10.0/24

and execute this command at each site to confirm the SAP HANA network:

# nmap -sP 192.168.20.0/24

Only the nodes of the local site should be listed by the second command. Verify that you found the correct machines by comparing the displayed MAC addresses with the MAC addresses of the bond1 device on each respective node.
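To read the MAC addresses needed for this comparison on a node, the bonding status file can be queried directly. A small sketch:

# Slave interfaces and their permanent hardware addresses of the SAP HANA bond
grep -Ei 'slave interface|permanent hw addr' /proc/net/bonding/bond1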

8.4.1.2 SSH key exchange

As GPFS uses SSH, the root SSH keys on all nodes need to be exchanged to allow password-less SSH connectivity within the cluster. This is a general GPFS requirement. Please note that the following commands will overwrite any additional SSH key authorizations you may have installed yourself. Run all of the following commands from the first node in the GPFS cluster. Generate the known_hosts file on the first node:

# for node in gpfsnode0{1..8} ; do ssh-keygen -R $node ; ssh-keyscan -t rsa $node >> /root/.ssh/known_hosts ; done
# for ip in 192.168.10.10{1..8} ; do ssh-keygen -R $ip ; ssh-keyscan -t rsa $ip >> /root/.ssh/known_hosts ; done

Generate a new SSH key for passwordless ssh access, authorize it and distribute it to the other nodes:

# ssh-keygen -q -b 4096 -N "" -C "Unique SSH key for root on DR Cluster" -f /root/.ssh/id_rsa
# cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
# for node in gpfsnode0{1..8} ; do scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys root@$node:.ssh/ ; done

Distribute the known_hosts file to the other nodes:

# for node in gpfsnode0{2..8} ; do scp /root/.ssh/known_hosts root@${node}:/root/.ssh/ ; done

A note on the gpfsnode0{1..8} value: shell brace expansion generates the list gpfsnode01 to gpfsnode08. If the host names are non-consecutive, replace the expression with a space-separated list of the hostnames. The distribution of the known_hosts file omits the first node, as the files are already in place there.


Note In previous releases of this document the shipped SSH root key was used and distributed among the nodes in the DR-enabled cluster. This poses a security risk and you should consider replacing this key with a new unique key. Please contact support.

8.4.2 GPFS Server configuration

Create the necessary configuration files. On the first node (which will be the primary configuration server), create the file /var/mmfs/config/nodes.cluster and add one line per node containing its GPFS network hostname. If applicable, add the tiebreaker node as the last node. Next append ":quorum" (no spaces) to the end of the line for some hosts, according to the following rules: a) distribute all available nodes (except the tiebreaker) into four equally sized groups and append ":quorum" to the first node of each group; b) if a dedicated quorum (tiebreaker) node is available, mark it as quorum as well; c) without a quorum node, mark the second node of the first group as quorum instead. In an example with 8 nodes you should therefore have 5 nodes marked as quorum nodes. See the following table for an 8 node DR cluster with and without a dedicated tiebreaker node (gpfsnode99):

                 Topology   nodes.cluster file     nodes.cluster file
                 Vector     with Quorum Node       without Quorum Node
Failure group 1  1,0,x      gpfsnode01:quorum      gpfsnode01:quorum
                            gpfsnode02             gpfsnode02:quorum
Failure group 2  2,0,x      gpfsnode03:quorum      gpfsnode03:quorum
                            gpfsnode04             gpfsnode04
Failure group 3  1,1,x      gpfsnode05:quorum      gpfsnode05:quorum
                            gpfsnode06             gpfsnode06
Failure group 4  2,1,x      gpfsnode07:quorum      gpfsnode07:quorum
                            gpfsnode08             gpfsnode08
Failure group 5  3,0,1      gpfsnode99:quorum      (not applicable)
(tie breaker)

Table 37: GPFS Settings for DR Cluster

The nodes.cluster file for an eight node setup without a separate quorum node (i.e. tiebreaker node) should look like this:

gpfsnode01:quorum-manager
gpfsnode02:quorum-manager
gpfsnode03:quorum-manager
gpfsnode04:
gpfsnode05:quorum-manager
gpfsnode06:
gpfsnode07:quorum-manager
gpfsnode08:

Note Adding the node designation 'manager' is optional, as quorum nodes are automatically eligible to be chosen as cluster manager.
A comment regarding the topology vectors, as they will be used in a later step: the value of x has to be replaced with the number of the node within its failure group. If you have 3 nodes in each failure


group, and the nodes are numbered 1 to 3 within each failure group, then the second node in the first failure group gets the topology vector 1,0,2 and the second node in the third failure group gets 1,1,2. Create the GPFS cluster with the first node of each site as the primary (-p) and secondary (-s) configuration server, respectively:

# mmcrcluster -n /var/mmfs/config/nodes.cluster -p gpfsnode01 -s gpfsnode05 -C HANADR1 -A -r /usr/bin/ssh -R /usr/bin/scp

Mark all nodes as licensed: mark all quorum nodes (including the optional tiebreaker node) and the configuration servers with a server license, and all other nodes as FPO licensed.

# mmchlicense server --accept -N gpfsnode01,gpfsnode02,..,gpfsnode99
# mmchlicense fpo --accept -N gpfsnode03,gpfsnode04,...

Adapt the node lists to match your actual licensing. Start the GPFS daemon on all nodes:

# mmstartup -a

Apply the following cluster configuration changes

# mmchconfig unmountOnDiskFail=meta -i
# mmchconfig panicOnDiskFail=meta -i
# /usr/bin/yes 999 | /usr/lpp/mmfs/bin/mmchconfig dataStructureDump=/tmp/GPFSdump,pagepool=4G,maxMBpS=2048,maxFilesToCache=4000,skipDioWriteLogWrites=1,nsdInlineWriteMax=1M,prefetchAggressivenessWrite=2,readReplicaPolicy=local,enableRepWriteStream=false,enableLinuxReplicatedAIO=yes,nsdThreadsPerDisk=24

After this last command you need to restart GPFS with

# mmshutdown -a
# mmstartup -a

8.4.3 GPFS Disk configuration

On the first node, create a file /var/mmfs/config/disk.list.data.fs. For each node add entries as described in the following section, but replace the failureGroup with the correct topology vector for the particular node. Make sure that the pool definitions appear only once in this file.

8.4.3.1 GPFS 3.5 Disk Definitions For every HDD RAID device /dev/sdb and subsequent devices add a NSD definition like the following template:

%nsd: device=/dev/sdb
  nsd=data01node01
  servers=gpfsnode01
  usage=dataAndMetadata
  failureGroup=1,0,1
  pool=system

Please don't forget to increment the number in the nsd name, e.g. data02node01 for the second HDD block device. You can get a device list with lsscsi. After adding all device stanzas, add these lines unaltered:


%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

When using a tiebreaker node add the following lines to the stanza file:

%nsd: device=/dev/sda3
  nsd=desc01node99
  servers=gpfsnode99
  usage=descOnly
  failureGroup=3,0,1
  pool=system

Replace device, nsd name and server with the correct values where necessary. If your setup includes a tiebreaker node, determine the device name of the partition allocated for the descriptor-only NSD and change the line in disk.list.data.fs starting with %nsd: device= accordingly.
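Writing the stanza entries for many devices by hand is error-prone. The following is a minimal sketch (not part of the appliance tooling) that prints the stanza entries for one node; the node name, topology vector and device list are placeholders you must adapt:

#!/bin/bash
# Print %nsd stanzas for the given devices of one node (adapt before use).
NODE=gpfsnode01
FG="1,0,1"                         # topology vector of this node
NUM=1
for DEV in /dev/sdb /dev/sdc ; do  # HDD RAID devices as reported by lsscsi
    printf '%%nsd: device=%s\n  nsd=data%02dnode%s\n  servers=%s\n  usage=dataAndMetadata\n  failureGroup=%s\n  pool=system\n' \
        "$DEV" "$NUM" "${NODE#gpfsnode}" "$NODE" "$FG"
    NUM=$((NUM+1))
done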

8.4.4 Filesystem Creation

Create the NSDs

# mmcrnsd -F /var/mmfs/config/disk.list.data.fs -v no

Create the filesystem

# mmcrfs sapmntdata -F /var/mmfs/config/disk.list.data.fs -A no -B 512k -N 3000000 -v no -m 3 -M 3 -r 3 -R 3 -j hcluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor 1 -Q yes -T /sapmnt

Create filesets

# mmcrfileset sapmntdata hanadata -t "Data Volume for HANA database"
# mmcrfileset sapmntdata hanalog -t "Log Volume for HANA database"
# mmcrfileset sapmntdata hanashared -t "Shared Directory for HANA database"

Mount the filesystem on all nodes

# mmmount sapmntdata -a

To verify the file system is successfully mounted execute

# mmlsmount sapmntdata -L

Link the filesets in the filesystem

# mmlinkfileset sapmntdata hanadata -J /sapmnt/data
# chmod 755 /sapmnt/data
# mmlinkfileset sapmntdata hanalog -J /sapmnt/log
# chmod 755 /sapmnt/log
# mmlinkfileset sapmntdata hanashared -J /sapmnt/shared
# chmod 755 /sapmnt/shared


Set a quota on the hanalog fileset. The formula for the log quota in a DR scenario is:

<# of active nodes> * RAM * <# of GPFS replicas>

Example: in a 7+7 scenario with L nodes using 6 worker nodes and 1 standby node:

6 * 1024G * 3 = 18432G

Set the quota:

# mmsetquota -j hanalog -h 18432G -s 18432G /dev/sapmntdata
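As a cross-check, the formula can be evaluated directly in the shell — a sketch using the example numbers from above (6 active nodes, 1024G RAM, 3 GPFS replicas):

# echo $(( 6 * 1024 * 3 ))G    # <# of active nodes> * RAM in GB * <# of GPFS replicas>
18432G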

8.4.5 SAP HANA appliance installation

Warning SAP HANA in this DR solution must be installed using the hostname of the HANA-internal network (usually on bond1, hostname hananodeXX). The host based routing used in the HA solution is not applicable to the DR solution.
We recommend installing SAP HANA on the backup site first and thereafter on the primary site. This order is safer because the backup site installation cannot accidentally make changes to your production environment.

8.4.5.1 Install HANA on backup site Before continuing with the installation make sure that the GPFS file system sapmntdata is mounted at /sapmnt. In order to prepare the backup site, it is necessary to do a standard HANA installation and then delete the installed content on the shared filesystem.

8.4.5.1.1 Install SAP HANA software on backup site Please install SAP HANA on the backup site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana. The roles (worker or standby) are not important, except that the first node needs to be a worker. We recommend installing all other nodes as standby, as this installation type is faster.

8.4.5.1.2 Stop HANA and SAP Host agent on backup site Log in as <sid>adm on one node and stop SAP HANA:

$ HDB stop

Then log in as root and stop SAP Host agent and other services:

# /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service

# chkconfig sapinit off

Do the last two steps on all backup nodes.
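The last two (root) steps can also be pushed out over SSH from a single node instead of logging in to every backup node — a sketch, assuming the backup site consists of gpfsnode05 to gpfsnode08 as in the example above:

# for node in gpfsnode0{5..8} ; do ssh $node '/etc/init.d/sapinit stop ; chkconfig sapinit off' ; done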


8.4.5.1.3 Delete SAP HANA shared content The purpose of this installation is to install the node-local parts of a SAP HANA system. After installing SAP HANA on all backup site nodes, the data in /sapmnt must be deleted:

# rm -r /sapmnt/data/
# rm -r /sapmnt/log/
# rm -r /sapmnt/shared/

8.4.5.1.4 Disable mmfsup script on backup site nodes An installation with the Recovery Image installs an mmfsup script which automatically starts SAP HANA after the file system comes up. This must be deactivated, as it may start SAP HANA on both sites (using the same hostnames). The script resides in /var/mmfs/etc. Disable it on all cluster nodes.

# chmod 644 /var/mmfs/etc/mmfsup

Note In previous releases of this document the mmfsup script was deleted. This is not necessary as disabling the script is sufficient and will keep the file for future use.
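To disable the script on all cluster nodes in one pass, a loop such as the following can be used — a sketch, assuming the gpfsnode01 to gpfsnode08 naming of the example cluster:

# for node in gpfsnode0{1..8} ; do ssh $node 'chmod 644 /var/mmfs/etc/mmfsup' ; done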

8.4.5.2 Install HANA on primary site Now install SAP HANA again on the primary site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana. Install SAP HANA with the same parameters as on the backup site; this is very important for DR to work properly. Please make sure that you install the individual HANA nodes with the correct roles, for example five worker nodes and one standby node in a six-nodes-per-site solution. After the installation has finished, deactivate the autostart of the SAP services:

# chkconfig sapinit off

Please verify that the user <sid>adm and the group sapsys have the same UID and GID, respectively, on all nodes. Use the command

# id <sid>adm

and compare the numerical IDs of <sid>adm and group sapsys. You can specify the IDs during the SAP HANA installation either via a configuration file or a command-line parameter; details can be found in the SAP documentation: SAP HANA Server Installation and Update Guide.
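A quick cluster-wide comparison can be scripted — a sketch; replace hanadm with your actual <sid>adm user name (hanadm is only a hypothetical example):

# for node in gpfsnode0{1..8} ; do echo -n "$node: " ; ssh $node id hanadm ; done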

8.4.5.3 Disable mmfsup script on production site nodes An installation with the Recovery Image installs an mmfsup script which automatically starts HANA after the file system comes up. This must be deactivated, as it may start SAP HANA on both sites (using the same hostnames). The script resides in /var/mmfs/etc. Disable it on all cluster nodes.

# chmod 644 /var/mmfs/etc/mmfsup

Note In previous releases of this document the mmfsup script was deleted. This is not necessary as disabling the script is sufficient and will keep the file for future use.


8.4.6 Tiebreaker node setup

8.4.6.1 Quorum node setup using a new node The setup of a new server can be done by following the instructions in section 10.1.2: Prepare quorum node on page 105, excluding the setup of the switches, which does not apply to a DR configuration.

8.4.6.2 Tiebreaker node setup using an existing node If an existing node will be used as the tiebreaker node, please consult the system administrator and ask him to:
• Provide a partition which will be used to hold the GPFS file descriptor information
• Install GPFS
• Build the GPFS portability layer. Note: this may require the installation of the kernel header files / sources and some development tools (compiler, make, ...)
• Set up network access to all other GPFS cluster nodes in the GPFS network
• Exchange SSH keys so that the tiebreaker node root account can be accessed without a password from the other GPFS cluster nodes.
Follow the instructions in sections 10.1.6: Quorum Node IBM GPFS setup on page 108 and 10.1.7: Quorum Node IBM GPFS installation on page 108. General information on how to install and set up GPFS can be found online in the Information Center section Installing GPFS on Linux nodes.

8.4.7 Verify Installation

8.4.7.1 GPFS Cluster configuration
• Verify that all nodes are up and running

# mmgetstate -a

• Verify distribution of the configuration servers The primary and secondary GPFS configuration servers must be on different sites, one on each; otherwise fail-over to the standby site will not work. This is checked with

# mmlscluster

• Verify distribution of quorum nodes The currently active quorum setup can be checked with

# mmgetstate -aLs

The cluster configuration is listed with

# mmlscluster

When using the tiebreaker node, check that the tiebreaker node is a quorum node and that the remaining quorum nodes are distributed evenly among the other file system failure groups. You can see the failure groups with

# mmlsdisk sapmntdata


Information about the failure group setting can be found in section 8.4.2: GPFS Server configuration on page 76. If not using a tiebreaker node, make sure that the active site has at least one more quorum node than the passive site. In general, try to keep an odd number of quorum nodes.
• Verify cluster manager location Verify the location of the cluster manager, which depends on the use of the tiebreaker node:

# mmlsmgr

If the solution uses a tiebreaker node, the cluster manager must be on the passive/backup site; in a solution without a tiebreaker node, the cluster manager must be on the active site. To change the cluster manager, issue

# mmchmgr -c

• Verify replication factor 3 (= three copies, two local and one remote copy)

# mmlsfs sapmntdata

Verify that the following values are all set to 3:

-m   Default number of metadata replicas
-M   Maximum number of metadata replicas
-r   Default number of data replicas
-R   Maximum number of data replicas

• Test replication factor 3 Write a new file to the shared filesystem and verify the replication level applied to this file:

# mmlsattr <file>
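For example — a sketch, with an arbitrary test file name:

# dd if=/dev/zero of=/sapmnt/repltest bs=1M count=10
# mmlsattr /sapmnt/repltest
# rm /sapmnt/repltest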

All values must be set to 3 and no flags (like illbalanced, metaupdatemiss, etc.) must be shown. Please check the GPFS documentation or ask IBM GPFS support if flags are still shown after a restripe.
• Check failure groups You should have four failure groups 1,0,x, 2,0,x, 1,1,x and 2,1,x. If you are using the tiebreaker node, a fifth failure group 3,0,1 should be in the file system. Get the list of failure groups from the disk list

# mmlsdisk sapmntdata

Make sure that the server nodes are distributed evenly among the failure groups.
• Disk availability All GPFS disks must be online.

# mmlsdisk sapmntdata -e
All disks up and ready

If there are disks down or suspended, check the reason (e.g. hardware failure, system reboot, ...) and restart them once the problem has been resolved. The following command will try to start all disks in the file system; it has no effect on already started disks.

# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the following command:


# mmchdisk sapmntdata resume -a

Note Follow the instructions in Section 7: After Installation on page 66.

8.5 Extending a DR-Cluster

This section describes how to grow a DR cluster. Growing a DR-enabled cluster requires that both sites grow by the same number of nodes. In general the installation of each active/backup server couple need not be done at the same time, but it is highly recommended. The overcautious technician may also decide to install the backup node prior to the active node. The following sections only explain the differences from the basic DR installation described in the sections before.

8.6 Mixing eX5/X6 Servers in a DR Cluster

Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 97. Information given there takes precedence over the instructions below.

8.6.1 Hardware Setup

Please refer to 8.3: Hardware Setup on page 71 and follow the instructions there. Ping the new machine on the GPFS network from all machines to test whether the network configuration is correct. Ping the new machine on the HANA network from all servers; it should be reachable only from nodes on the same site.

8.6.2 GPFS Part 1

1. The first step is to add /etc/hosts entries on every machine. Let's assume that the new nodes are the 9th and 10th nodes, with node09 going to the active site and node10 to the backup site. Distribute any new nodes evenly into the existing failure groups (topology), so that a failure group has at most one more node than the others, and put the backup server into the corresponding failure group on the backup site. In the example above, the 9th node will go into failure group 1 (1,0,x) getting the topology vector 1,0,3 and the 10th node will go into failure group 3 (1,1,x) with topology vector 1,1,3. On all existing nodes, add host entries for the GPFS network, e.g.:

192.168.10.109   gpfsnode09
192.168.10.110   gpfsnode10

On the new nodes add entries for all other nodes. Copying the entries from one of the existing nodes is the easiest way. First add host keys for the new nodes to the existing machines. Run on any existing node

# for srcnode in gpfsnode0{1..8} ; do echo node $srcnode ; ssh $srcnode 'for target in gpfsnode09 gpfsnode10 ; do echo -n $target ; ssh-keygen -R $target ; ssh-keyscan -t rsa $target >> /root/.ssh/known_hosts ; done' ; done

The value gpfsnode0{1..8} generates a list from gpfsnode01 to gpfsnode08; if the host names differ or are not consecutive, replace it with a space-separated list of host names. The same applies to the list of the new nodes (gpfsnode09 gpfsnode10 in this example).


Then copy the root SSH key to the new nodes. Issue these commands on one of the existing cluster nodes:

# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa /root/.ssh/id_rsa.pub root@gpfsnode09:/root/.ssh/
# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa /root/.ssh/id_rsa.pub root@gpfsnode10:/root/.ssh/

On all new cluster nodes run this command

# for node in gpfsnode{01..10} ; do echo -n $node ; ssh-keygen -R $node ; ssh-keyscan -t rsa $node >> /root/.ssh/known_hosts ; done

Test the SSH key exchange by running this command on any node

# for srcnode in gpfsnode{01..10} ; do echo from node $srcnode ; ssh $srcnode 'for target in gpfsnode{01..10} ; do echo To node $target ; ssh $target hostname ; done' ; done

The command should run without interaction and errors. 2. Install GPFS (base package):

# cd /var/tmp/install/gpfs-
# rpm -ivh gpfs.base--0.x86_64.rpm

3. Update to the latest GPFS Maintenance Release
Warning It is highly recommended to upgrade to GPFS 3.5.0-17 or higher.

Install the following three packages for the latest (X) maintenance release:

# rpm -ivh gpfs.docs--X.noarch.rpm
# rpm -ivh gpfs.gpl--X.noarch.rpm
# rpm -ivh gpfs.msg.en_US--X.noarch.rpm

4. Verify your GPFS installation:

# rpm -qa | grep gpfs

The installed packages from above should be listed here. 5. Build the GPFS Portability Layer Follow the instructions in /usr/lpp/mmfs/src/README:

# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages

6. To add the new nodes to the cluster run on any running node

# mmaddnode -N gpfsnode09,gpfsnode10

7. Mark the servers as licensed:

# mmchlicense fpo --accept -N gpfsnode09,gpfsnode10

Please use the correct license for the nodes; server and FPO are just examples.


8. Start the new nodes

# mmstartup -N gpfsnode09,gpfsnode10

9. Create the disk descriptor files. Before adding the disks to the shared file system, you must create the disk descriptor or stanza files. You can create them on any node of the cluster, but it is preferably done on the node where the files for the initial cluster creation are located. Please see chapter 8.4.3: GPFS Disk configuration on page 77 for a description of the stanza files. You only need to create entries for the drives on the new nodes, and you can omit the pool configuration entries; a sketch is shown after step 10. Let us assume the new file is /var/mmfs/config/disk.list.data.gpfsnode0910. 10. Create NSDs

# mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode0910
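For reference, the descriptor file created in step 9 could look like this for the two example nodes — a sketch assuming a single HDD RAID device /dev/sdb per node and the topology vectors 1,0,3 and 1,1,3 derived above:

%nsd: device=/dev/sdb
  nsd=data01node09
  servers=gpfsnode09
  usage=dataAndMetadata
  failureGroup=1,0,3
  pool=system
%nsd: device=/dev/sdb
  nsd=data01node10
  servers=gpfsnode10
  usage=dataAndMetadata
  failureGroup=1,1,3
  pool=system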

8.6.3 HANA Backup Node Installation

Skip this for a node on the active site. For the HANA installation on the backup site, we need a temporary filesystem which must satisfy some requirements; RAM based filesystems are not sufficient. We therefore use the freshly created NSDs for a temporary filesystem, install the backup instance, and destroy the temporary filesystem afterwards before continuing with the installation. 1. Create a temporary filesystem

/usr/lpp/mmfs/bin/mmcrfs sapmnttmp -F /var/mmfs/config/disk.list.data.gpfsnode0910 -A no -B 1M -N 3000000 -v no -m 1 -M 3 -r 1 -R 3 -j hcluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor 1 -Q yes

Before continuing with the installation make sure that the GPFS file system sapmntdata is not mounted at /sapmnt on the new nodes. Mount this filesystem on all new backup nodes

mmmount sapmnttmp /sapmnt -N

2. Install HANA on backup site In order to prepare the backup site, it is necessary to do a standard HANA installation and then delete the installed content on the shared filesystem. A tool to automate this procedure is currently in development by SAP. Install SAP HANA on the backup site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana. Do a single node installation on each node. Make sure to use exactly the same SAP SID, SAP instance number, user names, user IDs, group names, group IDs and paths as in the original DR-HANA installation. You can use the command id to query user and group information. 3. Stop HANA and SAP Host agent on backup site Log in as <sid>adm on one node and stop SAP HANA:

$ HDB stop

Then log in as root and stop SAP Host agent and other services:

# /etc/init.d/sapinit stop

Afterwards disable the autostart of the sapinit service


# chkconfig sapinit off

Do the last two steps on all backup nodes. 4. Delete SAP HANA shared content as described in section 8.4.5.1.3. 5. Disable mmfsup script on backup site nodes An installation with the Recovery Image installs an mmfsup script which automatically starts SAP HANA after the file system comes up. This must be deactivated, as it may start SAP HANA on both sites (using the same hostnames). The script resides in /var/mmfs/etc. Disable it on all cluster nodes.

# chmod 644 /var/mmfs/etc/mmfsup

6. Delete temporary filesystem After installing all new backup nodes, unmount the temporary filesystem on all nodes

mmumount sapmnttmp -a

and delete it

mmdelfs sapmnttmp

This will delete all shared HANA content and will leave the node specific HANA parts installed.

8.6.4 GPFS Part 2

1. Add disks to sapmntdata filesystem

# mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnode0910

2. Verify NSD status Verify that all NSDs are up and running:

# mmlsdisk sapmntdata

3. Mount GPFS on the active site On the new active nodes, and only on these, mount the GPFS file system

# mmmount sapmntdata -N gpfsnode09,gpfsnode10

GPFS setup is now complete.

8.6.5 HANA

8.6.5.1 Install HANA on active site 1. Please make sure that you have mounted the shared file system on the new nodes.

# mmlsmount sapmntdata -L

2. If not already installed, install the SAP host agent

# cd /var/tmp/install/saphana/DATA_UNITS/SAP_HOST_AGENT_LINUX_X64
# rpm -ihv saphostagent.rpm

As recommended by the RPM installation, a password for sapadm may be set.


3. Deactivate automatic startup through sapinit at boot. Running SAP's startup script during system boot must be deactivated, as it will be executed by a GPFS startup script after cluster start. Execute:

# chkconfig sapinit off

4. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration Guide".
Warning SAP HANA in this DR solution must be installed using the hostname of the HANA-internal network (usually on bond1, hostname hananodeXX). The host based routing used in the HA solution is not applicable to the DR solution.

8.7 Using Non Productive Instances on Inactive DR Site

IBM supports the installation of storage expansions in a DR scenario to allow clients to run a non-productive SAP HANA instance on idling DR-site nodes. During normal operation in a DR scenario, all nodes at one of the two sites only receive data from the active site and store it on their local disks. SAP tolerates running a non-productive SAP HANA instance on those nodes. The local disks of the nodes are used for production data; a storage expansion is used to provide enough local storage for those non-productive instances. In the event of a disaster, when the backup site becomes the active site, all non-productive SAP HANA instances have to be shut down to allow production to continue to run.

8.7.1 Architecture

This section briefly explains how IBM enables the use of idling DR-site nodes to run non-productive SAP HANA instances.

8.7.1.1 Prerequisites The use of a storage expansion is only supported in a DR scenario. No expansions can be used when running in an HA environment unless they are part of the certified server models. All nodes on the DR site must have a storage expansion connected; having only a subset of the DR-site nodes equipped with storage expansions is not a supported environment. Furthermore, all expansions must have identical disk drives installed. If the customer considers both participating data centers to be equal (which means that after a fail-over of his production instances to the DR site he will not manually fail production back to his site A data center), then storage expansions must be connected to all primary-site nodes as well. These storage expansions will remain unused until you actually need to move data away from DR-site nodes which are then being used to host SAP HANA production instances.

8.7.1.2 Architectural overview The following illustration shows how IBM's solution for SAP HANA DR with storage expansions looks. The expansion storage is visible as local storage only and is connected via the SAS interface; the storage is not shared by multiple nodes.


[Figure: Site A (node1-node4) and Site B (node5-node8); each node has OS partitions sda1/sda2, local HDD and fio devices behind a RAID controller. The production file system holds the first and second replica, replicated across the sites; a second file system spanning only the expansion box drives (metadata and data) is built on the DR-site expansions.]

Figure 30: SAP HANA DR using storage expansion - architectural overview

Attention The external storage can only be used to host data of non-productive SAP HANA instances. The storage must not be used to expand the space of the production file system or to store backups.

8.7.1.3 Architectural comments IBM only supports running GPFS with a replication factor of 2 for the non-productive instance. This means outages of a single node can be handled and no data is lost. We do not support a replication factor of 3, because the scope of non-productive SAP HANA environments does not include disaster recovery. There will be exactly one new file system spanning all DR-site expansion box drives. While we do not support a multi-SID configuration, it is a valid scenario to run, e.g., a QA environment on some DR-site nodes and development on other DR-site nodes; this, however, has to be done on the same file system. IBM does not enable quotas on the new expansion box file system. Make sure to have either a valid backup procedure in place or to regularly delete old backups.

8.7.2 Setup

This section assumes that the nodes have been successfully installed with an operating system already (as required for a backup DR site).

8.7.2.1 Hardware setup Connect the EXP2524 SAS port labeled 'In' to one of the M5120 or M5225 ports. For details, see the EXP2524 Installation Guide. Configure the drives as described in section 6: Guided Install of the Lenovo Solution on page 41. Either reboot or rescan the SCSI bus and verify that Linux recognizes the new drives.


8.7.2.2 GPFS configuration You reuse the existing GPFS cluster and create a second file system spanning only the expansion drives of the DR-site nodes. Even if your setup includes expansions on the primary site, execute the procedure only on the DR-site expansions; the primary site expansion drives will not be used in the beginning. 1. On each DR-site node, collect the device names of all expansion drives. When using the M5225 controller you can get the drive names with this command:

# lsscsi | grep "M5225" | grep -o -E "/dev/sd[a-z]+"

or execute following command in case M5120 Controller is used:

# lsscsi | grep "M5120" | grep -o -E "/dev/sd[a-z]+"

You will end up with something like:

/dev/sde
/dev/sdf
/dev/sdg
/dev/sdh

for each DR-site node. Note: after sdz, Linux wraps around and continues with sdaa, sdab, ...
2. Create additional NSDs For all new expansion drives, create NSDs according to the following rules:
(a) all NSDs will be dataAndMetadata
(b) all NSDs go into the system pool
(c) the naming scheme is extXXgpfsnodeYY, with XX being the two-digit drive number and YY being the node number
(d) one failure group for all drives within one expansion box
Example: three M-size nodes with a 32-drive expansion (gpfsnode01-03 are primary site nodes, 04-06 are secondary site/DR-site nodes)

/dev/sde:gpfsnode04::dataAndMetadata:4:ext01gpfsnode04:system
/dev/sdf:gpfsnode04::dataAndMetadata:4:ext02gpfsnode04:system
/dev/sdg:gpfsnode04::dataAndMetadata:4:ext03gpfsnode04:system
/dev/sdh:gpfsnode04::dataAndMetadata:4:ext04gpfsnode04:system
/dev/sde:gpfsnode05::dataAndMetadata:5:ext01gpfsnode05:system
/dev/sdf:gpfsnode05::dataAndMetadata:5:ext02gpfsnode05:system
/dev/sdg:gpfsnode05::dataAndMetadata:5:ext03gpfsnode05:system
/dev/sdh:gpfsnode05::dataAndMetadata:5:ext04gpfsnode05:system
/dev/sde:gpfsnode06::dataAndMetadata:6:ext01gpfsnode06:system
/dev/sdf:gpfsnode06::dataAndMetadata:6:ext02gpfsnode06:system
/dev/sdg:gpfsnode06::dataAndMetadata:6:ext03gpfsnode06:system
/dev/sdh:gpfsnode06::dataAndMetadata:6:ext04gpfsnode06:system

Store as /tmp/nsdlistexp.txt. Then create NSDs using those disks

# mmcrnsd -F /tmp/nsdlistexp.txt

3. Create file system

# mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 512k -N 3000000 -v no -m 2 -M 2 -r 2 -R 2 -j hcluster --write-affinity-depth 1 -s failureGroupRoundRobin --block-group-factor=1 -T /sapmntext


Warning Be sure to use nsdlistexp.txt and not your list with internal drives! Using the wrong drives can destroy your production data!
4. Mount the file system on DR-site nodes only.

# mmmount sapmntext -N [list of DR-site nodes]

5. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration Guide". Take care to install HANA on /sapmntext and not on /sapmnt. Also take care that you do not reuse the UID (user id) and GID (group id) of the DR HANA instance, especially when installing non-productive HANA instances before installing the DR instance. If you have expansion boxes connected to your primary site nodes as well, they only get activated when you need to migrate the non-productive SAP HANA instances' data away from DR-site nodes. See the Lenovo SAP HANA Appliance Operations Guide16 for details.

16 SAP Note 1650046 (SAP Service Marketplace ID required)


9 Mixed eX5/X6 Environments

9.1 Mixed eX5/X6 HA Clusters

Attention This chapter only applies to hybrid clusters consisting of servers with Intel Westmere and Intel Ivy Bridge CPUs. Hybrid clusters with a mix of Intel Westmere and Intel Haswell CPUs must not be installed!

9.1.1 Definition & Overview

A mixed eX5/X6 cluster is a System x Solution for SAP HANA cluster consisting of eX5 based servers (Intel Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivy Bridge, MT 3837 and 6241). Another term used is "hybrid cluster". Due to the new storage layout for X6-only installations, an X6 configuration must be slightly modified before an X6 node can be added to an eX5 cluster; such an X6 node is considered to be configured in legacy or compatibility mode. Besides the different storage layout, there are some minor configuration differences between the older Westmere appliance releases and the first X6 appliance versions. These are explained below; future releases will level out the differences.

9.1.2 Prerequisites & Limitations

9.1.2.1 Limit of X6 nodes in a cluster The maximum number of X6 servers in an eX5 cluster is limited by the number of eX5 servers within that cluster: the number of X6 servers must always be less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported options are either to increase the number of eX5 servers so that they remain the majority, or to switch to a pure X6 cluster, which requires a reinstallation. For each eX5 server model there exists a corresponding X6 server model which is permitted as a replacement:

eX5 T-Shirt Size                      X6 Server Model
SSD (x3690, 7147-H3X, Generation 1)   AC32S256C (2 CPUs, 256GB RAM)
S (x3690, 7147-HBX, Generation 2)     AC32S256C (2 CPUs, 256GB RAM)
M (x3950, 7143-H2X or 7143-HBX)       AC34S512C (4 CPUs, 512GB RAM)
L (x3950, 7143-H3X or 7143-HBX)       AC48S1024C (8 CPUs, 1024GB RAM)

Table 38: eX5 T-Shirt Size to X6 Model Mapping

9.1.2.2 Prerequisites Before deploying any X6 server to an eX5 cluster, the GPFS filesystem software on the eX5 servers must be updated to the same version installed on the X6 models. The minimum supported GPFS versions for the cluster are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 8 (4.1.0-8), which may require an update even on the X6 nodes. Alternatively PTF 17 (3.5.0-17) with eFix 8 can be used; contact IBM support to obtain this eFix. Do not use plain 3.5.0-17 without eFix 8! It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient than the previously used RAID0 configuration. When installing a new cluster, please use appliance version 1.6.60-7 or later for the eX5 servers. Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system quotas. In a hybrid cluster please use the script on the eX5 cluster node installed with the latest appliance


version. If this script is not available, please calculate the quotas manually following the instructions in the appendix of the eX5 Operations Guide. Since appliance version 1.7.70-9 an updated quota calculation script is installed which can detect a hybrid cluster environment, enabling it to use the correct formulas even when called on X6 nodes.

9.1.3 New Installation

In general, the installation and operation instructions for eX5 and X6-based servers remain valid. For eX5 servers, please use the installation description in the Lenovo eX5 Systems Solution for SAP HANA - Implementation Guide. For the installation of the X6 servers, please use the Lenovo X6 Systems Solution for SAP HANA - Implementation Guide for System x X6 Servers and read the instructions below. Please read these instructions before installing the new server and take care to implement them correctly. Follow the Implementation Guide up to and including the call of the script saphana-setup-saphana.sh with the Cluster (Worker) option. Do not execute the script with the Cluster (Master) option; this means the script is only called once.

9.1.3.1 Partitioning for M/L sized clusters For X6 nodes in M/L (x3950 based) clusters the first internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase 2, log in to the server and run

# parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary.
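To verify the result, the partition table can be printed afterwards — a sketch:

# parted /dev/sdb --script print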

9.1.3.2 Adapting the GPFS stanza file After configuring the base system and the subsequent reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage layout. For S/SSD model based clusters no change is needed, as these models use only one GPFS storage pool like the new X6 models. In clusters based on x3950 models, storage is divided into two GPFS storage pools. The new X6 servers must provide these two storage pools in order to be compatible. This is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the 2nd RAID array in the external SAS enclosure (AC34S512C) or in the upper storage book (AC48S1024C), respectively, to the storage pool hddpool. Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the usage and pool parameters as shown in table 39: Stanza file for X6 servers in eX5 clusters on page 93. Please set nsd, servers and failureGroup to their correct values. Complete the installation as described in the eX5 Implementation Guide and run phase 3 (of the cluster configuration) from any eX5 node. Do not run the cluster configuration on an X6 machine, as this would result in a misconfigured cluster. It is safe to install the whole cluster including the X6 servers from any eX5 node.

9.1.3.3 Enable automatic restripe for the whole cluster eX5 models up to appliance software version 1.6.60-7 installed a script which attempts to start all NSDs and restripes the GPFS filesystem if any NSD was not up. This script was installed as a GPFS callback which gets triggered upon every node start. Since appliance version 1.7.70-8 the script and the callback are no longer installed; they are replaced by a GPFS-internal restripe mechanism, which is enabled by setting the cluster configuration value restripeOnDiskFailure=yes.


Model AC32S256C (S/SSD)
  Generated file:
    %nsd: device=/dev/sdb
      nsd=data01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
  Change to: (no change required)

Model AC34S512C (M) and AC48S1024C (L)
  Generated file:
    %nsd: device=/dev/sdb
      nsd=data01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
    %nsd: device=/dev/sdc
      nsd=data02node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
  Change to:
    %nsd: device=/dev/sdb1
      nsd=MDdata01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
    %nsd: device=/dev/sdb2
      nsd=MDdata02node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
    %nsd: device=/dev/sdc
      nsd=data01node04
      servers=gpfsnode04
      usage=dataOnly
      failureGroup=1004
      pool=hddpool

Table 39: Stanza file for X6 servers in eX5 clusters


In a mixed cluster you must delete the callback and enable the new GPFS-internal restripe. Deactivate the callback and enable the automatic restripe with the following commands:

# mmdelcallback start-disks-on-startup
# mmchconfig restripeOnDiskFailure=yes

Both commands need to be run only once on any active cluster node.
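Whether the change took effect can be verified with the GPFS listing commands — a sketch; mmlscallback should no longer show the start-disks-on-startup callback:

# mmlscallback
# mmlsconfig restripeOnDiskFailure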

9.1.4 Existing Cluster Extension/Node Replacement

When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the eX5 Implementation & Operations Guides. No special handling is required besides using the saphana-quota-calculator.sh script only on eX5 nodes or on X6 nodes installed with appliance version 1.7.70-9 or later. Do not run the quota calculator on any X6 node installed with appliance version 1.7.70-8. When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6 nodes according to the X6 Implementation Guide. After phase 2 (the basic configuration), the first internal RAID array of X6 nodes in M/L (x3950 based) clusters needs to be partitioned at the OS level. Log in to the server and run

# parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary. Afterwards adapt the generated stanza file on each node before adding these nodes to the cluster. Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the usage and pool parameters as shown in table 40: Stanza file for X6 servers in eX5 clusters on page 95. Please set nsd, servers and failureGroup to their correct values. Follow the normal instructions given in the eX5 Operations Guide in chapter 4.2 Adding a cluster node. Afterwards either run the quota calculation script from any eX5 node or from any X6 node installed with appliance version 1.7.70-9 or later, or do the manual calculation described in the appendix section of the eX5 Operations Guide.

9.1.5 Deviating Operation Instructions

In general the eX5 Operations Guide is applicable for the whole cluster including the new X6 servers.

9.1.5.1 Quota Calculation eX5 based servers use two so-called filesets for a logical separation of HANA data volumes and log files; each fileset is limited by a quota. X6 servers use three filesets, separating HANA data volumes, log files and the shared parts (like binaries, configuration, traces, backups). When using X6 servers in an eX5 cluster, the two-fileset setup is used on all nodes, so for the quotas the eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix of that guide. On any eX5 node and on X6 nodes with appliance version 1.7.70-9 or later you can use the quota calculation script saphana-quota-calculator.sh; its usage is also documented in the quota chapter in the appendix.

9.1.5.2 HANA installation When installing additional SAP HANA instances or reinstalling SAP HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation.


Model AC32S256C (S/SSD)
  Generated file:
    %nsd: device=/dev/sdb
      nsd=data01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
  Change to: (no change required)

Model AC34S512C (M) and AC48S1024C (L)
  Generated file:
    %nsd: device=/dev/sdb
      nsd=data01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
    %nsd: device=/dev/sdc
      nsd=data02node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
  Change to:
    %nsd: device=/dev/sdb1
      nsd=MDdata01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
    %nsd: device=/dev/sdb2
      nsd=MDdata02node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
    %nsd: device=/dev/sdc
      nsd=data01node04
      servers=gpfsnode04
      usage=dataOnly
      failureGroup=1004
      pool=hddpool

Table 40: Stanza file for X6 servers in eX5 clusters


9.1.5.3 Storage Device Failure For any failed storage device in an eX5 based node, the Implementation & Operations Guides for eX5 are fully applicable. For X6 based nodes please use the Operations Guide for X6. The only difference in handling is that the stanza files given in 9.1.4: Existing Cluster Extension/Node Replacement on page 94 must be used. Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.


9.2 Mixed eX5/X6 DR Clusters

Attention This chapter only applies to hybrid clusters consisting of servers with Intel Westmere and Intel Ivy Bridge CPUs. Hybrid clusters with a mix of Intel Westmere and Intel Haswell CPUs must not be installed!

9.2.1 Definition & Overview

A mixed eX5/X6 DR cluster is a Lenovo Solution DR-enabled cluster consisting of eX5 based servers (Intel Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivy Bridge, MT 3837 and 6241). Another term used is "hybrid DR cluster". Due to the new storage layout for X6-only installations, an X6 configuration must be slightly modified before an X6 node can be added to an eX5 cluster; such an X6 node is considered to be configured in legacy or compatibility mode. Besides the different storage layout, there are some minor configuration differences between the older Westmere appliance releases and the first X6 appliance versions. These are explained below; future releases will level out the differences.

9.2.2 Prerequisites & Limitations

9.2.2.1 Limit of X6 nodes in a cluster The maximum number of X6 servers in an eX5 DR cluster is limited by the number of eX5 servers within that cluster: the number of X6 servers must always be less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported options are either to increase the number of eX5 servers so that they remain the majority, or to switch to a pure X6 cluster, which requires a reinstallation. For DR clusters we require that each of the two sites (primary & secondary) consists only of eX5 servers, only of X6 servers, or of a mix of eX5 and X6 servers where the eX5 servers have the majority on that site. For example, these combinations are allowed:
• Primary site: 6 eX5, secondary site: 6 X6 servers. This is allowed, as no site is mixed.
• Primary site: 6 eX5, secondary site: 4 eX5 & 2 X6 servers. This is allowed, as the first site is not mixed and the eX5 servers have the majority on the secondary site.
• Primary site: 4 eX5 & 3 X6, secondary site: 6 eX5 & 1 X6. Both sites are mixed, but on each site the eX5 servers are the majority.
These combinations are not allowed:
• Primary site: 3 eX5 & 3 X6, secondary site: 6 eX5 servers. This is not allowed, as on the first site the eX5 servers are not the majority.
• Primary site: 4 eX5 & 3 X6, secondary site: 6 eX5 servers. The eX5 servers are the majority on both sites, but the sites differ in size.
For each eX5 server model there exists a corresponding X6 server model which is permitted as a replacement:


eX5 T-Shirt Size                      X6 Server Model
SSD (x3690, 7147-H3X, Generation 1)   AC32S256C (2 CPUs, 256GB RAM)
S (x3690, 7147-HBX, Generation 2)     AC32S256C (2 CPUs, 256GB RAM)
M (x3950, 7143-H2X or 7143-HBX)       AC34S512C (4 CPUs, 512GB RAM)
L (x3950, 7143-H3X or 7143-HBX)       AC48S1024C (8 CPUs, 1024GB RAM)

Table 41: eX5 T-Shirt Size to X6 Model Mapping

9.2.2.2 Prerequisites Before deploying any X6 server to an eX5 cluster, the GPFS filesystem software on the eX5 servers must be updated to the same version installed on the X6 models. The minimum supported GPFS versions for hybrid DR clusters are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 8 (4.1.0-8), which may require an update even on the X6 nodes. Alternatively PTF 17 (3.5.0-17) with eFix 8 can be used; contact IBM support to obtain this eFix. Do not use plain 3.5.0-17 without eFix 8! It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient than the previously used RAID0 configuration. When installing a new cluster, please use appliance version 1.6.60-7 or later for the eX5 servers. Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system quotas. In a hybrid cluster please use the script on the eX5 cluster node installed with the latest appliance version. If this script is not available, please calculate the quotas manually following the instructions in the appendix of the eX5 Operations Guide. Since appliance version 1.7.70-9 an updated quota calculation script is installed which can detect a hybrid cluster environment, enabling it to use the correct formulas even when called on X6 nodes.

9.2.3 New Installation

In general, the installation and operation instructions for eX5 and X6-based servers remain valid. For eX5 servers, please use the installation description in the Lenovo eX5 Systems Solution for SAP HANA - Implementation Guide. For the installation of the X6 servers, please use the Lenovo X6 Systems Solution for SAP HANA - Implementation Guide for System x X6 Servers and read the instructions below. Please read these instructions before installing the new server and take care to implement them correctly. Follow the Implementation Guide up to and including the call of the script saphana-setup-saphana.sh with the Cluster (Worker) option. Do not execute the script with the Cluster (Master) option; this means the script is only called once.

9.2.3.1 Partitioning for M/L sized clusters For X6 nodes in M/L (x3950 based) clusters the first internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase 2, log in to the server and run

# parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart system2 ext2 "" 1675 3350

For SSD/S sized clusters this is not necessary.

9.2.3.2 Adapting the GPFS stanza file After configuring the base system and the subsequent reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage layout. For S/SSD model based clusters no change is needed, as these models use only one GPFS storage pool like the new X6 models. In clusters based on x3950 models, storage is divided into two GPFS


storage pools. The new X6 servers must provide these two storage pools in order to be compatible. This is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the 2nd RAID array in the external SAS enclosure (AC34S512C) or in the upper storage book (AC48S1024C), respectively, to the storage pool hddpool. Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the usage and pool parameters as shown in table 42: Stanza file for X6 servers in eX5 clusters on page 100. Please set nsd, servers and failureGroup to their correct values. Complete the installation as described in the chapter "Disaster Recovery" in the Implementation Guide for eX5.

9.2.4 Existing Cluster Extension/Node Replacement

When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the Disaster Recovery sections of the eX5 Implementation & Operations Guides. Do not run the quota calculator on any X6 node installed with appliance version 1.7.70-8. When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6 nodes according to the X6 Implementation Guide. After phase 2 (the basic configuration), adapt the generated stanza file on each node before adding these nodes to the cluster. Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the usage and pool parameters as shown in table 43: Stanza file for X6 servers in eX5 clusters on page 101. Please set nsd, servers and failureGroup to their correct values. Follow the normal instructions given in the eX5 Operations Guide in chapter 4.2 Adding a cluster node. Afterwards either run the quota calculation script from any eX5 node or from any X6 node installed with appliance version 1.7.70-9 or later, or do the manual calculation described in the appendix section of the eX5 Operations Guide.

9.2.5 Deviating Operation Instructions

In general the eX5 Operations Guide is applicable for the whole cluster including the new X6 servers.

9.2.5.1 Quota Calculation eX5 based servers use two so-called filesets for a logical separation of HANA data volumes and log files; each fileset is limited by a quota. X6 servers use three filesets, separating HANA data volumes, log files and the shared parts (like binaries, configuration, traces, backups). When using X6 servers in an eX5 cluster, the two-fileset setup is used on all nodes, so for the quotas the eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix of that guide. On any eX5 node and on X6 nodes with appliance version 1.7.70-9 or later you can use the quota calculation script saphana-quota-calculator.sh; its usage is also documented in the quota chapter in the appendix.

9.2.5.2 HANA installation When installing additional SAP HANA instances or reinstalling SAP HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation. Note In the DR solution a quota is set only on the hanalog fileset.


Model AC32S256C (S/SSD)
  Generated file:
    %nsd: device=/dev/sdb
      nsd=data01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
  Change to:
    %nsd: device=/dev/sdb
      nsd=data01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1,0,4
      pool=system

Model AC34S512C (M)
  Generated file:
    %nsd: device=/dev/sdb
      nsd=data01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
    %nsd: device=/dev/sdc
      nsd=data02node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
  Change to:
    %nsd: device=/dev/sdb1
      nsd=MDdata01node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1,0,4
      pool=system
    %nsd: device=/dev/sdb2
      nsd=MDdata02node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1,0,4
      pool=system
    %nsd: device=/dev/sdc
      nsd=data01node04
      servers=gpfsnode04
      usage=dataOnly
      failureGroup=1,0,4
      pool=hddpool

Model AC48S1024C (L)
  Generated file (as for AC34S512C, plus):
    %nsd: device=/dev/sdd
      nsd=data03node04
      servers=gpfsnode04
      usage=dataAndMetadata
      failureGroup=1004
      pool=system
  Change to (as for AC34S512C, plus):
    %nsd: device=/dev/sdd
      nsd=data02node04
      servers=gpfsnode04
      usage=dataOnly
      failureGroup=1,0,4
      pool=hddpool

Table 42: Stanza file for X6 servers in eX5 clusters


Model AC32S256C (S/SSD)

Generated file:

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Change to:

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system

Model AC34S512C (M)

Generated file:

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Change to:

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool

Model AC48S1024C (L)

Generated file:

%nsd: device=/dev/sdb
  nsd=data01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdc
  nsd=data02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system
%nsd: device=/dev/sdd
  nsd=data03node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1004
  pool=system

Change to:

%nsd: device=/dev/sdb1
  nsd=MDdata01node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdb2
  nsd=MDdata02node04
  servers=gpfsnode04
  usage=dataAndMetadata
  failureGroup=1,0,4
  pool=system
%nsd: device=/dev/sdc
  nsd=data01node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool
%nsd: device=/dev/sdd
  nsd=data02node04
  servers=gpfsnode04
  usage=dataOnly
  failureGroup=1,0,4
  pool=hddpool

Table 43: Stanza file for X6 servers in eX5 clusters


9.2.5.3 Storage Device Failure For any failed storage device in an eX5-based node, the Implementation & Operations Guides for eX5 are fully applicable. For X6-based nodes please use the Operations Guide for X6. The only difference in handling is that the stanza files given in 9.2.4: Existing Cluster Extension/Node Replacement on page 99 must be used. Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.


10 Special Single Node Installation Scenarios

This section covers installations that consist of a single production node and need HA or DR features using SAP System Replication or IBM GPFS storage replication.

10.1 Single Node with HA Installation with Side-car Quorum Solution

A single node with high availability (HA) describes the smallest possible configuration for a highly available Lenovo solution for an SAP HANA system. In principle, this can be described as a cluster where only a single node is highly available, since there is only one SAP HANA worker node. There is no distribution of information across the nodes as there is no secondary worker node attached. Figure 31: Single Node with High Availability on page 103 shows a high-level overview of the system landscape with two SAP HANA appliances and an IBM GPFS quorum node.

[Diagram: worker node, standby node and quorum node, each attached to a pair of G8264 switches joined by an Inter-Switch Link (ISL); the legend distinguishes GPFS links and SAP HANA links.]

Figure 31: Single Node with High Availability

The major difference between a single node HA configuration and larger scale-out clusters is the requirement to have a third node to build a quorum for the IBM GPFS file system. Therefore, the smallest possible setup needs to contain three nodes: two Lenovo Workload Optimized Systems for SAP HANA and one quorum node. The third node can be, for example, a plain Lenovo System x3550 M4 system. The described solution implements a simple 1U server as quorum node for IBM GPFS. This node does not contribute to the file system with any data disks, but does contribute to the IBM GPFS cluster. The file system layout is shown in Figure 32: File System Layout - Single Node HA on page 104.


[Diagram: shared file system spanning node1 and node2, each holding data and metadata (first and second replica) on HDD plus a file system descriptor (FS Desc); node3 (quorum) holds a file system descriptor only; failure groups FG1, FG2 and FG3; the OS resides on sda1/sda2 of each node.]

Figure 32: File System Layout - Single Node HA

10.1.1 Installation of SAP HANA appliance single node with HA

To begin the installation, you need to install both Lenovo Workload Optimized Systems using the steps at the beginning of chapter 6: Guided Install of the Lenovo Solution on page 41. Configure the network interfaces (internal and external) and the NTP server(s) as described there.
1. Start the text-based installer as follows on each of the two nodes:

saphana-setup-saphana.sh -H

The switch -H prevents SAP HANA from being installed automatically; this needs to be done manually later. Refer to the steps as stated in section 6.6.2.2: Cluster Installation on page 64 together with the steps described below.
2. Select Cluster (worker). This performs a basic installation as a cluster node.
3. Start the installer again as above with the option -H, this time only on the future master node. This time select Cluster (Master). See again section 6.6.2.2: Cluster Installation on page 64.
4. The quorum node is then installed and configured manually to include its own IBM GPFS NSD in the file system cluster.


10.1.2 Prepare quorum node

The quorum node can be, for example, a Lenovo System x3550 M4 with a single CPU and three local disks in a RAID5 configuration. It also contains an Emulex Virtual Fabric Adapter II with two 10 Gigabit Ethernet ports. We recommend the following server configuration as quorum node, as it offers the best price/performance for this role; bigger systems only incur higher GPFS license costs and are not needed. See table 44 on page 105.

Part Number  Description                                                                  Qty.
7914B2G      x3550 M4, Xeon 4C E5-2609 80W 2.4GHz/1066MHz/10MB, 1x4GB,                    1
             O/Bay 2.5in & HS SAS/SATA, SR M1115, 550W p/s, Rack
49Y1397      8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM            6
90Y8877      IBM 300GB 2.5in SFF 10K 6Gbps HS SAS HDD                                     3
81Y4481      ServeRAID M5110 SAS/SATA Controller for IBM System x                         1
90Y6456      Emulex Dual Port 10GbE SFP+ Embedded VFA III for IBM System x                1
81Y4487      ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x Express   1
00D7087      IBM System x 550W High Efficiency Platinum AC Power Supply                   1
00D8042      SLES 2 Socket Std SUSE Support 3Yr                                           1
68Y9124      IBM GPFS for x86 Architecture, GPFS Server Per 10 VUs w/1 Yr SW S&S          28
46M0902      IBM UltraSlim Enhanced SATA Multi-Burner                                     1
69Y5681      x3550 M4 ODD Cable                                                           1
90Y3901      IBM Integrated Management Module Advanced Upgrade                            1
91Y6450      3yr Essentials HW and SW Support                                             1
39Y7932      4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable                      2

Table 44: IBM System x3550 M4 GPFS quorum node

10.1.2.1 Install the Operating System You may use SLES (SUSE Linux Enterprise Server) 11 to install the OS on this machine using the default settings. While installing Linux, please select the pattern "C/C++ Compiler and Tools" in addition to the default selection of software. If you did not do this at install time, open the YaST software panel and install this pattern before installing and compiling GPFS.
Note: SLES 11 does not contain RAID drivers for the IBM ServeRAID M5110 RAID controller (see table 44). In order to install this driver at installation time, you must prepare a USB drive with the appropriate ServeRAID device update driver (dud) file, which can be found on IBM Fix Central. Load the driver update at the boot splash screen during the installation. Please refer to the driver README instructions for further details.

Note: We recommend to always use the latest version of SLES for the quorum node. You can download the IBM ServeRAID drivers from the IBM support sites, e.g. http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5082165. If you install using the SLES for SAP Applications 11 DVD, you will be able to install with this dud file, but you will not be able to reboot the system, because the device driver used during installation is not compatible with the newer kernel delivered on the SLES for SAP Applications 11 installation media. Therefore we do not recommend using the SLES for SAP Applications 11 installation media for this server.

10.1.2.2 Disk partitioning The SLES 11 installation media will automatically partition your hard drive if you do not remove the boot option "autoyast=usb:///" completely. Although this is not dramatic, it would mean you would have to use a tool like gparted to resize the partitions afterwards; this is not described in this document. We recommend removing the boot option "autoyast=usb:///" completely and configuring the partitions manually as described in Table 45: Single Node with HA OS Partitioning on page 106.

Device     Size  Mount point
/dev/sda1  rest  /
/dev/sda2  10GB  swap
/dev/sda3  10GB  not mounted - not formatted - used for GPFS NSD

Table 45: Single Node with HA OS Partitioning
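If you prefer the command line over the YaST partitioner, the layout above can be expressed with parted. The following is only a sketch, assuming the disk is /dev/sda; the negative offsets are counted from the end of the disk and must be adapted to your disk size:

parted -s /dev/sda mklabel gpt
# root partition takes the rest of the disk
parted -s /dev/sda mkpart primary ext3 1MiB -20GiB
# 10GB swap
parted -s /dev/sda mkpart primary linux-swap -20GiB -10GiB
# 10GB partition for the GPFS NSD, left unformatted
parted -s /dev/sda mkpart primary -10GiB 100%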

10.1.2.3 Firewall Disable the integrated firewall during the network configuration steps; otherwise you will not be able to connect to the server until the firewall has been configured correctly. It may be turned on later and configured according to the SAP HANA security guidelines.

10.1.3 Quorum Node Network Setup

Follow the information in table 46: Single Node with HA OS Networking Setup on page 106 to set up the networking during the OS installation.

Network          Description
10GbE port 0     Connect this 10GigE port to the first G8264 switch
10GbE port 1     Connect this 10GigE port to the second G8264 switch
bond0            Bond port 0 and port 1 together.
                 Set the bonding options to: mode=4 xmit_hash_policy=layer3+4
Host Name        gpfsnode99
GPFS IP address  Place at the end of the range (e.g. 192.168.10.253)
HANA IP address  Not needed, as this node will not run SAP HANA.

Table 46: Single Node with HA OS Networking Setup
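For reference, the resulting bonding setup can be written down as a SLES ifcfg file. The following is a minimal sketch, assuming the two 10GbE ports appear as eth0 and eth1 and that the GPFS IP address from the table is used; adapt interface names and address to your system:

# /etc/sysconfig/network/ifcfg-bond0 (sketch; eth0/eth1 are assumptions)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.10.253/24'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=4 xmit_hash_policy=layer3+4'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'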

Figure 33 on page 107 shows the typical network setup for a single node with HA cluster. Deviations are possible for the management, client access and ERP replication networks, depending on the actual customer requirements.


[Diagram: network switch setup for single node with HA. Node1, node2 and the quorum node connect bonded 10GbE interfaces for GPFS and SAP HANA to both G8264 switches, which are joined by inter-switch links; 1GbE IMM links and customer-zone connections (system management, SAP client access, SAP Business Suite) attach to switches of the customer's choice.]

Figure 33: Network Switch Setup for Single Node with HA

10.1.3.1 Switch configuration The network switches need to be configured in the standard scale-out configuration, described in section 5.6.7: Network Configurations in a Clustered Environment on page 30. The 10GigE connections of the additional quorum node will be configured as an extension of the existing vLAG configuration. The ports of the new network links need to be added to the correct VLANs, and the vLAG and LACP settings need to be made.

Description      G8264 Switch #1  G8264 Switch #2
ports            22               22
vLAG - LACP key  1002             1002
PVID             101              101

Table 47: Single Node with HA Network Switch Definitions

10.1.4 Adapt hosts file

The hosts file /etc/hosts on all three cluster nodes needs to contain the following entries. Change the IP addresses to the ones used in your scenario, and add any entries that are missing, for instance external hostnames.

192.168.10.101 gpfsnode01 gpfsnode01
192.168.10.102 gpfsnode02 gpfsnode02
192.168.10.253 gpfsnode99 gpfsnode99

10.1.5 SSH configuration

The ssh configuration also needs to be extended to the third node. Each node needs to have the public ssh keys of every other node so that communication between the GPFS nodes is guaranteed.


10.1.5.1 Generate the ssh key on the quorum node Run the following command to generate the set of ssh keys on the quorum node:

ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''

The key needs to be copied to all cluster nodes. Run the following commands on the quorum node, one for each host:

ssh-copy-id gpfsnode01
ssh-copy-id gpfsnode02

Run the following command on each of the first two nodes with the GPFS private network hostname of the new quorum node:

ssh-copy-id gpfsnode99
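A quick way to verify the ssh setup is to run a small loop on each of the three nodes; it should print all hostnames without prompting for a password:

for h in gpfsnode01 gpfsnode02 gpfsnode99; do ssh $h hostname; done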

10.1.6 Quorum Node IBM GPFS setup

Update the file /var/mmfs/config/nodes.cluster on the first node (gpfsnode01) to the following content, as it may be needed later:

gpfsnode01:quorum
gpfsnode02:quorum
gpfsnode99:quorum

Besides the necessary number of quorum nodes it is also required to have a quorum on the file system descriptor. The number of copies of the file system descriptor depends on the number of disks in different failure groups. To maintain file system operations, GPFS requires a quorum of the majority of the replicas of the file system descriptor. For a two node HA cluster it is therefore necessary to also have a copy of the descriptor on the quorum node. A disk needs to be made available to GPFS on the additional quorum node which will only hold a copy of the file system descriptor; it does not hold any data or metadata.

10.1.7 Quorum Node IBM GPFS installation

Perform the following commands as user root. Copy the GPFS installer files from the master node:

mkdir -p /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS-4.1* /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS_4.1* /var/tmp/install/gpfs-4.1

This should give you the base installer archive GPFS_4.1_STD_LSX_QSG.tar.gz and the PTF GPFS-4.1.0.1-x86_64-Linux.standard.tar.gz. Extract the IBM GPFS archives and start the installer:

cd /var/tmp/install/gpfs-4.1
tar xvf GPFS_4.1_STD_LSX_QSG.tar.gz
tar xvf GPFS-4.1.0.1-x86_64-Linux.standard.tar.gz
./gpfs_install-4.1.0-0_x86_64 --dir . --text-only

Accept the license by pressing "1". Then install the RPMs:


gpfs_release=$(ls gpfs.base-*.x86_64.rpm | cut -d- -f2)
gpfs_update_fixpack=$(ls gpfs.base-*.x86_64.update.rpm | cut -d- -f3 | cut -d. -f1)

rpm -ivh gpfs.base-${gpfs_release}-0.x86_64.rpm
rpm -ivh gpfs.gpl-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.msg.en_US-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.docs-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.gskit-*.x86_64.rpm
rpm -ivh gpfs.ext-${gpfs_release}-0.x86_64.rpm

rpm -Uvh gpfs.base-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm
rpm -Uvh gpfs.ext-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm
rpm -Uvh gpfs.gpl-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
rpm -Uvh gpfs.msg.en_US-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
rpm -Uvh gpfs.docs-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm

Copy the license:

mkdir -p /usr/lpp/mmfs/4.1/
cp -pr license /usr/lpp/mmfs/4.1/
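To verify that all IBM GPFS packages were installed at the expected level, you can list them afterwards:

rpm -qa | grep -i gpfs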

10.1.7.1 Build the IBM GPFS Portability Layer Follow the instructions in /usr/lpp/mmfs/src/README. In general, you may build the IBM GPFS libraries as follows:

cd /usr/lpp/mmfs/src
make Autoconfig
make World
make InstallImages

10.1.7.2 Change SUSE Linux local settings 1. Create /etc/profile.d/saphana-profile.sh:

PATH=$PATH:/usr/lpp/mmfs/bin

2. Change file permissions:

chmod 644 /etc/profile.d/saphana-profile.sh

3. Activate the new PATH variable

source /etc/profile.d/saphana-profile.sh

4. Create a dump-directory for IBM GPFS

mkdir /tmp/GPFSdump

5. Create a configuration-directory for IBM GPFS

mkdir /var/mmfs/config


10.1.8 Add quorum node

Execute the following commands on the primary node: 1. Add the additional node to the cluster:

mmaddnode gpfsnode99

2. Mark the new node as correctly licensed:

mmchlicense server --accept -N gpfsnode99

3. Mark the backup node and the quorum node as quorum nodes for the cluster:

mmchnode --quorum -N gpfsnode02,gpfsnode99

4. Start IBM GPFS on the new node:

mmstartup
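To confirm that the GPFS daemon is active on all three nodes, you can check the cluster state from any node; the state should be reported as "active" for gpfsnode01, gpfsnode02 and gpfsnode99:

mmgetstate -a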

10.1.9 Create descriptor disk

Create a disk descriptor file in the configuration directory of the quorum node: /var/mmfs/config/disk.list.quorum.gpfsnode99. It should contain the following line, which defines the disk partition on the quorum node as an NSD with the explicit function of holding the file system descriptor:

/dev/sda3:gpfsnode99::descOnly:1099:quorum01node99

Create the NSD by running the mmcrnsd command on the quorum node:

mmcrnsd -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.10 Add disk to file system

After creating the NSD the disk needs to be added to the file system by running the mmadddisk command:

mmadddisk sapmntdata -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no

10.1.11 Verify Cluster Setup

Execute the command mmlscluster on one of the cluster nodes. The output should look similar to this:

GPFS cluster information
========================
  GPFS cluster name:         HANAcluster.gpfsnode01
  GPFS cluster id:           12394192078945061775
  GPFS UID domain:           HANAcluster.gpfsnode01
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    gpfsnode01
  Secondary server:  gpfsnode02

 Node  Daemon node name  IP address      Admin node name  Designation
----------------------------------------------------------------------
   1   gpfsnode01        192.168.10.101  gpfsnode01       quorum
   2   gpfsnode02        192.168.10.102  gpfsnode02       quorum
   3   gpfsnode99        192.168.10.253  gpfsnode99       quorum

10.1.11.1 List the IBM GPFS Disks Check the disks in the cluster. There are two devices on each of the NSD servers and no data disk on the quorum node. The listing of the command mmlsdisk sapmntdata -L shows that there is one disk per failure group which contains a file system descriptor. This ensures that a quorum may be reached if a node fails.

disk name       driver type  sector size  failure group  holds metadata  holds data  status  availability  disk id  pool    remarks
data01node01    nsd          512          1001           yes             yes         ready   up            1        system  desc
data02node01    nsd          512          1001           yes             yes         ready   up            2        system
data01node02    nsd          512          1002           yes             yes         ready   up            3        system  desc
data02node02    nsd          512          1002           yes             yes         ready   up            4        system
quorum01node99  nsd          512          1003           no              no          ready   up            5        system  desc
Number of quorum disks: 3
Read quorum value: 2
Write quorum value: 2

10.1.12 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana.

10.2 Single Node with stretched HA Installation

This solution is designed to provide improved high-availability capabilities for a single node SAP HANA installation. It can be applied to any SAP HANA configuration size. There is one active SAP HANA instance running on the primary node, and database data gets replicated by IBM GPFS to the secondary node. The secondary node is running in hot-standby, ready to take over operation if the primary node experiences any failure. In such a 1+1 stretched HA scenario the secondary node is usually located at a distance from the primary node, for example in a different fire compartment zone or at the other end of the campus. Depending on distances it can also be on a different campus in the same city. No non-production SAP HANA instance is allowed to run in this scenario. Because of the importance of the quorum node it is recommended to place it at a third site. We understand, however, that this is not always feasible. This leads to the following two designs. In the first design (figure 34: Single Node with stretched HA - Two Site Approach on page 112) the quorum node is placed at the primary site. This ensures that IBM GPFS on the primary site node stays up and running even if the link to the DR-site node gets interrupted.


[Diagram: two-site layout. The worker node and the quorum node reside at the primary site, the standby node at site B; G8264 switches joined by an Inter-Switch Link (ISL) carry the GPFS links and SAP HANA links between the sites.]

Figure 34: Single Node with stretched HA - Two Site Approach

The second approach places the quorum node at a third site. The network architecture can be seen in figure 35: Single Node with stretched HA - Three Site Approach on page 112.

[Diagram: three-site layout. The worker node resides at the primary site, the standby node at site B and the quorum node at site C; G8264 switches joined by an Inter-Switch Link (ISL) carry the GPFS links and SAP HANA links.]

Figure 35: Single Node with stretched HA - Three Site Approach

10.2.1 Installation and configuration of SLES and IBM GPFS

This scenario must be installed like a conventional 1+1 HA scenario as shown above in 10.1.1: Installation of SAP HANA appliance single node with HA on page 104. The major difference is the network setup. It can be either routed or switched, depending on the client's environment (in conventional 1+1 HA scenarios there is only one IBM-provided switch between the hops). Usually, clients have different types of links spanning the two sites and they use different network equipment technologies. The client is allowed to use its own network equipment (i.e. switches) on the secondary site. Ensure that the separation of network interfaces is kept across both nodes, with distinct switches or VLANs (Virtual Local Area Networks) for each IBM GPFS and HANA network port per node; this guarantees the high availability of the solution. The file system layout is shown in Figure 36: File System Layout - Single Node stretched HA on page 113.

[Diagram: shared file system spanning node1 and node2, each holding data and metadata (first and second replica) on HDD plus a file system descriptor (FS Desc); node3 (quorum) holds a file system descriptor only; failure groups FG1, FG2 and FG3; the OS resides on sda1/sda2 of each node.]

Figure 36: File System Layout - Single Node stretched HA

10.2.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana.

10.3 Single Node with DR Installation

This solution is designed to provide disaster recovery capabilities for a single node SAP HANA installation. It can be applied to any SAP HANA machine size. There is one active SAP HANA instance running on the primary site node, and a standby node on the backup site is ready to take over operation in case of a disaster. The difference between a single node with stretched HA and a single node with DR installation is that automatic failover is sacrificed for the possibility to run a non-production SAP HANA instance on the DR-site node. Otherwise, the two setups are identical. The setup of this solution is a manual process after SLES has been installed. Because of the importance of the quorum node it is recommended to place it at a third site. We understand, however, that this is not always feasible. This leads to the following two designs. In the first design (figure 37: Single Node with Disaster Recovery - Two Site Approach on page 114) the quorum node is placed at the primary site. This ensures that IBM GPFS on the primary site node stays up and running even if the link to the DR-site node gets interrupted.

[Diagram: two-site layout. The worker node and the quorum node reside at the primary site, the DR node at site B with a storage expansion for a non-production DB instance; G8264 switches joined by an Inter-Switch Link (ISL) carry the GPFS links and SAP HANA links between the sites.]

Figure 37: Single Node with Disaster Recovery - Two Site Approach

The second approach places the quorum node at a third site. The network architecture can be seen in figure 38: Single Node with Disaster Recovery - Three Site Approach on page 114.

[Diagram: three-site layout. The worker node resides at the primary site, the standby node at site B and the quorum node at site C; G8264 switches joined by an Inter-Switch Link (ISL) carry the GPFS links and SAP HANA links.]

Figure 38: Single Node with Disaster Recovery - Three Site Approach


10.3.1 Installation and configuration of SLES and IBM GPFS

This scenario has to be installed in the exact same way as described in 10.1.1: Installation of SAP HANA appliance single node with HA on page 104. IBM GPFS replicates data to the backup site node. The difference is in the configuration of SAP HANA.

10.3.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up the additional disk drives. The overall file system architecture is illustrated in figure 39: File System Layout - Single Node with DR with Storage Expansion on page 115.

[Diagram: as in the single node HA layout, the shared file system spans node1 and node2 with data, metadata and file system descriptors (failure groups FG1, FG2, FG3) and node3 acting as quorum node; in addition, an M5120-attached storage expansion on the DR-site node carries a second file system for the non-production instance.]

Figure 39: File System Layout - Single Node with DR with Storage Expansion

10.4 Single Node with HA and DR Installation

This solution is designed to provide the maximum level of redundancy for a single node SAP HANA installation. It can be applied to any SAP HANA configuration size. High availability concepts ensure that the database stays up if the primary node has an issue. Disaster recovery concepts ensure that the database stays up if the first two SAP HANA nodes (residing in the primary customer data center) become unavailable. Figure 40: Single Node with HADR using IBM GPFS Storage Replication on page 116 illustrates the overall architecture of the solution.

[Diagram: the worker node and the standby node reside at the primary site, the DR node at site B with a storage expansion for a non-production DB instance; G8264 switches joined by an Inter-Switch Link (ISL) carry the GPFS links and SAP HANA links.]

Figure 40: Single Node with HADR using IBM GPFS Storage Replication

10.4.1 Installation and configuration of SLES and IBM GPFS

Install the latest supported IBM Systems Solution for SAP HANA on all three nodes by using the latest supported SLES for SAP Applications DVD and the latest non-OS component DVD. The procedure is similar to the one described in Installation of SAP HANA appliance single node with HA. The final file system layout is shown in figure 41 on page 117.


[Diagram: shared file system spanning node1, node2 and node3, each holding data and metadata (first, second and third replica) on HDD plus a file system descriptor (FS Desc); failure groups FG1, FG2 and FG3; the OS resides on sda1/sda2 of each node.]

Figure 41: File System Layout - Single Node HADR

To begin the installation, you need to install both IBM Workload Optimized Systems using the steps at the beginning of chapter 6: Guided Install of the Lenovo Solution on page 41. Configure the network interfaces (internal and external) and the NTP server(s) as described there. The IP addresses can be in different subnets as long as proper routing between the subnets is in place. Make sure that all three SAP HANA nodes can ping each other on all interfaces.
1. Start the text-based installer as follows on each of the two nodes:

saphana-setup-saphana.sh -H

The switch -H prevents SAP HANA from being installed automatically; this needs to be done manually later. Refer to the steps as stated in section 6.6.2.2: Cluster Installation on page 64 together with the steps described below.
2. Select Cluster (worker). This performs a basic installation as a cluster node.
3. Start the installer again as above with the option -H, this time only on the future master node. This time select Cluster (Master). Enter the details for SID, instance ID and a HANA password. Enter 3 as the number of nodes and 1 as the number of standby nodes (this does not matter, as it would be used only for HANA, which is not installed automatically anyway). Ensure that the IP addresses for the IBM GPFS and HANA networks are correct. Accept the IBM GPFS license and wait for the installation process to complete successfully. See again section 6.6.2.2: Cluster Installation on page 64.
4. Change the replication level for the IBM GPFS file system:


mmchfs sapmntdata -m 3 -r 3

5. Check the replication levels that have been set:

mmlsfs sapmntdata
...
 -m  3  Default number of metadata replicas
 -M  3  Maximum number of metadata replicas
 -r  3  Default number of data replicas
 -R  3  Maximum number of data replicas
...

6. Restripe the data on the IBM GPFS filesystem so that all data has the required three replicas:

mmrestripefs sapmntdata -R

7. Set the following IBM GPFS configuration parameters:

mmchconfig unmountOnDiskFail=meta
mmchconfig panicOnDiskFail=meta

8. Adjust the quotas on the file system. The log quota is set to 1 TB regardless of memory size.

mmsetquota -j hanalog -h 1024G -s 1024G /dev/sapmntdata

The data quota for this HADR scenario is set to 9 * RAM. In case of a 1 TB server this means a quota of 9 TB.

mmsetquota -j hanadata -h 9216G -s 9216G /dev/sapmntdata

Allocate the remaining space to the HANA shared fileset (hanashared) and execute mmsetquota accordingly:

mmsetquota -j hanashared -h <remaining size>G -s <remaining size>G /dev/sapmntdata
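As a hypothetical worked example (the file system size is an assumption): on a 16 TiB file system with the quotas above, the remainder for hanashared would be 16384 GiB - 1024 GiB (log) - 9216 GiB (data) = 6144 GiB:

mmsetquota -j hanashared -h 6144G -s 6144G /dev/sapmntdata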

9. Install SAP HANA in a similar way as described in section 8.4.5: SAP HANA appliance installation on page 79.

10.4.2 Optional: Expansion Storage Setup for Non-Production Instance

This solution supports the additional use of the DR-site node to host a non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up the additional disk drives. The overall file system architecture is illustrated in figure 42: File System Layout - Single Node HADR with Storage Expansion on page 119.


[Diagram: as in the single node HADR layout, the shared file system spans node1, node2 and node3 with data, metadata and file system descriptors in failure groups FG1, FG2 and FG3; in addition, an M5120-attached storage expansion on the DR-site node carries a second file system for the non-production instance.]

Figure 42: File System Layout - Single Node HADR with Storage Expansion

10.5 Single Node DR Installation with SAP HANA System Replication

This solution provides redundancy at the application layer. It can be applied to any SAP HANA configuration size. For details, see the official SAP HANA documentation on http://help.sap.com/hana. There are two ways to design the network for such a DR solution based on System Replication. As the IBM GPFS interfaces on the DR-site node are not connected to the primary site, a set of redundant switches is optional. This leads to one architecture with switches and one architecture without switches between the SAP HANA nodes. Figure 43: Single Node DR with SAP System Replication on page 120 shows the solution with switches.


[Diagram: the worker node at the primary site and the DR node at site B (with a storage expansion for a non-production DB instance) are connected through G8264 switches joined by an Inter-Switch Link (ISL); only SAP HANA links span the sites.]

Figure 43: Single Node DR with SAP System Replication

Because the two SAP HANA nodes do not use their IBM GPFS network interfaces, you can also opt for a solution without intermediate network switches. In this case you connect the two 10 Gbit interfaces used for SAP HANA communication on the two nodes directly, without an intermediate switch. This architecture is illustrated in figure 44: Single Node DR with SAP System Replication on page 120.

[Diagram: the worker node at the primary site and the DR node at site B (with a storage expansion for a non-production DB instance) are connected directly by SAP HANA links, without intermediate switches.]

Figure 44: Single Node DR with SAP System Replication

10.5.1 Installation and configuration of SLES and IBM GPFS

Each site is considered to be a single node, as far as SLES and IBM GPFS are concerned. The final file system layout can be seen in figure 45: File System Layout of Single Node DR with SAP System Replication on page 121.


[Diagram: two independent GPFS clusters. GPFS cluster A on node1 holds file system A and GPFS cluster B on node2 holds file system B; each file system carries data, metadata and a file system descriptor in its own failure group (FG1), with the OS on sda1/sda2 of each node; SAP HANA System Replication connects the two nodes.]

Figure 45: File System Layout of Single Node DR with SAP System Replication

Perform a single node installation on both nodes as described in 6.6.2.1: Single Node Installation on page 63 but start the installer with the -H option:

saphana-setup-saphana.sh -H

In the option list select Single Node. The switch -H prevents SAP HANA from being installed automatically; this needs to be done manually later. Data replication will be taken care of at the SAP HANA application level. Replication can happen synchronously or asynchronously. Configure the network connection for SAP HANA and ensure the connectivity.

10.5.2 Installation of SAP HANA

Please refer to the official SAP documentation available here: http://help.sap.com/hana_appliance. The location of the SAP HANA installation files is /var/tmp/saphana.

10.5.3 Optional: Expansion Storage Setup for Non-Production Instance

This setup supports the additional use of the DR-site node to host a non-production SAP HANA instance. The layout of the two file systems (production and non-production) is illustrated in figure 46: File System Layout of Single Node DR with SAP System Replication with Storage Expansion on page 122.


[Diagram: as in figure 45, GPFS cluster A on node1 holds file system A and GPFS cluster B on node2 holds file system B, connected by SAP HANA System Replication; in addition, an M5120-attached storage expansion on node2 carries a second file system for the non-production instance.]

Figure 46: File System Layout of Single Node DR with SAP System Replication with Storage Expansion

On the remote site node (receiving the replication data from the primary SAP HANA instance) you will have two file systems configured. The primary file system spans local disks only and is to be configured in the exact same way as the primary site file system. This file system will host the replicated data coming in from the active production SAP HANA instance. The second file system only consists of storage expansion box drives attached to the remote site node. This file system will host the data of the non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up these additional disk drives.

10.6 Single Node with HA using IBM GPFS Storage Replication and DR using System Replication

This approach also provides maximum redundancy for single node SAP HANA installations. We use the term 1+1/1 to describe this style of single node installation. It can be applied to any SAP HANA configuration size. 1+1/1 uses the IBM GPFS storage replication feature and the SAP HANA System Replication feature. For HA (1+1) it uses IBM GPFS storage replication: the active and the standby node are in the same IBM GPFS cluster and have access to the same file system. Whenever the active node writes data to disk, IBM GPFS replicates it to the standby node. In addition, SAP HANA System Replication transfers data to a DR node on a remote site. In case of a disaster in the primary site data center the DR node can be used to host SAP HANA. SAP HANA System Replication can run either in synchronous or in asynchronous replication mode. The DR node forms a separate IBM GPFS cluster consisting of just itself. It has its own file system on local disks. There is no logical connection to the primary site IBM GPFS cluster. As a consequence, the IBM GPFS network adapter on the DR node is to be left unconnected. This leads to two possible network architectures. The first one provides redundant switches on both sites. Figure 47: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication on page 123 shows this design.

[Diagram: the worker node, standby node and quorum node reside at the primary site, the DR node at site B with a storage expansion for a non-production DB instance; G8264 switches joined by an Inter-Switch Link (ISL) carry GPFS links within the primary site and SAP HANA links between the sites.]

Figure 47: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication

The second architecture drops the switches on the DR site and instead connects the only required network interfaces (the 10 Gbit connection for SAP HANA communication) directly to the primary site switches. This is illustrated in figure 48: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication without remote site Switches on page 124.


[Diagram: same layout as figure 47, but without switches at the remote site; the SAP HANA interfaces of the DR node connect directly to the primary site switches.]

Figure 48: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication without remote site Switches

10.6.1 Installation and configuration of SLES and IBM GPFS

The two nodes on the primary site are to be installed in the exact same way as a 1+1 HA environment, described in 10.1.1: Installation of SAP HANA appliance single node with HA on page 104. There is one IBM GPFS cluster and one file system spanning both nodes, with IBM GPFS taking care of replicating the data to the standby node (r=2, m=2). To install the DR node, follow all steps of a standard SAP HANA single node installation apart from installing SAP HANA itself (use the -H option). Please refer to 10.5: Single Node DR Installation with SAP HANA System Replication on page 119 for details. The OS and IBM GPFS have no logical dependency on the primary site node; this will be achieved at application level with SAP HANA in the next step. The final file system layout is shown in figure 49: File System of Single Node with HA and DR with System Replication on page 125, illustrating the use of the two technologies, IBM GPFS storage replication and SAP HANA system replication.


[Diagram: GPFS cluster A spans node1, node2 and the quorum node, holding shared file system A with data, metadata and file system descriptors in failure groups FG1, FG2 and FG3 (the quorum node holds a descriptor only); GPFS cluster B consists of node3 with its own file system B (FG1); SAP HANA System Replication connects cluster A to node3; the OS resides on sda1/sda2 of each node.]

Figure 49: File System of Single Node with HA and DR with System Replication

10.6.2 Installation of SAP HANA

Install two separate instances of SAP HANA, one at each site. For the primary site please follow the corresponding steps for a clustered HA installation. On the DR node you have to follow all steps of a standard SAP HANA single node installation. This includes installing all components of SAP HANA and making sure that it runs self-contained. You then have to follow the official SAP HANA documentation to enable SAP HANA System Replication between the instance on the primary site node and the instance on the DR node.


[Diagram: same layout as figure 49, with an additional M5120-attached storage expansion on node3 carrying a second file system for the non-production instance.]

Figure 50: File System of Single Node with HA and DR with System Replication and Storage Expansion

10.7 Expansion Storage Setup for Non-productive SAP HANA Instance

This section describes how to set up the disks in an expansion storage enclosure that hosts a non-productive SAP HANA instance. Expansion storage is supported in environments where the nodes at a DR site would otherwise be idle. Depending on the memory size of the nodes you have a different number of drives in the expansion. Create as many (8+P) RAID5 arrays as possible and declare the remaining drives as hot spares. For details on how to use the RAID configuration utility see 6.2.1: Storage Configuration – RAID Setup on page 48. Each RAID5 device will be given to IBM GPFS as an NSD. Collect the device names of all newly created virtual drives, then create NSDs on them according to the following rules:
1. all NSDs are dataAndMetadata
2. all NSDs go into the system pool
3. the naming scheme is extXXnodeYY, with XX being the two-digit drive number and YY the node number
4. use one single failure group for all expansion box drives and make sure it is unique within your cluster
Store a disk descriptor file similar to the following as /tmp/nsdlistexp.txt:

%nsd: device=/dev/sdd
  nsd=ext01node02
  servers=gpfsnode02
  usage=dataAndMetadata
  failureGroup=2
  pool=system
%nsd: device=/dev/sde
  nsd=ext02node02
  servers=gpfsnode02
  usage=dataAndMetadata
  failureGroup=2
  pool=system
%pool:
  pool=system
  blockSize=1M
  usage=dataAndMetadata
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=1

Create the NSDs:

mmcrnsd -F /tmp/nsdlistexp.txt

Create the file system:

mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 1M -N 3000000 -v no -m 1 -M 2 -r 1 -R 2 -s failureGroupRoundRobin -T /sapmntext

Mount the file system on the backup site node:

mmmount sapmntext
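To verify that the new file system is mounted, you can query IBM GPFS for the mount state:

mmlsmount sapmntext -L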

If your client has a storage expansion connected to both nodes, primary site and backup site, then you need to apply the above procedure twice, once for each node. Each expansion box file system is to be handled separately. Do not create a single file system that spans both expansion box disks! This scenario is used if both data centers (and thus both nodes) are to be considered equal and you want to be able to run production SAP HANA out of both data centers. In this case non-production SAP HANA instances must also be able to run on both nodes, hence the need for a dedicated /sapmntext file system on both sides.


11 Virtualization

The Lenovo Solution can be installed inside a VMware virtual machine starting with Support Package Stack (SPS) 05. Currently SAP supports the following virtualization solutions:
• VMware vSphere 5.1 and SAP HANA SPS05 (or later releases) for non-production use cases
• VMware vSphere 5.5 and SAP HANA SPS07 (or later releases) for production and non-production use cases.
For non-production use multiple virtual machines may be deployed. For production use only single node installations are supported. See SAP Note 1788665 (SAP HANA Support for VMware Virtualized Environments). For the VMware vSphere configuration please see SAP Note 1122388 (Linux: VMware vSphere configuration guidelines).
Attention: For Lenovo servers with the Intel Haswell EX processor the minimum supported version of VMware vSphere is 5.5U2.
The sizing of a virtual machine has to be done according to the existing SAP HANA sizing guidelines for single node installations. The CPU/RAM ratio has to be met. In general SAP HANA virtualized with VMware vSphere is sized the same as non-virtualized SAP HANA deployments. In other words, for sizing the virtual machine (VM) the CPU/memory ratio as used for bare-metal sizing is taken into account to ensure locality of memory access on the underlying hardware resources.

Name  vCPUs  Virtual memory (GB)  Ratio  HDD for OS (GB)  Total HDD for GPFS (GB)
VM1   10     64                   1      128              416
VM2   20     128                  2      128              736
VM3   30     192                  3      128              1056
VM4   40     256                  4      128              1376
VM5   50     320                  5      128              1696
VM6   60     384                  6      128              2016

Table 48: SAP HANA Virtual Machine Sizes by Lenovo

This document covers the installation of one VM on 2 or 4 socket System x3850 X6 Workload Optimized solutions. The installation on 8 socket System x3950 X6 systems is not supported. For installation of multiple VMs on System x3850 X6 machines please consult the SAP documentation.

11.1 Getting Started

11.1.1 Memory Overhead

CPU and memory overcommitment is not allowed in virtual HANA environments. For this reason memory has to be set aside for the ESXi hypervisor to run and manage the virtual machines. A very conservative estimate for the amount of memory that needs to remain unassigned to the SAP HANA virtual machines for this overhead is 3 to 4 percent. For example, on a system having 1 TB of RAM, approximately 30 to 40 GB would need to be left unassigned to the virtual machines.


In a system with 1 TB of RAM a single VM6 machine with 384 GB RAM could be installed, leaving the rest of the system unused. Even two VM6 machines would still leave enough unassigned memory for the hypervisor and the virtual machine memory overhead.

11.1.2 Configure UEFI

Apply the UEFI configuration as described in section 6: Guided Install of the Lenovo Solution on page 41.

11.1.3 Start Embedded VMware ESXi Hypervisor

The VMware ESXi 5.5 hypervisor is to be installed on a USB pen drive. The drive is located at an internal USB port in the server, which prevents unintended removal of the USB pen drive. Boot the server with the USB pen drive attached. Enter the BIOS and select Boot Manager, then Boot from embedded hypervisor. VMware ESXi 5.5 does not boot from a USB drive when the BIOS is in legacy mode; it must be in UEFI mode.

11.1.4 Configure Management Network of ESXi Hypervisor

To be able to connect to the ESXi hypervisor you have to configure the management network. By default ESXi connects to the first available network adapter via DHCP, which is not always desired.
1. At the direct console of the ESXi host, press F2 and provide credentials when prompted.

Figure 51: login to ESXi

2. Scroll to Configure Management Network and press <Enter>.


Figure 52: configure management network

3. In the first row you see Network Adapters.

Figure 53: display network adapters


Figure 54: display network adapters 2

4. Scroll to IP Configuration and press <Enter>.

Figure 55: IP configuration

5. Set the IP address, subnet mask and default gateway.


Figure 56: Set IP,NETMASK,GW

6. Press <Enter> to confirm, then scroll to DNS Configuration and press <Enter>.
7. Set the primary and secondary DNS servers and the hostname, then press <Enter>.

Figure 57: Set DNS and Hostname

8. Scroll to Custom DNS Suffix and press <Enter>.
9. Set the DNS suffix.


Figure 58: Set DNS suffix

11.1.5 Enable SSH on VMware ESXi Hypervisor

By default, remote command execution is disabled on an ESXi host, and you cannot log in to the host using a remote shell. You can enable remote command execution from the direct console or from the vSphere Client. To enable SSH access in the direct console:
1. At the direct console of the ESXi host, press F2 and provide credentials when prompted.
2. Scroll to Troubleshooting Options and press <Enter>.
3. Choose "Enable SSH" and press <Enter>. On the left, "Enable SSH" changes to "Disable SSH". On the right, "SSH is Disabled" changes to "SSH is Enabled".
4. Press Esc until you return to the main direct console screen.

11.1.6 StorCLI on VMware ESXi 5.5

To be able to use the storage on an X6 machine you have to configure the RAID adapters. You can install the StorCLI tool directly under VMware ESXi 5.5. As a prerequisite, SSH has to be enabled on the VMware ESXi 5.5 host. You can download the latest StorCLI version from http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5092951. Copy the files to the VMware ESXi host via SCP.

11.1.6.1 Installation Follow these steps to install the StorCLI utility:
1. Unzip the downloaded file and change into the support directory.
2. Copy the VIB to the ESXi server. The file can be placed anywhere it is accessible to the ESXi console shell; in the following examples the file is located in /tmp.
3. Issue the following command:


esxcli software vib install -v=/tmp/<vib-file> --no-sig-check

4. Disable the native driver for megaraid_sas:

esxcfg-module -d <module name>

5. A reboot is required to apply the configuration changes.

11.1.6.2 Configure RAID and CacheCade with StorCLI You must configure the RAID setup and the integration of the CacheCade VDs before you format the disks. To see if StorCLI works correctly, run the following command:

storcli show all

You should see a list of the installed RAID adapters and an overview. Counting of the adapters starts with 0. To see the setup of the first adapter, use:

storcli /c0 show

If you do not see any adapter although there is at least one installed, you must change the SCSI driver in VMware ESXi. Using the list below, decide for every controller in the machine which RAID levels and which number of RAID VDs you have to configure:
• 6 HDDs: 1 RAID5
• 9 HDDs: 1 RAID5
• 10 HDDs: 1 RAID6
• 18 HDDs: 2 RAID5
• 20 HDDs: 2 RAID6
Create a RAID5 array, where 252:0-7 is an example list of the drives used, /cX is the controller and rX is the RAID level:

storcli /c0 add vd type=r5 drives=252:0-7 wb ra cached strip=64 cachevd

All SSDs on a controller are used to create the CacheCade VD; there can be only one CacheCade VD per controller. Create the CacheCade VD, where 252:8-9 is an example list of the SSDs used and /cX is the controller. The parameter assignvds=X needs the VD ID of the RAID array created before. If you created two RAIDs on the controller, you can specify assignvds=X,Y.

storcli /c0 add vd cc type=r1 drives=252:8-9 wb assignvds=1

Adjust the settings of the CacheCade VD, where /c0 is the RAID controller and /vX is the ID of the newly created CacheCade VD.

storcli /c0/v1 set rdcache=ra iopolicy=cached wrcache=wb
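To double-check the resulting configuration, you can list all virtual drives on the controller together with their cache settings:

storcli /c0/vall show all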


11.1.7 Setting up ESXi Storage in CLI

Since the ESXi hypervisor runs on standard System x HANA hardware, there is no external storage attached. Open an SSH session on the ESXi hypervisor. To list the installed storage devices execute:

esxcli storage core device list

To list all filesystems known to the ESXi hypervisor, call:

esxcli storage filesystem list

Figure 59: ESXi 5.x filesystems on a System x3850 X6. The VFAT filesystems belong to the USB device

Create a VMFS5 filesystem on a partition. Example VMFS5 creation on a System x3850 X6:

vmkfstools --createfs vmfs5 --setfsname "hana 38 - HDD" /vmfs/devices/disks/naa.600605b0038acb6018f17abe32a77168

This creates a VMFS5 filesystem on the CacheCade-accelerated RAID5. The device names will vary on your setup. Repeat the steps with every disk in the system you want to use for VMware.
Attention: These steps delete all data on the disks. Create a backup if necessary.

11.1.8 Setting up vSwitches

Virtual switches are the core of VMware vSphere networking. vSphere supports two types of switches: the standard switch (VSS) and the distributed switch (VDS). The latter is needed for vMotion; however, since vMotion is not supported in this solution, we only describe standard switches. Virtual switches are necessary for the virtual machines to connect to each other or to the outside world; the VMs reach the physical adapters through vSwitches. The communication uplink, i.e. the network interface to which you assigned the IP address for the management network, is always vSwitch0. A newly created standard vSwitch has no connection to a physical interface by default; this can be useful if you want to have an isolated VM.

## adding switches
esxcli network vswitch standard add --vswitch-name=vSwitchGPFS --ports=24
esxcli network vswitch standard add --vswitch-name=vSwitchHANA --ports=24
esxcli network vswitch standard add --vswitch-name=vSwitchKOM --ports=24

# changing MTU


esxcli network vswitch standard set --mtu=9000 --vswitch-name=vSwitchGPFS
esxcli network vswitch standard set --mtu=9000 --vswitch-name=vSwitchHANA

# adding portgroups
esxcli network vswitch standard portgroup add --portgroup-name=GPFS_Network --vswitch-name=vSwitchGPFS
esxcli network vswitch standard portgroup add --portgroup-name=HANA_Network --vswitch-name=vSwitchHANA
esxcli network vswitch standard portgroup add --portgroup-name=KOM_Network --vswitch-name=vSwitchKOM

11.1.9 Setting up NIC Bonding (Teaming)

Teaming must be set up on the ESXi hypervisor; setting up teaming inside the VMs is useless. Teaming here is always HA (failover) teaming. To set up teaming you add a NIC to a vswitch.

esxcli network vswitch standard uplink add -u <vmnicX> -v <vswitch_name>
esxcli network vswitch standard policy failover get -v TEAM_network
esxcli network vswitch standard policy failover set -l <policy> -v vswitchX

To see and set the failover policy of a vswitch you need the following commands

esxcli network vswitch standard policy failover get -v vswitchX
esxcli network vswitch standard policy failover set -l <policy> -v vswitchX
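As an illustration (the uplink and vswitch names are examples, not fixed values), making vmnic2 the active and vmnic3 the standby uplink of vSwitchGPFS could look like this:

esxcli network vswitch standard policy failover set --active-uplinks=vmnic2 --standby-uplinks=vmnic3 --vswitch-name=vSwitchGPFS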

11.1.10 Setting Storage for SLES and HANA ISO

There are two ways to provide the needed ISOs for the virtual machines: an NFS storage mounted from an external source, or a datastore on the server.

11.1.10.1 Setting up NFS datastore It is easier to store the SLES for SAP 11 and the non-OS components ISOs on a separate filesystem and mount it via NFS on the ESXi hypervisor. To create an NFS mount, log in to the hypervisor via SSH and execute:

esxcli storage nfs add --host=<remote_host> --share=<remote_share> --volume-name=<create_volume_name>

To see the mounted NFS volumes execute:

esxcli storage nfs list
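A filled-in example, with a hypothetical NFS server nfs01.example.com exporting /export/iso, might look like this:

esxcli storage nfs add --host=nfs01.example.com --share=/export/iso --volume-name=ISO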

11.1.10.2 Setting up a local datastore A datastore is a directory on the ESXi hypervisor into which you copy the SLES and non-OS component ISOs. Therefore the filesystems must be created first. Connect via SSH to the ESXi hypervisor. All mounted volumes are available at /vmfs/volumes.


Figure 60: ESXi5.5 Storage Path

Create a datastore named ISO on a VMFS5 volume (see the sketch below):
• Change to a VMFS5 volume (cd).
• Create a datastore directory (mkdir ISO).
• Copy the SLES and non-OS components ISOs to the datastore via SCP.
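A short sketch of these steps; datastore1 and the ISO file name are examples only:

cd /vmfs/volumes/datastore1        # change to a VMFS5 volume
mkdir ISO                          # create the datastore directory
# then, from the host that holds the ISOs:
scp SLES-for-SAP-11-SP3.iso root@<esxi_host>:/vmfs/volumes/datastore1/ISO/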

11.1.11 Restart VMware ESXi Hypervisor

To restart the ESXi 5.5 hypervisor press F12 at the ESXi prompt. You have to authenticate before you can actually restart the hypervisor.

11.1.12 Installing VMware vSphere Client

VMware vSphere Client is required to perform many of the tasks described in this document. Complete the following steps to install VMware vSphere Client on a suitable system in your network.
Note To avoid any unexpected behavior, it is strongly recommended that you use the VMware vSphere Client that matches the version of the SAP HANA system hardware's VMware ESXi 5 hypervisor. If you already have an appropriate version of the VMware vSphere Client installed, skip to the next section 11.2: Configuring and Starting VMs with vSphere Client on page 138.
1. Boot the system hardware to the VMware ESXi 5 hypervisor. The IP address of the VMware ESXi 5 hypervisor is displayed on the console.
Note If you have already added a host name to your DNS, you can use the host name instead of the IP address.
2. On the Microsoft Windows system where VMware vSphere Client will be installed, open a secure web connection (HTTPS) and enter the IP address of the VMware ESXi 5 hypervisor in the browser address bar. The VMware ESXi 5 welcome screen is displayed.
3. Download the vSphere Client and follow the on-screen instructions to install the client.
Note If a security warning window opens, click the Ignore button.


Figure 61: ESXi 5.1 WEB Welcome

Note VMware vCenter Server also provides a web based vSphere Client that can be used. Open a secure web connection (HTTPS) to the vCenter server at the address https://<vcenter_server>/vsphere-client/

11.2 Configuring and Starting VMs with vSphere Client

To configure and start the virtual machines, complete the following steps.
Note The illustrations in this document might differ slightly from what you see on your screen.
1. Log in to the VMware vSphere Client. Type the IP address or host name of the host system and your user name and password, then click the Login button.
(a) If a security warning window opens, ignore the warning and install the certificate.
(b) On a new server, you might also see a warning that there is no datastore; ignore this warning, too.
2. The virtual machine is created with the aid of the vCenter GUI. You can use the web GUI as well, if you prefer it.


Figure 62: Create new virtual machine

3. Choose Custom.

Figure 63: Choose custom configuration

4. Choose a name.


Figure 64: Choose a name

5. Choose a datastore for the VM files.

Figure 65: Choose disk storage for VM files

6. Choose the newest virtual machine version.


Figure 66: Newest virtual machine hardware version

(a) Windows Based Client (Version 8): If you use the VMware vSphere Microsoft Windows client, you will only be able to choose VM hardware version 6, 7, or 8. In order to run a virtual machine with more than 32 vCPUs, you must upgrade the VM hardware at the end or use vCenter's vSphere web client. See step 6b: Configuring and Starting VMs with vSphere Client on page 141 for more details on upgrading the version using the Windows client. (b) Web Based Client (Version 9)

Figure 67: Configure the use of more than 32 CPUs

7. Choose SUSE Linux Enterprise 11 (64-bit).


Figure 68: Choose Operating System

8. Choose the number of virtual CPUs according to table 48: SAP HANA Virtual Machine Sizes by Lenovo on page 128. It is important to note that if you are using the vSphere Microsoft Windows client, you will not be able to configure a virtual machine with more than 32 vCPUs until you upgrade the VM hardware. If you wish to create a virtual machine using more than 32 vCPUs, first select the maximum of 32 now and change it following the directions in step 19: Configuring and Starting VMs with vSphere Client on page 148.

Figure 69: Choose number of CPUs

9. Choose memory according to table 48: SAP HANA Virtual Machine Sizes by Lenovo on page 128.


Figure 70: Choose Memory

10. Select the network cards.

Figure 71: Choose Network Cards

11. Choose the SCSI controller.


Figure 72: Choose SCSI controller

12. Disk layout for virtual machines: two disks are needed per VM, one for the OS and one for GPFS. Please see table 48 for the required disk sizes. 13. Create a new virtual disk.

Figure 73: Create new HANA datastore

(a) Choose the OS size according to table 48: SAP HANA Virtual Machine Sizes by Lenovo on page 128.


Figure 74: Choose datastore size

(b) Choose a datastore for the OS.

Figure 75: Choose datastore

(c) Choose the correct SCSI node. The first virtual disk you create is assigned to "SCSI (0:0)", the second to "SCSI (0:1)", and so on.


Figure 76: Choose SCSI Node

(d) Finish the virtual drive creation. 14. Repeat steps 13 to 13d for the virtual disks for GPFS. Select Edit the virtual machine settings before completion to do this. In case your virtual machine requires a drive size larger than the capacity of a single available device, repeat steps 13 through 13d to include the total amount of storage across multiple devices. 15. Add a new CD/DVD device.

Figure 77: Add a new CD/DVD device

(a) Select Datastore ISO File.


Figure 78: Select ISO image

(b) Select Connect at power on. (c) Select Browse... and look for the SLES for SAP ISO on the NFS mounted datastore. You need two CD/DVD drives for the installation: one for the SLES DVD ISO and one for the non-OS components ISO. (d) Select IDE (0:0).

Figure 79: Select IDE device 0:0

(e) Finish the creation of the SLES for SAP DVD.


Figure 80: Finish creation of SLES ISO mount

16. Create the non-OS components DVD. Repeat step 15 for a second CD/DVD device and include the Lenovo HANA ISO. Both ISOs are best put into an NFS datastore that has been attached previously in the server settings of the VMware ESX server. 17. Change the boot options to Boot to BIOS at the next reboot. 18. Press OK to create the virtual machine. 19. Upgrading the Virtual Machine to VM Version 9 using the Windows client. If it is required to use more than 32 vCPUs in your SAP HANA virtual machine (sizes larger than 3 slots), you must use version 9 of the VMware virtual hardware. This is not possible during virtual machine creation using the Microsoft Windows client. After creating a virtual machine, right-click on the newly created virtual machine in the vSphere client and select "Upgrade Virtual Hardware". A pop-up will appear asking you to confirm the upgrade. Press "Yes" and continue.

Figure 81: Upgrade virtual hardware


Figure 82: Confirm upgrade

(a) Increasing the number of virtual CPUs for larger VMs. If you are installing a virtual machine larger than 3 slots, you will need to update the number of vCPUs required for this system. Right-click on the newly created virtual machine on the left-hand side of the vSphere client and select Edit Settings. (b) Select the CPUs. Select the CPUs and increase the number of virtual sockets and CPUs as required. We recommend using 10 CPUs per socket for the SAP HANA virtual machine.

Figure 83: Upgrade virtual hardware

20. Upgrading the VM to VM version 10 using the command line. This describes the upgrade of the virtual hardware, CPU, and RAM if a vCenter is not available. You may need to do this if you want to run the VM with large RAM, e.g. more than 256GB. To accomplish this, it is mandatory to have SSH access to the ESXi hypervisor enabled. Every virtual machine has a VMX file, which contains all configuration data for the machine. Usually it is named <VM_name>.vmx. You can find the VMX file for your VM with the command

~ # find . -name '*.vmx'

This will list all available VMs. Choose the one you need and change into its directory. Open the VMX file with an editor (e.g. vi). The VM has to be shut down to do this. Edit and change the following lines:

virtualHW.version = "10"
memsize = "<memory_in_MB>"
numvcpus = "<number_of_vCPUs>"

For the changes to take effect you must reload the VM

~ # vim-cmd vmsvc/reload <vmid>
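A sketch of the full sequence; the VM id 42 is an example that you must replace with the id reported for your VM:

~ # vim-cmd vmsvc/getallvms       # list VM ids, names, and VMX paths
~ # vim-cmd vmsvc/reload 42       # reload the edited VMX of that VM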


11.3 Operating System (SLES for SAP 11 SP3) Installation

After starting the virtual machine the installation prompt appears. Select the line "SLES for SAP Applications - Installation with external profile". Move the cursor to the boot options and change the autoyast parameter to autoyast=device://sr1/. See figure 84: Changing the autoyast parameter for installation on page 150.

Figure 84: Changing the autoyast parameter for installation

After that press Enter. The installation will continue automatically.

Note Please continue with the installation instructions in section 6.3: Phase 2 – SLES for SAP on page 53.

11.4 Operating System (Red Hat Enterprise Server 6.5 and 6.6) Installation

After starting the virtual machine the installation prompt appears. Press Tab to edit the kernel boot options and add "ks=cdrom://ks.cfg". See figure 85: Adding kickstart parameter for install on page 151.


Figure 85: Adding kickstart parameter for install

After that press Enter. The installation will continue automatically.

Note Please continue with the installation instructions in section 6.4: Phase 2 – RHEL on page 58, but also execute the steps in the following section.

11.4.1 Changes after Red Hat Installation

After the installation you need to log in as root and perform the following tasks: • Remove the file /etc/modprobe.d/bonding.conf. • Remove the files ifcfg-bond0, ifcfg-bond1, and ifcfg-eth3 in /etc/sysconfig/network-scripts. • Edit ifcfg-eth0 and remove the lines MASTER=bond0 and slave=yes. • The file ifcfg-eth0 should look like this:

DEVICE=eth0
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
IPADDR=[IPADDR of Server]
NETMASK=[netmask]
IPV6INIT=no

• The configuration for eth1 and eth2 is similar. Please keep in mind that eth1 is the GPFS network interface (gpfsnode01) and eth2 is the HANA network interface (hananode01). • Edit /etc/hosts and add the IP address and full name of your server, the IP and name of gpfsnode01, and the IP and name of hananode01. • Reboot the VM. • After the reboot continue with the installation as described in section 6.6: Phase 3 on page 62.


11.5 Tuning of Operating System and VM

11.5.1 Tuning of OS

After installation the IO scheduler should be NOOP. To check the scheduler for the running system run this command:

# cat /sys/block/sdb/queue/scheduler

This checks for drive sdb. If noop is not the scheduler, change the scheduler in the running system with this command

# echo noop > /sys/block/sdb/queue/scheduler

Harddisk IO tuning Read/write operations on the HDDs can be improved if you adjust the device level read ahead and increase the number of IO requests that get buffered:

echo 4096 > /sys/block/sdb/queue/read_ahead_kb
echo 4096 > /sys/block/sdb/queue/nr_requests

The values are not permanent and will be lost after a reboot. To make the changes permanent you have to add them to a boot script; see the sketch after the next listing. Increase the percentage of memory that can be filled with dirty pages. Be careful with this, because most of the memory is occupied by SAP HANA.

echo 5 > /proc/sys/vm/dirty_background_ratio
echo 10 > /proc/sys/vm/dirty_ratio
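One possible way to make the settings persistent, assuming SLES with /etc/init.d/boot.local (on RHEL, /etc/rc.d/rc.local would be the analogous place) and sdb as the data device:

#!/bin/sh
# re-apply IO tuning at boot; sdb is an example device name
echo noop > /sys/block/sdb/queue/scheduler
echo 4096 > /sys/block/sdb/queue/read_ahead_kb
echo 4096 > /sys/block/sdb/queue/nr_requests
echo 5 > /proc/sys/vm/dirty_background_ratio
echo 10 > /proc/sys/vm/dirty_ratio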

In order to optimize and increase the queue depth of the pvSCSI driver inside the Linux OS on which SAP HANA runs, add the Linux kernel boot options below to /boot/grub/menu.lst:

vmw_pvscsi.cmd_per_lun=1024
vmw_pvscsi.ring_pages=32

The complete kernel line in /boot/grub/menu.lst will look like this:

title Lenovo Systems Solution for SAP HANA
    root (hd0,1)
    kernel /boot/vmlinuz-3.0.76-0.11-default root=/dev/sda2 resume=/dev/sda3 splash=silent transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0 instmode=cd showopts vga=0x314 vmw_pvscsi.cmd_per_lun=1024 vmw_pvscsi.ring_pages=32
    initrd /boot/initrd-3.0.76-0.11-default

For low latency networking it is recommended to use the vmxnet3 network adapter and driver. The vmxnet3 driver is available after the installation of the VMware tools. This is mandatory.

11.5.2 Tuning of ESXi and VM

Parameters in the *.vmx file Memory preallocation. It is sensible to allocate all memory at boot time. This is done by setting the sched.mem.prealloc parameter to TRUE. It is mandatory to set the sched.mem.min parameter as well; if you do not, the VM will fail to start. Usually the sched.mem.min parameter equals the amount of memory in MB assigned to the VM


sched.mem.min = "xxx"
sched.mem.prealloc = "TRUE"
sched.swap.vmxSwapEnabled = "FALSE"
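For illustration, a VM with 512 GB of RAM would use 512 x 1024 = 524288 MB:

sched.mem.min = "524288"
sched.mem.prealloc = "TRUE"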

If the parameter sched.mem.prealloc is set, it takes a little longer for the VM to start. This is not a bug. All System x servers are multicore servers. This can cause latency issues if the needed memory segments are not in the near memory area. To resolve this NUMA (non-uniform memory access) problem VMware has developed sophisticated NUMA-aware schedulers. A VM with more than 8 vCPUs is considered a wide virtual machine. However, to reduce latency it may be sensible to bind a virtual machine to a CPU. These are the numa.* parameters in the *.vmx file of the VM. The numa.autosize.vcpu.maxPerVirtualNode parameter is set automatically if the number of vCPUs is more than 8.

numa.autosize.vcpu.maxPerVirtualNode = "20"
numa.autosize.cookie = "200001"
numa.nodeAffinity = "0"
numa.vcpu.preferHT = "TRUE"
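In addition, the latency sensitivity of the VM can be raised with a further *.vmx parameter: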

sched.cpu.latencySensitivity = "HIGH"

NIC Optimization For performance- and latency-sensitive VMs it is recommended to use the vmxnet3 vNIC driver. In the *.vmx configuration file of the VM, change the driver for the GPFS and HANA Ethernet cards to vmxnet3

ethernet1.virtualDev = "vmxnet3"
ethernet1.present = "TRUE"
ethernet1.networkName = "GPFS Network"

The VM must be powered off to be able to change the parameters. Leave the eth0 device at e1000. On the Linux side you have to install the VMware tools, because these provide the vmxnet3 kernel driver.
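The following hypervisor-side commands (run in an ESXi shell) disable interrupt throttling for the igb driver and let you inspect the installed NICs and the driver's current module parameters: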

esxcli system module parameters set -m igb -p "InterruptThrottleRate=0"
esxcli network nic list
esxcli system module parameters list -m igb


12 Upgrading the Hardware Configuration

Note Please note that this chapter may differ for special setups like DR. This chapter is about standard appliance configurations.
There are several possibilities to upgrade IBM appliances. You can either upgrade the RAM of your appliance (scale-up) or you can add servers to create or increase the size of a cluster (scale-out). Table 49: RAID array and RAID controller overview on page 155 lists the defined models according to number of CPUs, memory, and number of RAID arrays. An upgrade from the 4U chassis (x3850 X6) to the 8U chassis (x3950 X6) is possible, with some extra effort. Upgrades from 2 CPU sockets to 4, and from 4 to 8 sockets, are possible. Please note that changes to the PCI-e slot assignment (section 4.4: Card Placement on page 15) are required. When scaling out a stand-alone installation (single server) to a cluster without changing the RAM, it might be necessary to add additional storage to the servers. Please note the different lines for stand-alone and scale-out, which might list different numbers of RAID arrays. Additional storage can mean either adding 9 HDDs to an existing storage expansion, adding a new storage expansion, or (only for the 8U chassis) adding a second internal M5210 RAID controller. If your upgrade path requires new RAID controllers please follow the instructions in section 4.4: Card Placement on page 15.

12.1 Power Policy Configuration

Unless specified to manufacturing, systems shipped from the factory have default settings that may not meet customer desired settings. It is strongly recommended that during pre-installation setup, or after installing additional hardware options, the power policy and power management selections be checked to ensure that:
• sufficient power is available for the configuration, and
• the desired power redundancy and throttling settings have been selected.
Note Failure to properly set values can prevent the system from booting or log error events.
For more information on how to perform this task, refer to section 'Setting power supply power policy and system power configurations' of the System x3850 X6 and x3950 X6 Installation and Service Guide19.

12.2 Reboot Behavior

When installing or performing upgrades, the operator should expect multiple reboots during the POST process as the system performs the required configuration and setting changes. A lack of understanding of this reboot behavior could cause the operator to suspect bad or misbehaving hardware or firmware and interrupt the required process. Interrupting the process will increase the time needed to complete the installation and may require service, depending on what actions the operator has performed improperly. The number of reboots will vary depending upon the type (hardware vs. firmware) and number of changes. Firmware changes (primary bank, secondary bank, both, option) have the most effect, and the number of reboots may be as high as seven. The number and size of installed memory DIMMs affect the time between reboots, not the number of reboots.

19http://publib.boulder.ibm.com/infocenter/systemx/documentation/topic/com.ibm.sysx.3837.doc/nn1hu_ install_and_service_guide.pdf


Chassis    CPUs  Usage       Memory       IA*  EA**  M5120/M5225  Note
x3850 X6   2     Standalone  128-512GB    1    0     0
                 Scaleout    256GB        1    0     0            [1]
                             512GB        1    1     0            [1]
           4     Standalone  256-512GB    1    0     0
                             768-1024GB   1    1     1
                             1.5-2TB      1    1     1            [2]
                             3-4TB        1    2     1            [4]
                             6TB          1    3     2            [3]
                 Scaleout    512-1024GB   1    1     1
                             1.5TB        1    2     1            [4]
                             2TB          1    2     1            [4]
                             3TB          1    3     2            [3]
                             4TB          1    4     2            [3]
                             6TB          1    5     3            [3]
x3950 X6   4     Standalone  256-512GB    1    0     0
                             768-1024GB   2    0     0
                             1.5-2TB      2    0     0            [2]
                 Scaleout    512-1024GB   2    0     0
           8     Standalone  512GB        1    0     0
                             1-2TB        2    0     0
                             3-4TB        2    1     1            [2]
                             6TB          2    2     1            [2]
                             8TB          2    2     1            [3]
                             12TB         2    5     3            [3]
                 Scaleout    1TB          2    0     0
                             2TB          2    1     1
                             4TB          2    3     2            [4]
                             6TB          2    5     3            [2]
                             8TB          2    6     3            [3]
                             12TB         2    10    5            [3]

Table 49: RAID array and RAID controller overview
* IA = Number of RAID arrays on internal M5210 RAID controllers (excluding the RAID array for the OS).
** EA = Number of RAID arrays on external M5120/M5225 RAID controllers.
[1] = Up to 4 nodes only.
[2] = For Suite on HANA only, not for Datamart and BW.
[3] = Not approved with SAP HANA.
[4] = For non-productive use only under relaxed HW requirements.


Note Before adding or removing any hardware, remove AC power and wait for the LCD display and all Light Emitting Diodes (LEDs) to turn off. For more information on this topic and to see a reboot guideline chart, refer to RETAIN tip MIGR-509687320.

12.3 Adding storage

12.3.1 Adding storage via EXP2524

Depending on your upgrade path, you have the following options:
• Add 9 HDDs to an already attached EXP.
• Attach a new EXP to the server and insert 9 HDDs (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and 2 SSDs.
• Attach a new EXP to the server and insert 9 HDDs (for 1 RAID5) or 18 HDDs (for 2 RAID5s) and 2 SSDs, or install 2 additional SSDs into the 1st EXP for CacheCade RAID121.
Please note: you can also configure RAID6 on the EXPs; you then need 1 more HDD per RAID array, i.e. 10 or 20 HDDs per EXP.

Note All steps – except the installation of a new RAID controller – can be executed without downtime.
1. Install the M5120/M5225 in the server. (Skip this step when just adding storage to an existing EXP.)
2. Install the HDDs and SSDs in the EXP. (When just adding storage, you will only add HDDs and no SSDs.)
3. Connect the EXP to power and via SAS cable to the RAID controller. (Skip this step when just adding storage to an existing EXP.)
4. 12.3.3: Configure RAID array(s) on page 157.
5. 12.3.8: Configuring GPFS on page 159.

12.3.2 Adding storage on second internal M5210 controller

The second M5210 will be connected to 6 HDDs for a RAID5 and 2 SSDs for CacheCade.
1. Install the M5210 in the server.
2. Install the HDDs and SSDs.
3. 12.3.3: Configure RAID array(s) on page 157.
4. 12.3.8: Configuring GPFS on page 159.

20http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5096873 21For details on hardware configuration and setup see Operations Guide for X6 based models section CacheCade RAID1 Configuration


12.3.3 Configure RAID array(s)

Note Appliance version 1.8.80-12 (and later) comes with the tool saphana-raid-config.py. Use the following three commands instead of the manual configuration described in the next sections.
Execute this command to adjust the CacheCade settings: saphana-raid-config.py -c
Execute this command to configure the unconfigured HDDs into RAID arrays: saphana-raid-config.py -u
Execute this command to activate the CacheCade also on the newly created RAID arrays: saphana-raid-config.py -c
Now continue with 12.3.8: Configuring GPFS on page 159.
The command line tool storcli is installed on your appliance. It will be used to configure the RAIDs.
Note All commands were tested with storcli version 1.07.07. Other versions' syntax may vary.

Look in the output of storcli64 /call show for the controller with the unconfigured drives (UGood). The actual enclosure IDs (EID), slot numbers (Slt), and ID of the controller may vary in your setup.

:
Controller = 1
Status = Success
Description = None

Product Name = ServeRAID M5120
:
---------------------------------------------------------------------------
EID:Slt DID State DG        Size Intf Med SED PI SeSz Model          Sp
---------------------------------------------------------------------------
8:1      18 UGood  -  371.597 GB SAS  SSD N   Y  512B TXA2D20400GA6I U
8:2      19 UGood  -  371.597 GB SAS  SSD N   Y  512B TXA2D20400GA6I U
8:3       9 UGood  -    1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:4      10 UGood  -    1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:5      11 UGood  -    1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:6      12 UGood  -    1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:7      13 UGood  -    1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:8      14 UGood  -    1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:9      15 UGood  -    1.089 TB SAS  HDD N   Y  512B ST1200MM0007   U
8:10     16 UGood  -    1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
8:11     17 UGood  -    1.089 TB SAS  HDD N   Y  512B HUC101212CSS60 U
---------------------------------------------------------------------------
:

Create the RAID5, where 8:3-11 is an example list of the HDDs used. It follows the scheme <EID>:<slot range>. /c1 stands for controller 1.

storcli64 /c1 add vd type=raid5 drives=8:3-11 wb ra cached pdcache=off strip=64

If you have to configure a second RAID5 array, configure it accordingly.
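For illustration only (the slot numbers are hypothetical), a second array on slots 12 to 20 of the same enclosure would be created like this:

storcli64 /c1 add vd type=raid5 drives=8:12-20 wb ra cached pdcache=off strip=64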


12.3.4 Deciding for a CacheCade RAID Level

You can configure the CacheCade RAID arrays either with RAID1 or RAID0. Depending on the hardware setup you have to decide which RAID level to configure:
• 1 M5210: only RAID0
• 1 M5210 + 1 M5120/M5225 (with 2 SSDs): only RAID0
• 1 M5210 + 1 M5120/M5225 (with 4 SSDs): RAID0 or RAID1
• 1 M5210 + 2 or more M5120/M5225: RAID0 or RAID1
• 2 M5210: RAID0 or RAID1
Please keep in mind that all CacheCade VDs must have the same RAID level. This means that you have to recreate existing CacheCade arrays that have the wrong RAID level.

12.3.5 Configuring RAID array when CacheCade is not yet configured

Create the CacheCade device, where assignvds=X is the RAID 5 (with X as the Logical/Virtual Drive ID). If you created 2 RAID5 arrays, use assignvds=X,Y to assign the CacheCade VD to both arrays. 8:1-2 is an example list of SSDs used. To decide for the RAID level (raidX) see the previous section.

storcli64 /c1 add vd cachecade type=raidX drives=8:1-2 wb assignvds=0

Adjust settings of the CacheCade device, where /vX is the CacheCade VD (with X as the Logical/Virtual Drive ID):

storcli64 /c1/v1 set rdcache=ra iopolicy=cached wrcache=wb

12.3.6 Configuring RAID array with existing CacheCade

When you added storage to an existing EXP the CacheCade VD is already configured. Assign the CacheCade VD to the newly created RAID5 array, where /cX is the controller, and /vX the RAID5 array:

storcli64 /c1/v2 set ssdcaching=on

12.3.7 Changing the CacheCade RAID Level

To change the RAID level of an existing CacheCade VD you have to delete and recreate the CacheCade VD. At first, find the CacheCade VD ID and the slots of the SSDs. Use the following command, where /cX is the RAID controller.

storcli64 /c0 show

Now delete the CacheCade VD, where /cX is the RAID controller and /vX is the ID of the CacheCade VD.

storcli64 /c0/v1 delete cachecade


Create the deleted CacheCade again, where /cX is the RAID controller and drives=12:1-2 is an example list of SSD drives used.

storcli64 /c2 add vd cachecade type=raid1 drives=12:1-2 wb

Adjust the settings of the CacheCade VD, where /cX is the RAID controller and /vX is the ID of the newly created CacheCade VD.

storcli64 /c2/v1 set rdcache=ra iopolicy=cached wrcache=wb

12.3.8 Configuring GPFS

Find the block device that belongs to the newly created RAID array; mmlsnsd -X, lsscsi, and lsblk may be helpful. Find the name of the new NSD(s): for example, if you are on gpfsnode01, execute mmlsnsd | grep gpfsnode01 to find out which names are already in use for the existing NSDs. Create a stanza file (/var/mmfs/config/disk.list.data.gpfsnodeZZ.new) containing the information about the new GPFS NSD(s); see the example after the template below. Repeat this block for all newly created RAID arrays accordingly. ZZ is the node number (e.g. 01 in gpfsnode01).

%nsd: device=/dev/sdX
  nsd=dataYYnodeZZ
  servers=gpfsnodeZZ
  usage=dataAndMetadata
  failureGroup=10ZZ
  pool=system
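A filled-in example for a second data NSD on gpfsnode02 (device name and numbering are illustrative only):

%nsd: device=/dev/sdc
  nsd=data02node02
  servers=gpfsnode02
  usage=dataAndMetadata
  failureGroup=1002
  pool=system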

Execute

mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no
mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no

Attention The following command must only be executed on stand-alone configurations. Do not execute it in a cluster environment!
mmrestripefs sapmntdata -b
This will balance the data equally between the used and unused disks. Change the GPFS quotas to match the new requirements. Run the quota calculator and you will see a result like this:

# saphana-quota-calculator.sh
Please set the Shared quota to 8187 GB
Please set the Data quota to 3072 GB
Please set the Log quota to 1024 GB

Use the following command(s) to set the quota(s)
mmsetquota sapmntdata:hanadata --block 3072G:3072G
mmsetquota sapmntdata:hanalog --block 1024G:1024G
mmsetquota sapmntdata:hanashared --block 8187G:8187G


12.4 Adding memory

Note The installation of additional memory requires a system downtime.
When the customer decides for a scale-up, i.e. adding RAM to the server(s), you have to follow the memory DIMM placement rules for the X6 servers to get the best performance. The DIMMs must be placed equally over all CPU books – each CPU book must contain the same number of DIMMs in the same slots. Tables 50: x3850 X6 Memory DIMM Placement on page 160 and 51: x3950 X6 Memory DIMM Placement on page 160 show which slots must be populated for specific configurations. The number of memory DIMMs can be computed as "RAM size"/"DIMM size".
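For example (an illustrative calculation): a 2-socket x3850 X6 with 512 GB of RAM built from 16 GB DIMMs needs 512/16 = 32 DIMMs, i.e. 16 DIMMs per CPU book.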

                     2 Sockets             4 Sockets
DIMMs per server     8   16  24  32  48    16  32  48  64  96
DIMM Slots 9, 6      x   x   x   x   x     x   x   x   x   x
DIMM Slots 1, 10     x   x   x   x   x     x   x   x   x   x
DIMM Slots 15, 24        x   x   x   x         x   x   x   x
DIMM Slots 19, 16        x   x   x   x         x   x   x   x
DIMM Slots 8, 5              x   x   x             x   x   x
DIMM Slots 2, 11             x   x   x             x   x   x
DIMM Slots 14, 23                x   x                 x   x
DIMM Slots 20, 17                x   x                 x   x
DIMM Slots 7, 4                      x                     x
DIMM Slots 3, 12                     x                     x
DIMM Slots 13, 22                    x                     x
DIMM Slots 21, 18                    x                     x

Table 50: x3850 X6 Memory DIMM Placement

                     4 Sockets             8 Sockets
DIMMs per server     16  32  48  64  96    32  64  96  128 192
DIMM Slots 9, 6      x   x   x   x   x     x   x   x   x   x
DIMM Slots 1, 10     x   x   x   x   x     x   x   x   x   x
DIMM Slots 15, 24        x   x   x   x         x   x   x   x
DIMM Slots 19, 16        x   x   x   x         x   x   x   x
DIMM Slots 8, 5              x   x   x             x   x   x
DIMM Slots 2, 11             x   x   x             x   x   x
DIMM Slots 14, 23                x   x                 x   x
DIMM Slots 20, 17                x   x                 x   x
DIMM Slots 7, 4                      x                     x
DIMM Slots 3, 12                     x                     x
DIMM Slots 13, 22                    x                     x
DIMM Slots 21, 18                    x                     x

Table 51: x3950 X6 Memory DIMM Placement

After the installation of additional memory, SAP HANA's global allocation limit must be reconfigured.


12.5 Adding CPU Books

Note The installation of additional CPU books requires a system downtime.
The following upgrade paths are possible:
• x3850 X6, 2 sockets → x3850 X6, 4 sockets
• x3950 X6, 4 sockets → x3950 X6, 8 sockets
• x3850 X6, 4 sockets → x3950 X6, 8 sockets, including the exchange of the 4U chassis for an 8U chassis
• x3850 X6, 4 sockets → x3950 X6, 4 sockets, including the exchange of the 4U chassis for an 8U chassis
Follow these steps to add additional CPU books to a server:
1. Disable the GPFS auto-mount for your GPFS filesystems. If you only have the standard GPFS filesystem, the following command is enough. If you have more GPFS filesystems, change the configuration for them accordingly.

mmchfs sapmntdata -A no

2. Power off the machine.
3. Place the new CPU books in the server. Please make sure that the memory DIMMs are placed correctly in the CPU books (see 12.4: Adding memory on page 160).
4. Adapt the PCI-e card placement according to the tables in section 4.4: Card Placement on page 15.
5. Power on the machine.
6. On SLES for SAP: save the file /etc/udev/rules.d/71-ibm-saphana-persistent-net.rules to another location. On RHEL: save the file /etc/udev/rules.d/99-ibm-saphana-persistent-net.rules to another location.
7. Execute

saphana-udev-config.sh -sw

8. Reboot the machine.
9. Review the network settings.
10. Enable the GPFS auto-mount option for your GPFS filesystems again.

mmchfs sapmntdata -A yes

11. Mount the GPFS filesystem by hand.

mmmount sapmntdata

12. Finally, start the HANA database.


13 Software Updates

Note Starting with appliance version 1.9.96-13 the mount point for the GPFS file system sapmntdata is user configurable during installation. SAP HANA will also be installed into this path. Lenovo currently recommends using /sapmnt, while SAP promotes /hana. The following commands and code snippets use /sapmnt. For any other path please replace /sapmnt with the chosen path.

13.1 Warning

Please be careful with updates of the software stack. Update the software and driver components only with a good reason – either because you are affected by a bug or have a security concern – and only after Lenovo or SAP support advised you to upgrade, or after requesting approval from support via the SAP OSS ticket system on the queue BC-OP-LNX-LENOVO. Be defensive with updates, as updates may affect the proper operation of your SAP HANA appliance, and the System x SAP HANA development team does not test every released patch or update.

13.2 Update Variants

This subsection gives a general overview of how updates should be applied. Two ways to update a cluster environment are presented: disruptive, with a downtime, or rolling, where one node is updated at a time and then re-added to the cluster. Before performing a rolling update (a non-disruptive, one-node-at-a-time update) in a cluster environment, make sure that your cluster is in good health and all server nodes and storage devices are running.

13.2.1 General per node update procedure

This is the generic version for any kind of update which requires a system restart. 1. (on the target node) Check GPFS cluster health. Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are running with the command

# mmgetstate -a

and check that all nodes are active, then verify that all disks are active

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all other disks are up. Warning If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail. 2. (on the target node) Shutdown SAP HANA. Shut down SAP HANA and the sapstartsrv daemon via

# service sapinit stop


Verify that SAP HANA and sapstartsrv are not running anymore:

# ps ax | grep sapstart
# ps ax | grep hdb

No processes should be found; if any processes are found, please retry stopping SAP HANA. 3. (on the target node) Unmount the GPFS file system. Unmount the shared file system locally

# mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. If that happens use

# lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <sid>adm, etc.). Other nodes within the cluster can still mount the shared file system. 4. Shutdown GPFS

# mmshutdown

5. Perform upgrades. Now apply the necessary updates. 6. Restart the system. Restart the server if necessary. GPFS and SAP HANA should start automatically during reboot; then skip step 7. 7. Restart GPFS. If you did not restart the whole server in step 6, start GPFS

# mmstartup

8. Mount the file system if not already mounted. You may mount the file system after starting GPFS

# mmmount sapmntdata

9. Start SAP HANA

# service sapinit start

10. (on any node) Verify GPFS disks Verify all GPFS disks are active again

# mmlsdisk sapmntdata -e

If any disks are down, restart them with the command

# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the command

# mmchdisk sapmntdata resume -a


Afterwards check the disk status again. 11. (on any node) GPFS Restripe. Start a restripe so that all data is replicated properly again

# mmrestripefs sapmntdata -r

Warning Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter! 12. Continue with the next node. 13. Restore accurate usage count. If a file system was ill-replicated, the used block counts reported by mmcheckquota may not be accurate. Therefore it is recommended that you run mmcheckquota to restore the accurate usage count after the file system is no longer ill-replicated.

# mmcheckquota -a

13.2.2 Disruptive Cluster Update

In the disruptive cluster update scenario, one would shut down the whole cluster and apply all updates. This will cause a downtime.

13.2.3 Full Cluster Rolling Update

This update procedure applies when you are performing updates which either need a server restart, like a Linux kernel update, or need a restart of specific server software (e.g. GPFS) on the affected nodes. The idea of a rolling update is to update only one server at a time and, after the server is back online in the cluster, proceed with the next node in the same way. By doing so, you can avoid downtime. For updating the SAP HANA software in a SAP HANA cluster, please refer to the SAP HANA Technical Operations Manual. This can be done independently of other updates.

13.3 RHEL versionlock

RHEL has a mechanism to lock the versions of specified packages. Without this mechanism a 'yum update' would upgrade from RHEL 6.5 to RHEL 6.6 without further notice. SAP HANA is only released for dedicated RHEL versions; therefore it is advisable to restrict updates of the kernel version. You can find examples for RHEL 6.5 and RHEL 6.6 below. If not already done, this mechanism can be activated by installing two packages and creating the file /etc/yum/pluginconf.d/versionlock.list in the following way:

yum -y install yum-versionlock yum-security

For RHEL 6.5 the file /etc/yum/pluginconf.d/versionlock.list should look like this

# Keep packages for RHEL 6.5 (begin)
# https://css.wdf.sap.corp/sap/support/notes/2013638
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-431.*


kernel-firmware-2.6.32-431.*
kernel-headers-2.6.32-431.*
kernel-devel-2.6.32-431.*
redhat-release-*
# Keep packages for RHEL 6.5 (end)

or for RHEL 6.6 like this

# Keep packages for RHEL 6.6 (begin)
# https://css.wdf.sap.corp/sap/support/notes/2136965
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-504*
kernel-firmware-2.6.32-504.*
kernel-headers-2.6.32-504.*
kernel-devel-2.6.32-504.*
redhat-release-*
# Keep packages for RHEL 6.6 (end)

To allow later updates (like kernel updates), you have to delete all lines containing restrictions for that update case from the file versionlock.list. After the update it is necessary to create similar restrictions for the updated packages using the new package versions. Please refer also to the following SAP notes: 2013638 – SAP HANA DB: Recommended OS settings for RHEL 6.5, and 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6.
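Alternatively, the versionlock plug-in itself can manage the entries; a sketch, assuming the yum-versionlock sub-commands available on RHEL 6:

yum versionlock list              # show the active locks
yum versionlock delete 'kernel*'  # drop the kernel locks before the intended update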

13.4 Linux Kernel Update

At the time this document was created, kernel version 3.0.101-0.47.52.1 was mandatory for SLES for SAP 11 SP3. Please consult SAP whether a higher version is recommended by now. Warning If the Linux kernel is updated, it is mandatory to recompile the GPFS portability layer kernel module. Otherwise the system will not work anymore!

13.4.1 SLES Kernel Update Methods

There are multiple methods to update a SLES for SAP installation. Possible update sources include kernel RPMs copied onto the target server, a corporate-internal SLES update server/repository, or Novell's update server via the Internet (requires registration of the installation). Possible methods include command line based tools like rpm -Uvh or CLI/X11 based GUI tools like SUSE's YaST2. Please refer to Novell's official SLES documentation. A good starting point is the chapter "Installing or Removing Software" in the SLES 11 Deployment Guide obtainable from https://www.suse.com/documentation/sles11/. If you decide to update from RPM files, you need to update at least the following files:
• kernel-default-<version>.x86_64.rpm
• kernel-default-base-<version>.x86_64.rpm
• kernel-default-devel-<version>.x86_64.rpm
• kernel-source-<version>.x86_64.rpm
• kernel-syms-<version>.x86_64.rpm


• kernel-trace-devel-<version>.x86_64.rpm
• kernel-xen-devel-<version>.x86_64.rpm
Updating using YaST is recommended over updating from files.

13.4.2 RHEL Kernel Update Methods

There are multiple methods to update a RHEL installation. Possible update sources include kernel RPMs copied onto the target server, a corporate-internal RHEL update server/repository, or Red Hat's update server via the Internet (requires registration of the installation). Please refer to Red Hat's official RHEL documentation. A good starting point is the Red Hat Deployment Guide22 (chapter 27 "Manually Upgrading The Kernel"). If you decide to update from RPM files, you need to update at least the following files:
• kernel-<version>.el6.x86_64.rpm
• kernel-devel-<version>.el6.x86_64.rpm
• kernel-firmware-<version>.el6.noarch.rpm
• kernel-headers-<version>.el6.x86_64.rpm
There are two sources for kernel upgrades on Red Hat Linux: http://www.redhat.com/security/updates/ and http://www.redhat.com/docs/manuals/RHNetwork/. Download the kernel RPMs necessary for your system. Red Hat recommends keeping the old kernel packages as a fallback in case there are problems with the new kernel. Updating using repositories is recommended over updating from files. Please refer to chapter 13.3: RHEL versionlock on page 164 on how to check whether a versionlock mechanism is implemented and how to allow kernel updates.

13.4.3 Kernel Update Procedure

Step  Title
1     Stop SAP HANA
2     Unmount GPFS file systems, stop GPFS
3     Update Kernel Packages
4     Build new GPFS portability layer
5     Restart GPFS & check GPFS status
6     Start SAP HANA

Table 52: Upgrade GPFS Portability Layer Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node cleanly. Log in as root on each node and execute

# service sapinit stop

22https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index. html


Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP software manually. Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help Portal23 or SAP Service Marketplace24. Make sure no process has files open on /sapmnt; you can test that with the command:

# lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS

# mmumount all
# mmshutdown

3. Update Kernel Packages. Now update the kernel by your preferred method. 4. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

5. Restart GPFS & check GPFS status

# mmstartup
# mmmount all
# mmgetstate
# mmlsmount all

6. Start SAP HANA using

# service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP software manually as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.

13.5 Updating GPFS

Note Upgrading GPFS requires a rebuild of the portability layer. The same applies if the Linux kernel was upgraded.

23https://help.sap.com/hana 24https://service.sap.com/hana


13.5.1 Disruptive GPFS Cluster Update

Step  Title
1     Stop SAP HANA
2     Unmount GPFS file systems, stop GPFS
3     Upgrade to new GPFS Version
4     Build new GPFS portability layer
5     Update cluster and file system information
6     Restart GPFS, mount GPFS file systems
7     Check Status of GPFS
8     Start SAP HANA

Table 53: Upgrade GPFS Portability Layer Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node cleanly. Log in as root on each node and execute

# service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP software manually. Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help Portal25 or SAP Service Marketplace26. Make sure no process has files open on /sapmnt; you can test that with the command:

# lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS

# mmumount all -a
# mmshutdown -a

3. Upgrade to the new GPFS version. This step may be skipped if only the portability layer needs to be re-compiled due to a Linux kernel update. (Replace <version> with the GPFS version number of the update.)

# rpm -Uvh gpfs.base-<version>.x86_64.update.rpm
# rpm -Uvh gpfs.docs-<version>.noarch.rpm
# rpm -Uvh gpfs.gpl-<version>.gpl.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-<version>.noarch.rpm

4. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

5. Update cluster and file system information to current GPFS version

25https://help.sap.com/hana 26https://service.sap.com/hana


# mmchconfig release=LATEST
# mmstartup -a
# mmchfs sapmntdata -V full
# mmmount all -a

6. Check Status of GPFS

# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel

7. Start SAP HANA using

# service sapinit start

Older versions of the appliance may not have this script, so please start SAP HANA and other SAP software manually as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.

13.5.1.1 Rolling GPFS Upgrade per node procedure

To minimize downtimes, please distribute the GPFS update package (GPFS-3.X.0.xx-x86_64-Linux.tar.gz) on all nodes and extract the tar-ball before starting. 1. Check GPFS cluster health. Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are running with the command

# mmgetstate -a

and check that all nodes are active, then verify that all disks are active:

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all other disks are up. Warning If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail. 2. Shutdown SAP HANA. Shut down SAP HANA and the sapstartsrv daemon via

# service sapinit stop

Verify that SAP HANA, sapstartsrv and any other process accessing /sapmnt are not running anymore:

# lsof /sapmnt

No processes should be found; if any processes are found, please retry stopping SAP HANA and any other process accessing /sapmnt. 3. Unmount the GPFS file system. Unmount the shared file system locally


# mmumount sapmntdata

and take care that no open process is preventing the file system from unmounting. If that happens use

# lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <sid>adm, etc.), close them and retry. Other nodes within the cluster can still mount the shared file system. 4. Shutdown GPFS

# mmshutdown

GPFS should unload its kernel modules during its shutdown, so check the output of this command. 5. Update GPFS Software. Change to the directory where you extracted the GPFS update package GPFS-3.X.0.xx-x86_64-Linux.tar.gz, where X and xx denote the desired target GPFS version. Execute the following commands

# rpm -Uvh gpfs.base-3.X.0-xx.x86_64.update.rpm
# rpm -Uvh gpfs.docs-3.X.0-xx.noarch.rpm
# rpm -Uvh gpfs.gpl-3.X.0-xx.gpl.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-3.X.0-xx.noarch.rpm

Afterwards the GPFS Linux kernel module must be recompiled:

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages

6. Restart GPFS

# mmstartup

Verify that the node started up correctly

# mmgetstate

During the startup phase the node is shown in the state arbitrating; this changes to active when GPFS has completed startup. 7. Mount file systems. Mount the file system after starting GPFS:

# mmmount sapmntdata

8. (on the target node) Start SAP HANA

# service sapinit start

9. (on any node) Verify GPFS disks Verify all GPFS disks are active again:

# mmlsdisk sapmntdata -e

If any disks are shown as down, restart them with the command


# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the command

# mmchdisk sapmntdata resume -a

Afterwards check the disk status again. 10. (on any node) GPFS Restripe. Start a restripe so that all data is replicated properly again

# mmrestripefs sapmntdata -r

Warning Currently the FPO feature used in the appliance is not compatible with file system rebalancing. Do not use the -b parameter! 11. Continue with the next node. 12. Restore accurate usage count. If a file system was ill-replicated, the used block counts reported by mmcheckquota may not be accurate. Therefore it is recommended that you run mmcheckquota to restore the accurate usage count after the file system is no longer ill-replicated.

# mmcheckquota -a

After all nodes are updated you can update the GPFS cluster configuration and the GPFS "on disk format" (the data structures written to disk) to the newer version. Not all updates require these steps, but it is safe to perform them in any case. This update is non-disruptive and can be performed while the cluster is active. 1. Update the cluster configuration with the newest settings

# mmchconfig release=LATEST

2. Update the file system’s on disk format to activate new functionality

# mmchfs sapmntdata -V full

Note that a successful upgrade of the GPFS on disk format to a newer version will make a downgrade to previous GPFS versions impossible. You can verify the minimum needed GPFS version with the command

# mmlsfs sapmntdata -V

13.6 Upgrading from GPFS 3.5 to 4.1

This section applies to single node and cluster installations. For single node installations only a disruptive upgrade can be done. Cluster installations can be upgraded either all at once (disruptive) or node-by-node (rolling). DR installations can also be upgraded either all at once (disruptive) or node-by-node (rolling). Additionally, it is possible to upgrade the DR site first and the primary site at a later point. If the DR site hosts a non-productive SAP HANA instance, this approach can be used to verify the new code level in pre-production.


Note GPFS 4.1 is only supported with PTF 8 or higher (that is, 4.1.0-8). Make sure you have the required GPFS packages before continuing. GPFS 4.1 introduced three editions with different content. GPFS 4.1 Standard Edition is required (Express is not sufficient). If you have a gpfs.ext RPM file, then you have Standard Edition. Existing GPFS 3.5 clients are entitled to GPFS 4.1 Standard Edition. For further information, including how to migrate licenses, see the GPFS FAQ27

13.6.1 Disruptive Upgrade from GPFS 3.5 to 4.1

Step  Title
1     Stop SAP HANA
2     Unmount GPFS file systems, stop GPFS
3     Remove GPFS 3.5 packages, install GPFS 4.1 packages
4     Build new GPFS portability layer
5     Update cluster and file system information
6     Restart GPFS, mount GPFS file systems
7     Check Status of GPFS
8     Start SAP HANA

Table 54: GPFS Upgrade Checklist

1. Stop SAP HANA and all other SAP software running in the whole cluster or on the single node cleanly. Log in as root on each node and execute

# service sapinit stop

Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP software manually. Stopping SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help Portal28 or SAP Service Marketplace29. Make sure no process has files open on /sapmnt; you can test that with the command:

# lsof /sapmnt

2. Unmount GPFS file systems, stop GPFS processes

# mmumount all -a
# mmshutdown -a

3. Remove all GPFS 3.5 packages and install new 4.1 packages. Get a list of all installed GPFS 3.5 packages

# rpm -qa | grep gpfs

Remove all GPFS 3.5 packages returned from above command

27http://www-01.ibm.com/support/knowledgecenter/api/content/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/ gpfsclustersfaq.html#migto41 28https://help.sap.com/hana 29https://service.sap.com/hana


# rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

Optionally, also remove a gpfs.gplbin package if you have that installed. Install GPFS 4.1 packages

# rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.docs-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.ext-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.gpl-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.gskit-8.0.50-16.x86_64.rpm
# rpm -ivh gpfs.msg.en_US-4.1.0-0.noarch.rpm

Update to GPFS 4.1 PTF 8. This is just an example; please update to the PTF recommended at this point in time.

# rpm -Uvh gpfs.base-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.docs-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.ext-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.gpl-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-4.1.0-8.noarch.rpm

4. Build new portability layer

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages
# make rpm        (optional)

5. Update the cluster and file system information to the current GPFS version and activate the new cluster configuration repository (CCR) feature.

# mmstartup -a
# mmchconfig release=LATEST
# mmchcluster --ccr-enable
# mmchfs sapmntdata -V full
# mmmount all -a

6. Check Status of GPFS

# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel

7. Start SAP HANA using

# service sapinit start

Older versions of the appliance may not have this script; in that case, please start SAP HANA and other SAP software manually as documented in the SAP HANA administration guidelines at the SAP Help Portal or SAP Service Marketplace.


13.6.2 Rolling upgrade per node from GPFS 3.5 to 4.1

To minimize downtime, distribute the GPFS 4.1 packages to all nodes before starting. 1. Check GPFS cluster health. Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are running and active with the command

# mmgetstate -a

Then verify that all disks are active

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all other disks are up. Warning If the disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail. 2. Shutdown SAP HANA. Shut down SAP HANA and the sapstartsrv daemon via

# service sapinit stop

Verify that SAP HANA, sapstartsrv, and any other processes accessing /sapmnt are no longer running:

# lsof /sapmnt

No processes should be found. If any processes are found, please retry stopping SAP HANA and all other processes accessing /sapmnt. 3. Unmount the file system on the node to be upgraded

# mmumount sapmntdata

and make sure that no open process prevents the file system from unmounting. If that happens, use

# lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <sid>adm, etc.); close them and retry. Other nodes within the cluster still have /sapmnt mounted. 4. Shutdown GPFS processes on the node to be upgraded

# mmshutdown
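GPFS unloads its kernel modules during shutdown. Before replacing the packages you can double-check that the modules are really gone (a minimal sketch; mmfs26, mmfslinux and tracedev are the module names typically built by the portability layer):

# lsmod | egrep 'mmfs26|mmfslinux|tracedev'

No output means the modules are unloaded; otherwise retry mmshutdown before continuing.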

If the modules fail to unload, the mmshutdown output will say so, which is why its output should be checked carefully. 5. Upgrade GPFS to 4.1. Change to the directory where you extracted the GPFS 4.1 packages and get a list of all installed GPFS 3.5 packages

# rpm -qa | grep gpfs


Remove all GPFS 3.5 packages returned by the above command

# rpm -e gpfs.base gpfs.docs gpfs.gpl gpfs.msg.en_US

Optionally, also remove the gpfs.gplbin package if you have it installed. Install the GPFS 4.1 packages:

# rpm -ivh gpfs.base-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.docs-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.ext-4.1.0-0.x86_64.rpm
# rpm -ivh gpfs.gpl-4.1.0-0.noarch.rpm
# rpm -ivh gpfs.gskit-8.0.50-16.x86_64.rpm
# rpm -ivh gpfs.msg.en_US-4.1.0-0.noarch.rpm

Update to GPFS 4.1 PTF 8. This is just an example; please update to the PTF recommended at this point in time.

# rpm -Uvh gpfs.base-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.docs-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.ext-4.1.0-8.x86_64.update.rpm
# rpm -Uvh gpfs.gpl-4.1.0-8.noarch.rpm
# rpm -Uvh gpfs.msg.en_US-4.1.0-8.noarch.rpm

Afterwards, the GPFS portability layer must be recompiled:

# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages
# make rpm        (optional)

6. Restart GPFS

# mmstartup

Verify that the node started up correctly

# mmgetstate

During the startup phase the node is shown in state arbitrating for a short period of time. This changes to active when GPFS has completed startup successfully. 7. Mount the file system

# mmmount sapmntdata

8. Start SAP HANA

# service sapinit start

9. Verify GPFS disks are active again (this command can be executed on any node)

# mmlsdisk sapmntdata -e

If any disks are shown as down, restart them with the command

# mmchdisk sapmntdata start -a

If disks are suspended you can resume them all with the command


# mmchdisk sapmntdata resume -a

Afterwards, check the disk status again. 10. Restore the correct replication level (this command can be executed on any node). Start a restripe so that all data is properly replicated again

# mmrestripefs sapmntdata -r

Warning Do not use the -b parameter! 11. Continue on the next node with step 2 of this procedure. 12. Restore accurate usage counts. If a file system was ill-replicated, the used block counts reported by mmcheckquota may not be accurate. It is therefore recommended to run mmcheckquota to restore the accurate usage counts once the file system is no longer ill-replicated.

# mmcheckquota -a

After all nodes have been updated successfully you can update the GPFS cluster configuration and the GPFS "on disk format" (the data structures written to disk) to the newer version. This update is non-disruptive and can be performed while the cluster is active. 1. Update the cluster configuration to the newest version

# mmchconfig release=LATEST

2. Activate the new method of cluster configuration repository (CCR)

# mmchcluster --ccr-enable

3. Update the file system’s on disk format to activate new functionality

# mmchfs sapmntdata -V full

Notice that a successful upgrade of the GPFS on disk format to a newer version will make a downgrade to previous GPFS versions impossible. You can verify the minimum required GPFS version for a file system with the command

# mmlsfs sapmntdata -V

13.7 Update Mellanox Network Cards

You should have received a binary update package, e.g. mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin. Please note that the version number given here might differ. This package needs to be copied to all nodes you wish to update. It might be necessary to make the file executable:

chmod +x mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin

Then you can start the installation with:

./mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin --enable-affinity

If this step fails, you may have to install the python-devel package from the official SLES or RHEL repositories. This will upgrade the driver and firmware of the Mellanox network cards. Please review the output of the above program for possible errors. After a successful upgrade, a reboot is necessary.
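To distribute the package to all nodes before the maintenance window, a simple loop can be used (a sketch, assuming passwordless SSH/SCP as root and the gpfsnodeNN host names; adapt the file name to the package you actually received):

for node in gpfsnode01 gpfsnode02; do
    scp mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin ${node}:/var/tmp/
    ssh ${node} "chmod +x /var/tmp/mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin"
done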


13.8 SAP HANA

Warning Make sure that the packages listed in Appendix F.5: FAQ #5: Missing RPMs on page 217 are installed on your appliance. An upgrade may fail without them. Please refer to the official SAP HANA documentation for further steps.

13.9 Upgrade VMware ESXi 5.5 to 5.5U2

For a detailed description of upgrades from VMware vSphere ESXi 5.5 to 5.5U2, please consult the VMware vsphere-esxi-vcenter-server-552-upgrade-guide. If your ESXi host is connected to a vCenter Server, you can perform an online update of the ESXi host; the procedure is described in the vsphere-esxi-vcenter-server-552-upgrade-guide. In this section we describe the update with a reboot of the ESXi host. You need to be able to log into the IMM and start a remote console. Mount the update ISO of ESXi 5.5U2 at the remote console.
1. Shut down all running VMs
2. Reboot the ESXi host
3. Boot from the ESXi 5.5U2 ISO
4. Choose the USB storage device for your update
5. Choose upgrade
6. Confirm the upgrade
7. Reboot
8. Boot from the USB stick
To do after the reboot: all of the shown commands are for the CLI. Please log in via SSH as root to the ESXi host to be able to perform the commands.
1. Check the license
2. Check the firewall settings
3. Check the disks
4. Check the installed VMs
5. Check the vSwitches
6. Check the RAID controller


14 Operating System Upgrade

This section describes the steps needed to perform an upgrade of RHEL 6.5 to RHEL 6.6.

14.1 Upgrade RHEL 6.5 to 6.6

For the upgrade a maintenance downtime is needed with at least one reboot of the servers. If you have installed software that was not part of the initial installation from Lenovo, please make sure that this software is compatible with RHEL 6.6. Note Testing in a non-productive environment before upgrading productive systems is highly recommended. As always, backing up the system before performing changes is also highly recommended.

14.2 Rolling Upgrade

In a cluster environment a rolling upgrade (one node at a time) is possible as long as you are running an HA environment with IBM GPFS 3.5 and at least one standby node. See section 13.5: Updating GPFS on page 167 for information on the IBM GPFS upgrade. In any case you can perform a non-rolling upgrade, taking all nodes down for maintenance.

14.3 Upgrade Overview

The following tested and recommended upgrade steps require one reboot. The tasks are mostly the same for cluster and single node systems; where there is an operational difference between the two types, it will be noted. This list shows the upgrade steps.
1. Stop IBM GPFS & HANA.
2. Upgrade IBM GPFS if necessary.
3. Update Mellanox drivers.
4. Upgrade from RHEL 6.5 to RHEL 6.6.
5. Kernel upgrade if necessary.
6. Install compatibility pack.
7. Recompile kernel module for IBM GPFS.
8. Adapt configuration.
9. Upgrade complete: Start IBM GPFS & HANA.
When doing a rolling upgrade or the upgrade of a single node, perform the steps described in this section only on the server currently being updated. When updating all nodes in a cluster at the same time, you can perform the steps on all nodes in parallel: step 1 on all nodes, then step 2 on all nodes, then step 3 on all nodes, and so on.


14.4 Prerequisites

You are running a Lenovo Systems Solution for SAP HANA appliance system and want to upgrade the RHEL 6.5 operating system to RHEL 6.6. You should run at least IBM GPFS version 4.1.0-8. If your system is running an IBM GPFS version below that, you should upgrade IBM GPFS. You can find out your IBM GPFS version with the command

# rpm -q gpfs.base

For RHEL 6.6, version 2.4-1.0.0 of the Mellanox drivers is needed. You can check the version using:

# ethtool -i eth0

For the upgrade the following DVDs or images are needed:
• RHEL 6.6 DVD
• nss-softokn packages
  – nss-softokn-freebl-3.14.3-19.el6.x86_64
  – nss-softokn-freebl-3.14.3-19.el6.i686
  e.g. as part of the RHEL 6.6 compatibility pack
Other ways of providing the images to the server (e.g. locally, FTP, SFTP, etc.) are possible but not explained as part of this guide. Other upgrade mechanisms, e.g. using a satellite server, are also out of scope of this guide.
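Both version prerequisites can be verified with one short check before scheduling the downtime (a minimal sketch; eth0 is an assumption, query one of the Mellanox interfaces on your system):

echo "GPFS level (required: 4.1.0-8 or later):"
rpm -q gpfs.base
echo "Mellanox driver (required: 2.4-1.0.0 or later):"
ethtool -i eth0 | grep '^version'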

14.5 Shutting down services

1. Shutdown HANA. Shut down HANA and all other SAP software running in the whole cluster or on the single node cleanly. Log in as root on each node and execute

# service sapinit stop

Make sure no process has files open on /sapmnt; you can test this with the command:

# lsof /sapmnt

2. Unmount the IBM GPFS file system. Unmount the IBM GPFS file system /sapmnt by issuing

# mmumount all

3. Shutdown IBM GPFS

# mmshutdown -a

to shut down the IBM GPFS software on all cluster nodes.

14.6 Upgrade of IBM GPFS

You should run at least IBM GPFS version 4.1.0-8. If your system is running an IBM GPFS version below that, you should upgrade IBM GPFS first; see 13.5: Updating GPFS on page 167.


14.7 Update Mellanox Drivers

For RHEL 6.6 at least version 2.4-1.0.0 of the Mellanox drivers is needed. If you have a version below that, you should upgrade the Mellanox drivers first; see 13.7: Update Mellanox Network Cards on page 176.

14.8 Upgrading Red Hat

1. Allow updates from RHEL 6.5 to RHEL 6.6. To allow these updates, you have to delete all lines containing restrictions in the file versionlock.list, if this file exists:

# vi /etc/yum/pluginconf.d/versionlock.list

# Keep packages for RHEL 6.5 (begin)
# https://css.wdf.sap.corp/sap/support/notes/2013638
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-431.*
kernel-firmware-2.6.32-431.*
kernel-headers-2.6.32-431.*
kernel-devel-2.6.32-431.*
redhat-release-*
# Keep packages for RHEL 6.5 (end)

changed to

# Keep packages for RHEL 6.5 (begin)
# Keep packages for RHEL 6.5 (end)
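The same edit can be done non-interactively, e.g. when preparing several nodes (a sketch; it keeps only the comment lines, so take a copy of the file first):

# cp /etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/versionlock.list.orig
# sed -i '/^#/!d' /etc/yum/pluginconf.d/versionlock.list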

2. Create a repository from your RHEL 6.6 DVD. Check where the RHEL 6.6 DVD is mounted:

# ls /media/
RHEL-6.6 Server.x86_64

This information is needed for the baseurl part below. Now create a repository file rhel-dvd66.repo in /etc/yum.repos.d

# vi /etc/yum.repos.d/rhel-dvd66.repo

with the following content:

[dvd66]
name=Red Hat Enterprise Linux Installation DVD
baseurl=file:///media/RHEL-6.6\ Server.x86_64/
gpgcheck=0
enabled=0
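Before starting the upgrade you can verify that yum resolves the new repository:

# yum repolist --enablerepo=dvd66

The dvd66 repository should be listed with a non-zero package count.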

3. Upgrade to RHEL 6.6

# yum update --enablerepo=dvd66

Check if the upgrade was successful:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)


4. Prevent further upgrades from RHEL 6.6 to higher versions. If the file /etc/yum/pluginconf.d/versionlock.list existed in step one, you only have to make the following changes. If not, please also install the package yum-versionlock.

yum -y install yum-versionlock yum-security --enablerepo=dvd66

vi /etc/yum/pluginconf.d/versionlock.list

# Keep packages for RHEL 6.6 (begin)
# https://css.wdf.sap.corp/sap/support/notes/2136965
libssh2-1.4.2-1.el6.x86_64
kernel-2.6.32-504*
kernel-firmware-2.6.32-504.*
kernel-headers-2.6.32-504.*
kernel-devel-2.6.32-504.*
redhat-release-*
# Keep packages for RHEL 6.6 (end)

14.9 Mandatory Kernel Update

Please consult SAP on whether a higher kernel version is now recommended. Please also check chapter 13.4.2: RHEL Kernel Update Methods on page 166.

14.10 Update of nss-softokn packages

An update of the nss-softokn packages is mandatory. More information can be found in:
• SAP Note 2001528: Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11
• Why can I not install or start SAP HANA after a system upgrade?30

yum -y install [path to packages]/nss-softokn-freebl-3.14.3-19*.rpm
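Afterwards, verify that both architectures of the package are installed (the expected version follows from the package names above):

# rpm -q nss-softokn-freebl --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n'

Both the x86_64 and the i686 package should be listed.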

14.11 Recompile Linux Kernel Modules

IBM GPFS needs self-compiled (so-called "out-of-tree") Linux kernel modules to operate properly. To compile the IBM GPFS kernel modules, execute the following commands

# cd /usr/lpp/mmfs/src
# make Autoconfig
# make World
# make InstallImages

14.12 Adapting Configuration

Please review the performance settings in D: Performance Settings on page 211 because they might have changed.

30 https://access.redhat.com/solutions/1236813


14.13 Start IBM GPFS and HANA

Start IBM GPFS and HANA by either rebooting the machine (recommended) or starting the daemons manually: 1. Restart GPFS

# mmstartup
# mmmount all

Verify the status of IBM GPFS and that the file system is mounted:

# mmgetstate
# mmlsmount all

2. Start HANA

# service sapinit start


15 System Check and Support

This chapter describes different steps to check the appliance's health status. The script described here should be updated and executed at regular intervals by a system administrator. The other sections present additional information and give deeper insight into the system. Note SAP Note 1661146 (Lenovo/IBM Check Tool for SAP HANA appliances) provides details for downloading and using the following scripts to catalog the hardware and software configurations and to create a set of information to assist service and support of the machine by SAP and Lenovo. We highly recommend that an SAP HANA system administrator regularly downloads and updates these scripts to obtain the latest support information for the servers.

15.1 System Login

The latest version of the Lenovo Solution installation also adds a message of the day that shows the current status of the GPFS file systems and memory usage. This is shown once per login for every user. The message is created by a cron job that runs once an hour, so the information is not real time and the system status may have changed in the meantime.

[ASCII art banner: SAP HANA]

Lenovo Systems Solution for SAP HANA appliance

See SAP Note 1650046 for maintenance and administration information:
https://service.sap.com/sap/support/notes/1650046

_Regularly_ check the system health!

! INFO: Last hourly update on Mon Jun 22 14:45:02 CEST 2015.
! NOTICE: Memory usage is 3%.
! NOTICE: All quota usages below 90%.
! NOTICE: All GPFS NSDs up and ready.
Listing 1: SSH login screen

15.2 Basic System Check

Included with the installation is a script that informs you and the customer whether all the hardware requirements and basic operating system requirements have been met. Using the option -h, you can see the various ways to call the saphana-support-lenovo.sh script. Note It is highly recommended to work with the latest version of the system check script. You can find it in SAP Note 1661146 (Lenovo/IBM Check Tool for SAP HANA appliances).

# saphana-support-lenovo.sh -h
Usage: saphana-support-lenovo [OPTIONS]


Lenovo Systems solution for SAP HANA appliance System Checking Tool
to check hardware system configuration for Lenovo and SAP Support teams.

Options:
  -c        Check system (no log file, default).
  -s        Print out the support information for SAP support.
            (-s replaces the --support option.)
  -h        Print this information

Check extensions (only valid in conjunction with -c)
  -v        Verbose. Do not hide messages during check.
            Recommended after installation.
  -e        Do exhaustive testing with longer running tests.
            May impact HANA performance during check.
            Implies -v.

If using the Advanced Settings Utility (ASU) from a Virtual Machine
  -i host   The host name of the Integrated Management Module (IMM)

Report bugs to .
Listing 2: Support script usage

An output similar to the following should be reported when you use the option -c (check, the default option). If for any reason you receive warnings or errors that you do not understand, please first run the script again with the option -v (verbose), and then open, together with the customer, an SAP OSS customer message with the output from the -s (support) option (see Section 15.3: System Support on page 186) attached.

# saphana-support-lenovo.sh -c
================================================================
# LENOVO SUPPORT TOOL Version 1.9.96-13.2406.2b5da57 -- 2015-06-15
# (C) Copyright IBM Corporation 2011-2014
# (C) Copyright Lenovo 2015
# Analysis taken on: 20150622-1522
================================================================

----------------------------------------------------------------
Lenovo Systems solution for SAP HANA appliance Hardware Analysis
----------------------------------------------------------------

Machine analysis for IBM x3850 X6 -[6241FT1]- [......]
Lenovo Systems solution for SAP HANA - Model "AC34S1024" OK
----------------------------------------------------------------

Appliance Solution analysis:
----------------------------------------------------------------
Information from /etc/lenovo/appliance-version:
Lenovo System x3850 X6: Workload Optimized System for SAP HANA Model AC34S1024
Installed appliance version: 1.9.96-13.2406.2b5da57
Installed on: Mon Jun 22 15:17:04 CEST 2015

Operating System SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3


Installation configuration:
----------------------------------------------------------------
Parameter clustered is standby
Parameter exthostname is ......
Parameter cluster_ha_nodes is 1
Parameter cluster_nr_nodes is 2
Parameter hanainstnr is 12
Parameter hanasid is FLO
Parameter hanauid is 1100
Parameter hanagid is 111
Parameter shared_fs_mountpoint is /sapmnt
Parameter gpfs_node1 is gpfsnode01 192.168.212.101
Parameter gpfs_node2 is gpfsnode02 192.168.212.102
Parameter hana_node1 is hananode01 192.168.213.201
Parameter hana_node2 is hananode02 192.168.213.202
Parameter step is 11
----------------------------------------------------------------

Hardware analysis:
----------------------------------------------------------------
CPU Type: Pentium 4 Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz OK
# of CPUs: 4, threads: 144 OK

Memory: 1024 GB / Free Memory: 978 GB OK

ServeRAID: 2 adapters OK

IBM General Parallel File System (GPFS):
----------------------------------------------------------------
GPFS with replication [4.1.0-7] Cluster HANAcluster.gpfsnode01 is active
GPFS device /dev/sapmntdata mounted on /sapmnt of size 24566GB

SAP Host Agent Information
================================================================
/usr/sap/hostctrl/exe/saphostctrl: 720, patch 715, changelist 1546327, linuxx86_64, opt (Dec 20 2014, 01:27:21)

SAP Host Agent known SAP instances
----------------------------------------------------------------
Inst Info : FLO - 12 - x615 - 742, patch 29, changelist 1540948

SAP HANA Instances
================================================================

SAP HANA Instance FLO/12
----------------------------------------------------------------
SAP HANA 1.00.096.00 Build 1432206182-1530 Revision 096 is installed OK

General Health checks:


----------------------------------------------------------------
NOTE: The following checks are for known problems of the system.
See the FAQ section of the Lenovo - SAP HANA Operations Guide
SAP Note 1661146 found at https://service.sap.com/notes

Only issues will be shown. If there is no output, no check failed.
To show succeeded checks, add the parameter -v. Recommended on first run.
----------------------------------------------------------------
[ERROR] The global_allocation_limit for FLO is not set. (See FAQ #1.)
[ERROR] Upgrade to kernel version 3.0.101-0.35.1 or higher. (SAP Note #1557506)
[ERROR] GLIBC must be updated to version 2.11.3-17.56.2 or higher (SAP Note #1954788 / 2015-02-11, found 2.11.3-17.54.1)
----------------------------------------------------------------
END OF LENOVO DATA ANALYSIS
----------------------------------------------------------------
Removing support script dump files older than 7 days.
Listing 3: Support script output

15.3 System Support

In case of a problem with the Lenovo Systems Solution for SAP HANA Platform Edition, you should always direct the customer to open an OSS message, whether or not it is an obvious problem with the hardware. Lenovo, IBM, and SAP have an agreement that all problems with the Lenovo Solution come first through the SAP support process, where Lenovo L3 Support members will help the customer determine the root cause of the problem. If it is determined that there is a problem with the Lenovo Solution, the Lenovo L3 support person will instruct and guide the customer in opening the correct IBM PMR and help ensure that the appropriate attention is given to the problem. To make this process easier for all involved, Lenovo delivers a special program that can gather much of the data necessary for an initial support call. Using this script the customer can help streamline the support process in order to obtain the fastest and most competent support available. This script is found in the directory /opt/lenovo/saphana/bin and is called saphana-support-lenovo.sh. In order to collect support data, the customer should run this command from the shell as follows:

# saphana-support-lenovo.sh -s

Note -[1.8.80-12]: These appliances were shipped with the script /opt/ibm/saphana/bin/saphana-support-ibm.sh. When installing the latest support script version you will get the new script saphana-support-lenovo.sh. Do not remove the script saphana-support-ibm.sh. This script, along with the Linux SAP System Information Tool, can be found in SAP OSS Notes 1661146 and 618104 respectively. When the SAP System Information Tool is placed in /opt/lenovo/saphana/bin, it will be automatically called from this script and its output will also be collected.


15.4 Additional Tools for System Checks

15.4.1 Lenovo Advanced Settings Utility

Note X6 based servers and later technology come preinstalled with this utility. In some cases it might be useful to check the UEFI settings of the HANA servers. For this purpose, the saphana-support-lenovo.sh script uses the Lenovo Advanced Settings Utility (ASU), if it is installed, and prints out warnings if there is a misconfiguration. This check can be enabled via the -e parameter. Download the latest Linux 64-bit RPM from https://www-947.ibm.com/support/entry/myportal/docdisplay?lndocid=LNVO-ASU and install the RPM. Before upgrading the ASU tool, remove the old version. Find the installed version via rpm -qa | grep asu.

15.4.2 ServeRAID StorCLI Utility for Storage Management

Note X6 based servers come preinstalled with this utility. The saphana-support-lenovo.sh script also analyzes the status of the ServeRAID controllers and the controller-internal batteries to check whether the controllers are in a working and performing state. To activate this feature, the StorCLI (Command Line) Utility for Storage Management software must be installed. Go to https://www-947.ibm.com/support/entry/myportal/docdisplay?lndocid=migr-5092950, download the file locally and install the RPMs. Before upgrading the StorCLI tool, remove the old version. Find the installed version via rpm -qa | grep storcli. Warning [1.6.60-7]+ With the change to a RAID5 based storage configuration, installing the MegaCLI Utility is even more important, as an HDD/SSD failure is not directly visible with standard GPFS commands until a whole RAID array has failed.

15.4.3 SSD Wear Gauge CLI utility

Note X6 based servers come preinstalled with this utility. For models of the Lenovo Solution that come with SSDs it might be useful to check the state of the SSDs. This includes all x3850 X6 and x3950 X6 servers, and eX5 SSD, XS, and S models. Go to http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5090923 and download the latest binary of the SSD Wear Gauge CLI utility (lnvgy_utl_ssd_-*_linux_32-64.bin). Copy it to the machine to be checked. When upgrading the tool, remove existing binaries from /opt/ibm/ssd_cli/ and/or /opt/lenovo/ssd_cli/. Copy the bin file into /opt/lenovo/ssd_cli/:


# mkdir -p /opt/lenovo/ssd_cli/
# cp lnvgy_utl_ssd_-*_linux_32-64.bin /opt/lenovo/ssd_cli/
# chmod u+x /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin

Execute the binary:

# /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin -u

Sample output:

1 PN:...... -...... SN:...... FW:......
Percentage of cell erase cycles remaining: 100%
Percentage of remaining spare cells: 100%
Life Remaining Gauge: 100%

15.5 Getting Support (IBM PMR, SAP OSS)

In case of a failure, follow these instructions:
1. Check for hardware failure: The server's IMM will report hardware incidents. You may also check the IMM's Virtual Light Path or the LEDs on the physical server.
   • If only a hardware replacement is necessary, take the according steps with IBM.
2. Check the software status: Execute saphana-support-ibm.sh -cv with the latest version of the support script (see section 15: System Check and Support on page 183). The script will check for common root causes of failures. Consult the Lenovo SAP HANA Appliance Operations Guide31.
   • Try to apply the solutions suggested by the support script and the Operations Guide.
3. If you could not determine the root cause of the failure, or there is no solution provided by the support script or the Operations Guide, open an SAP OSS ticket. See the Quick Start Guide32, section Getting help and technical assistance, for more information.

31 SAP Note 1650046 (SAP Service Marketplace ID required)
32 http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5087035


16 Backup and Restore of the Primary Partition

This section provides the instructions necessary to create a simple system copy of the base operating system found on the first hard drive, or primary partition. This image can then be used for a basic backup/restore solution for the primary partition. Once created, this image should also be transferred to offline storage to ensure that data does not get lost due to irreparable hard drive failures. The intent of this section is that the user can have a simple backup and restore solution using the tools available within Linux to protect their system. For enterprise backup and restore solutions, we recommend using an enterprise backup and restore option to ensure backup/restore operations for the operating system as well as the IBM General Parallel File System and SAP HANA file systems. What follows is a description of how to create a backup of the operating system. We also describe how to restore these items in case of a planned or unplanned disaster with the original Operating System (OS). This is valid for systems installed with at least version 1.8.80-10 of the System x automated installer. Earlier systems may require extra effort for OS backup partition creation. The following System x server models can be used:
• System x3950 eX5 Workload Optimized System (7143) for SAP HANA Platform Edition,
• System x3690 eX5 Workload Optimized System (7147) for SAP HANA Platform Edition,
• System x3850/x3950 X6 Workload Optimized System (3837) for SAP HANA Platform Edition,
• System x3850/x3950 X6 Workload Optimized System (6241) for SAP HANA Platform Edition.

Warning Do not go into production without verifying a full backup and a full restore of the operating system!

16.1 Description

In order to perform a simple backup and restoration of the OS, excluding the SAP HANA executables, configuration, data and logs, you need to run a few commands in Linux to set up a working copy of the OS. What we explain here is a method of copying the Linux file system that is contained on two partitions of the first hard drive. Using the Linux command rsync, you are able to intelligently copy a file system from one partition to another quickly and with little effort. This tool can also be set up in a nightly cron schedule to run automatically and semi-automate the process of taking a backup image of the OS; a sketch is shown below. As seen in Figure 86: Overview of Backup/Restore Operations on page 190, the general concept is that the user uses rsync to copy the contents of the root (/) and boot (/boot/efi) directories from their original partitions onto two newly created partitions on the same hard drive.
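As an example of such a schedule, the following cron entry runs the backup nightly at 02:30. It assumes the commands from Listing 14 have been saved in a (hypothetical) script /opt/lenovo/scripts/os-backup.sh; the path and time are placeholders to adapt:

# /etc/cron.d/os-backup -- nightly OS backup to the backup partitions
30 2 * * * root /opt/lenovo/scripts/os-backup.sh >> /var/log/os-backup.log 2>&1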


[Figure: during normal operation, rsync copies the root file system from /dev/sda2 (hanaroot) to /dev/sda5 (backroot); to restore, the backup partition is booted and rsync copies /dev/sda5 (backroot) back to /dev/sda2 (hanaroot), after which the system returns to normal operation.]

Figure 86: Overview of Backup/Restore Operations

This is not highly available, due to the possibility of a hard drive failure of the device used for both the primary and backup partitions, yet it does provide a stable and usable backup method. In order to obtain high availability of the backed-up image, we strongly recommend copying the images saved on the local partitions onto another external storage system. With the rsync command, it is possible to take these snapshots over the network, which can improve the availability of the saved image. This document does not cover that; refer to the rsync man page for more details.

16.1.1 Boot Loader

The server can use two different methods to boot. For X6 based systems, the default method is the Unified Extensible Firmware Interface, or UEFI. According to Wikipedia33, the Unified Extensible Firmware Interface is a specification that defines a software interface between an operating system and platform firmware. The second method is the legacy BIOS method, which was the typical way to boot SAP HANA on eX5 based systems. Linux requires a boot loader that understands the specific boot method. Two options are available: the Grand Unified Bootloader (GRUB) and the Linux Loader (LILO). The way a server boots, and subsequently installs the boot loader, determines some of the system partitioning and file system layout of the installed server. Although it is possible to use both methods to boot and install the Lenovo Solution server, this document only covers the steps necessary to create a restore image using the UEFI boot mechanism with either the GRUB or LILO boot loader. If you are using the Legacy Boot option, you will need to become familiar with how each distribution handles the boot procedure with the legacy BIOS boot option, as this is not part of this documentation. The EFI Linux Loader (ELILO) is the interface the Lenovo System x UEFI uses to talk to the LILO boot loader. The Linux installation will place the boot loader under the directory /boot/efi. The configuration file for ELILO can be found in /etc/elilo.conf. Using GRUB, the Linux installation will place the boot loader under the directory /boot/efi. The configuration file for GRUB can be found in /boot/grub/menu.lst or /boot/grub/grub.conf, depending on the version of GRUB.

33 http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface


16.1.2 Drive Partitions

Starting with version 1.8.80-10 of the Lenovo Solution installation media, the installer creates five (5) partitions on the first drive (sda). Each partition has a specific label and purpose for the system backup and restore. The labels are: hanaboot, hanaroot, hanaswap, backboot and backroot. The correlation of these labels to the appropriate devices can be found by listing the symbolic links in the directory /dev/disk/by-label. An example partition layout is shown below. The first device is partitioned into several physical and logical partitions, each named with a label, a simple identifier, and a Universally Unique Identifier (UUID). Only the UUID is guaranteed to remain connected to the partition it was created with.

Partition   /dev/disk/by-label   /dev/disk/by-id                       /dev/disk/by-uuid
/dev/sda1   hanaboot             scsi-{33-hexadecimal-number}-part1    hexadecimal number
/dev/sda2   hanaroot             scsi-{33-hexadecimal-number}-part2    hexadecimal number
/dev/sda3   swap                 scsi-{33-hexadecimal-number}-part3    hexadecimal number
/dev/sda4   backboot             scsi-{33-hexadecimal-number}-part4    hexadecimal number
/dev/sda5   backroot             scsi-{33-hexadecimal-number}-part5    hexadecimal number
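On a live system you can display this correlation directly; each label is a symbolic link pointing to its partition (e.g. hanaroot -> ../../sda2):

# ls -l /dev/disk/by-label/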

Attention Pay special attention to systems installed with versions earlier than 1.8.80-10. These systems may have been installed with extra partitions that are used for other auxiliary file systems unrelated to SAP HANA. If this is the case, you should first create enough free space to create the new backup partitions, and also determine a way to back up and save the data in these auxiliary partitions. The backup and recovery of these drives is not part of this document, but similar rules can be applied.

16.2 Prerequisites

The Lenovo Solution server should have been installed using the included automatic installer program. If not, some of the names of the partitions might be different and these directions may not work correctly.

16.2.0.1 SUSE Linux Enterprise Server Partition Labels In a system installed with the SUSE Linux Enterprise Server OS, not all partitions are labeled. This seems to be an issue with how SLES handles the creation of labels for VFAT file system partitions. By default, SLES uses the values found under the /dev/disk/by-id directory when describing specific partitions. This document will continue to use the /dev/disk/by-label values; it is expected that these are translated to /dev/disk/by-id values when implementing this backup solution on SLES.

16.2.0.2 Create entries in /etc/fstab for new mounts Before you start with the OS portion of this procedure, you should ensure that the backroot and backboot devices are mounted to the file system as /var/backup/root and /var/backup/boot/efi respectively. These mount points should already exist in the file /etc/fstab, similar to the example (for SLES) below:

## Sample SLES entries for HANA System Backup
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4 /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 /var/backup/root ext3 acl,user_xattr 1 1
Listing 4: Example SUSE fstab entries


Note The hexadecimal portion of the value /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-partx will be different for every individual drive and installation. We recommend reading the contents of /etc/fstab first and copying only the values of the stated partitions for all new backup partitions. Take particular care to use the correct newly created partition in each entry!

## Sample RHEL entries for HANA System Backup
UUID=c605201a-04bc-47a8-bbc4-b6808ee98fe1 /var/backup/root ext4 defaults 1 2
UUID=FF50-7B37 /var/backup/boot/efi vfat umask=0077,shortname=winnt 0 0
Listing 5: Example Red Hat fstab entries

Note The hexadecimal portion of the value /dev/disk/by-uuid/ will be different for every individual drive and installation. We recommend reading the contents of /etc/fstab first and copying only the values of the stated partitions for all new backup partitions. Take particular care to use the correct newly created partition in each entry!

16.2.1 Correcting the backup fstab

After each run of the rsync command, the root file system has been copied exactly from / into /var/backup/. In order to boot from the backup partition backroot, we want to switch the partition labels (or ids) from the hana* to the back* labelled partitions. The hana* partitions should then be mounted as the file system /var/backup in order to restore from the backed up image in the case of a recovery. We recommend slightly modifying the message of the day (motd) so that you can visually see that you are using the backup image. Since this file is also copied on top of any previous images, it is best to use a symbolic link to keep both the backup and original motd files.

touch /etc/motd.{bak,orig}
echo "## !!!!! T H E B A C K U P M E S S A G E !!!!! ##" > /etc/motd.bak
# append the current motd to both copies (one redirection per file)
cat /etc/motd >> /etc/motd.bak
cat /etc/motd >> /etc/motd.orig
rm /etc/motd
ln -s /etc/motd.orig /etc/motd
Listing 6: Creating a copy of the motd file

After every rsync run, the fstab needs to be adapted as shown here. We recommend creating a copy of the original and the backup version so that you can easily switch between the two after a call to rsync. Copy the original file /etc/fstab to /etc/fstab.orig and create a new copy called /etc/fstab.bak.

touch /etc/fstab.{bak,orig}
echo "## !!!!! T H E B A C K U P F S T A B F I L E !!!!! ##" > /etc/fstab.bak
# append the current fstab to both copies (one redirection per file)
cat /etc/fstab >> /etc/fstab.bak
cat /etc/fstab >> /etc/fstab.orig
rm /etc/fstab
ln -s /etc/fstab.orig /etc/fstab
Listing 7: Example SLES primary fstab file

Thereafter, you can change the SUSE Linux Enterprise Server entries in /etc/fstab.bak to:

## Adding entries for HANA System Backup
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part2 /var/backup ext3 acl,user_xattr 1 1
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part1 /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part3 swap swap defaults 0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4 /boot/efi vfat umask=0002,utf8=true 0 0


/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 / ext3 acl,user_xattr 1 1
Listing 8: Example SLES backup fstab file

Notice that the /dev/disk/by-id entries will be different on your system. The mount points need to be changed as shown above. On Red Hat Enterprise Linux the entries in /etc/fstab.bak should be changed to:

## Adding entries for HANA System Backup
LABEL=backroot / ext3 acl,user_xattr 1 1
LABEL=backboot /boot/efi vfat umask=0002,utf8=true 0 0
LABEL=hanaswap swap swap defaults 0 0
LABEL=hanaroot /var/backup ext3 acl,user_xattr 1 1
LABEL=hanaboot /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
Listing 9: Example RHEL backup fstab file

Notice that only the labels have changed. After these files have been created, you will also need to recreate the symbolic link under the /var/backup directory so that the fstab there points to its backup representation, as follows:

rm /var/backup/etc/fstab
cd /var/backup/etc
ln -s fstab.bak fstab
Listing 10: Changing files for backup partition

16.2.2 Add boot loader entry for backup partition

ELILO installed systems

After the fstab file has been modified, create a backup entry in the ELILO boot menu (/etc/elilo.conf) by copying the whole subsection identified by the label=linux statement. On RHEL, replace the label and root values with the value backup and the backroot partition ID. On SLES, the according scsi-<id>-part<partition> value has to be changed to fit the <id> and partition on the given system. It is important to modify the string

###Don’t change this comment - YaST2 identifier: Original name: name###

on these installs. Otherwise, YaST will not see this option in the boot list for ELILO and may not present it to you during boot.

## Adding Restore entry to UEFI Boot menu
image = vmlinuz-3.0.76-0.11-default
###Don't change this comment - YaST2 identifier: Original name: backup###
label = backup
append = "resume=/dev/sda3 splash=silent transparent_hugepage=never intel_idle.max_cstate=0 processor.max_cstate=0 showopts"
description = "Backup of SAP HANA Platform Edition Image"
initrd = initrd-3.0.76-0.11-default
root = /dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5
Listing 11: Example UEFI Configuration for Primary Partition


If you update the kernel, you will also need to update the image = and initrd = lines in this file for the backup entry. After changing elilo.conf, run

elilo --verbose

to update the boot loader. The intention is that you will be able to start up the backup partition in order to copy the saved state from the backup partition back over the primary partition.

Grub installed systems

In systems installed using the GRUB boot loader (by default, all Red Hat based installs and SUSE installs on System eX5 hardware), edit the contents of /boot/grub/grub.cfg (RHEL) or /boot/grub/menu.lst (SLES), and copy the section for the primary partition, editing it to become the new backup entry. This is a copy of the default boot entry with the title, root and kernel lines changed to match the partition used for the backup partition. On RHEL, replace the label and root values with the value backup and the backroot partition ID. On SLES, the according scsi-<id>-part<partition> value has to be changed to fit the given system.

title Backup of SAP HANA Platform Edition Image
root (hd0,)
kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=LABEL=backroot
    KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 crashkernel=auto
    processor.max_cstate=0 intel_idle.max_cstate=0
    transparent_hugepage=never SYSFONT=latarcyrheb-sun16
    rd_NO_LUKS rd_NO_LVM rd_NO_DM rd_NO_MD rhgb quiet
initrd /boot/initramfs-2.6.32-431.el6.x86_64.img
Listing 12: Example GRUB Configuration for Primary Partition

On SLES use the command:

yast2 bootloader

to update the boot loader; on RHEL:

grub-install /dev/sda

Note The partition number for a GRUB installed partition is based on the device syntax (device[,partmap-name1part-num1[,partmap-name2part-num2[,...]]]). The syntax (hd0) represents the entire disk of the first device, for example sda, while the syntax (hd0,1) represents the second partition of that device, for example sda2. Notice that GRUB identifies the first partition on the first device as (hd0,0), or (hd0) for short.

Note In our example, we presume that the hanaroot partition is (hd0,1) and the backroot partition is (hd0,4). Append or change these lines in /var/backup/etc/grub.conf. Here, we exchange the meanings of the hanaroot and backroot partitions: when booting into this kernel, hanaroot is the partition to be restored, and backroot is the default partition to be booted. The title, root and kernel lines are changed to match the partition used for the backroot partition. We should also change the parameter default in the header section to point to the Restore image (usually subsection number 2) rather than the original SAP HANA image.


default=2
title Restore from SAP HANA Platform Edition Backup Image
root (hd0,)
kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=LABEL=backroot
    KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 crashkernel=auto
    processor.max_cstate=0 intel_idle.max_cstate=0
    transparent_hugepage=never SYSFONT=latarcyrheb-sun16
    rd_NO_LUKS rd_NO_LVM rd_NO_DM rd_NO_MD rhgb quiet
initrd /boot/initramfs-2.6.32-431.el6.x86_64.img
Listing 13: Example GRUB Configuration for Backup Partition

16.3 Backup of the Linux operating system

In order to perform an initial backup, run the following commands as root. The initial backup will take a long time, as it copies the entire file system of the hanaroot partition into the backroot partition. Subsequent executions of the rsync command will be shorter, as rsync is intelligent enough to copy only what has changed between calls. As the system administrator (root) run:

start_stamp=$(date +%s)
# Begin backup of root file system
rsync -aAXxv --delete / /var/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/boot/efi*,/run/*,/mnt/*,/media/*,\
/lost+found,/var/backup/*,/sapmnt/*,/var/lib/ntp/proc/*,/etc/fstab}
middle_stamp=$(date +%s)
echo "Root file system completed in $(echo "(${middle_stamp}-${start_stamp})/60" | bc) minutes $(echo "(${middle_stamp}-${start_stamp})%60" | bc) seconds"
# Begin backup of /boot/efi file system
rsync -aAXxv --delete /boot/efi/ /var/backup/boot/efi/
end_stamp=$(date +%s)
echo "Boot file system completed in $(echo "(${end_stamp}-${start_stamp})/60" | bc) minutes $(echo "(${end_stamp}-${start_stamp})%60" | bc) seconds"
Listing 14: Example rsync command

16.4 Restoring the operating system

In case of a planned or unplanned system outage, it is possible to recover the last known good backup of the root and boot file system partitions that were copied onto the backup partitions. In the case of a hard drive failure where the backup partitions have been lost, the copies stored on external storage must be recopied into the backup partitions after the hard drive failure has been resolved by the hardware support team. After that, the restore can take place as described here. Restart the machine and boot the backup OS. While booting, select the created boot option for the backup partition from the list given by the ELILO boot loader menu. By default there is no menu configured, but if you press the TAB key while the text ELILO Booting:... is shown, you will be given the options you can choose from. The newly created option "backup" should be visible. If not, rerun the elilo --verbose command in the original OS and restart. The GRUB boot loader menu is shown by default (see Figure 87: Sample GRUB boot loader screen on page 196). You can use the arrow keys to select the newly created option "backup". This should be done only after checking that the boot loader menu in the backup partition has been properly updated according to the directions in 16.2.1: Correcting the backup fstab on page 192 above.


Figure 87: Sample GRUB boot loader screen

Once the backup partition is booted, run the following command to transfer the backup to the original root partition:

start_stamp=$(date +%s)
# Begin restore of root file system
rsync -aAXxv --delete / /var/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/boot/efi*,/run/*,/mnt/*,/media/*,\
/lost+found,/var/backup/*,/sapmnt/*,/var/lib/ntp/proc/*,/etc/fstab}
middle_stamp=$(date +%s)
echo "Root file system completed in $(echo "(${middle_stamp}-${start_stamp})/60" | bc) minutes $(echo "(${middle_stamp}-${start_stamp})%60" | bc) seconds"
# Begin restore of /boot/efi file system
rsync -aAXxv --delete /boot/efi/ /var/backup/boot/efi/
end_stamp=$(date +%s)
echo "Boot file system completed in $(echo "(${end_stamp}-${start_stamp})/60" | bc) minutes $(echo "(${end_stamp}-${start_stamp})%60" | bc) seconds"
Listing 15: Example rsync command

Then you need to revert the changes made in 16.3: Backup of the Linux operating system on page 195: Swap the mount points of / and /boot/efi with the original root partition in /var/backup/etc/fstab.

## Adding entries for HANA System Backup
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part1 /boot/efi vfat umask=0002,utf8=true 0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part2 / ext3 acl,user_xattr 1 1
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part3 swap swap defaults 0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part4 /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 /var/backup/root ext3 acl,user_xattr 1 1
Listing 16: Example SLES primary fstab file

## Adding entries for HANA System Backup
LABEL=hanaboot /boot/efi vfat umask=0002,utf8=true 0 0
LABEL=hanaroot / ext3 acl,user_xattr 1 1
LABEL=hanaswap swap swap defaults 0 0
LABEL=backboot /var/backup/boot/efi vfat umask=0002,utf8=true 0 0
LABEL=backroot /var/backup/root ext3 acl,user_xattr 1 1
Listing 17: Example RHEL primary fstab file

Warning After using the rsync command, pay careful attention to the file /var/backup/etc/fstab and the boot loader configurations /var/backup/boot/grub/grub.cfg or /var/backup/etc/elilo.conf. Ensure that they have the reverse meaning of that described in the previous section. You should then be able to boot into the primary partition using the boot loader's default menu item.


17 SAP HANA Backup and Recovery

Warning The snapshot restore functionality in SAP HANA Revision 80 is broken. The described procedure was verified successfully with SAP HANA Revision 91. This section provides the instructions necessary to create a simple SAP HANA Platform Edition backup and restore procedure. These images can then be used for a basic backup/restore solution. Initially, they are copied locally and must be transferred to offline storage for any real use. The intent is that the user can have a simple backup and restore solution using the tools available with IBM GPFS and SAP HANA. For advanced backup and restore solutions, we recommend using an enterprise backup solution to ensure backup/restore operations for IBM GPFS and SAP HANA. What follows is a description of how to take snapshots of the IBM GPFS file system and the SAP HANA database. We also describe how to restore SAP HANA in case of a planned or unplanned disaster. This enables the administrator to take backups of the SAP HANA data without interrupting the database service (so-called online backups of the database). The time it takes to actually copy the data afterwards to a secure place does not affect SAP HANA operation. Note Features from SAP HANA Studio for snapshot generation are described as well. Identical results can be achieved using the command-line SQL interface described in the SAP HANA guide books.

17.1 Description

The procedure to back up SAP HANA and IBM GPFS only applies to SAP HANA 1.0 SPS 07 and later. These instructions are also included in the SAP HANA Operations Guide. All screenshots were taken with this release. The GUI may change with newer releases.
• This procedure can restore data:
  – on the very same environment the snapshot was taken from,
  – on an environment that copies the landscape of the original system.
• A change in landscape (m-to-n copy) is not supported.
• Make sure to always check the following locations for latest information:
  – SAP HANA Administration Guide, Chapter: Backup and Recovery: http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf
  – SAP HANA Backup and Recovery Overview: http://www.saphana.com/docs/DOC-1220
  – IBM GPFS snapshot documentation: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.IBMGPFS.v4r1.IBMGPFS200.doc/bl1adv_logcopy.htm
Warning Do not go into production without verifying a full backup and restore procedure!

17.2 Backup of SAP HANA

Open SAP HANA Studio.


In SAP HANA Studio, either right-click on Backup and choose "Manage Storage Snapshots" from the context menu, or click on "Storage Snapshots" on the right. This allows you to generate a snapshot. The following dialog opens:

Click on “Prepare”. You are then asked to give this snapshot a name. This name will be stored in the SAP HANA backup catalog. It does not appear outside of SAP HANA.

After clicking the OK button the snapshot is generated. Any log entries are merged into the data area so that it has a consistent state that can be recovered from.


While the snapshot is active you cannot have further snapshots or backups taken from this SAP HANA instance. Notice the file snapshot_databackup_0_1 in /sapmnt/data/<SID>/mnt00001/hdb00001; this file indicates that the content of this directory is a valid SAP HANA snapshot and can be used to recover from. The next step is to take an IBM GPFS snapshot. Log in to any server of the SAP HANA installation; it does not matter on which server you issue the IBM GPFS snapshot commands.

mmcrsnapshot sapmntdata <snapshotname>
Writing dirty data to disk
Quiescing all file system operations
Writing dirty data to disk again
Resuming operations.
Snapshot <snapshotname> created with id 2.

We recommend including the current date and/or time in the snapshot name. You can do this via

mmcrsnapshot sapmntdata `date +%F--%T`

After this command has finished you have a new folder in /sapmnt/.snapshots. This subfolder contains all the files that you can then copy to a safe place. The IBM GPFS snapshot is taken of the entire GPFS file system.

If the IBM GPFS snapshot has finished successfully, confirm this fact and release the SAP HANA snapshot. In SAP HANA Studio click on "Confirm". This opens the following dialog:


We recommend using the name given to the IBM GPFS snapshot as part of the mmcrsnapshot command. After you acknowledge this window the wizard finishes and you can leave the storage snapshot dialog. If the IBM GPFS snapshot did not finish successfully or was manually aborted, click on the "Abandon" button and act accordingly. Copy the IBM GPFS snapshot data to a safe place on an external storage device, e.g. an NFS export on a storage server. For instance, this can be done with the following tools: simple Linux copy (cp), secure copy (scp) or the rsync command. Integration into IBM Tivoli Storage Manager or other automated file backup tools is also possible. This depends highly on the customer demands and availabilities regarding hardware and backup requirements. See table 55 for the files and directories which need to be copied to an external storage in order to have a full SAP HANA backup.

Path                        Exclude
/sapmnt/.snapshot//shared/  /sapmnt/.snapshot//shared//HDB/backup
/sapmnt/.snapshot//data/

Table 55: Required SAP HANA directories for restore
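As an illustration of this copy step, here is a minimal sketch using rsync. All names are assumptions: the snapshot is called 2015-07-03--12:00:00 (as produced by `date +%F--%T`), an NFS export from the backup server is mounted at /mnt/backup, and <SID> stands for your SAP system ID. Adapt all paths to your installation.

    # Copy the snapshot to an NFS-mounted backup server (paths are examples),
    # skipping the HANA backup directory as per table 55
    rsync -av --exclude='shared/<SID>/HDB/backup' \
        /sapmnt/.snapshots/2015-07-03--12:00:00/ \
        /mnt/backup/hana-snapshot-2015-07-03/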

After data is successfully copied you need to delete the IBM GPFS snapshot:

mmdelsnapshot sapmntdata

Having more than one active snapshot at a time is supported by IBM GPFS; the maximum number of snapshots of sapmntdata is 256 (this applies to IBM GPFS 3.5 and 4.1). You can list all existing IBM GPFS snapshots with mmlssnapshot sapmntdata.


However, keep in mind that all IBM GPFS snapshots remain on the same physical disks as your production SAP HANA data. This is by no means a valid backup location! Moreover, active IBM GPFS snapshots lead to slightly decreased file system performance. It is therefore essential to move and archive such a backup to a remote device and then delete the snapshot.

17.3 Restore of SAP HANA

To prepare for a restore:
• The SAP HANA instance must be stopped.
• Copy the data/ directory from the backup back to /sapmnt/data/ (see the screenshot of the terminal window below).
• If you want to restore the SAP HANA data on a new instance, you also need to restore the profiles.
• Ensure correct file permissions on the snapshot data: the file owner must be the database administrator.
The terminal screenshot below shows a snapshot (plus subfolders and files) that has been copied back to the correct location. Simple tools like cp, scp or rsync can be used to copy back the data. This data can then be used for restoration.

There are two ways to restore the SAP HANA snapshot: either with SAP HANA Studio or with a command line statement.

Restore with SAP HANA Studio

In SAP HANA Studio right-click on the SAP HANA instance you want to recover to and select “Recover”. The recovery wizard appears.


Specify “Snapshot” as the type of backup to recover from. This disables the location box.

If you restore on the same system from which the snapshot was taken, you can skip the license key question. If you are restoring to a different system, you need to provide a license key; if you do not specify a valid key, the restore still completes successfully but the database instance will be locked afterwards. It is possible to specify a valid license key later on.

The final screen summarizes the restore parameters.


In the next step the restore takes place. The restore time depends on the amount of data being recovered and the number of servers involved.

Restore via command line

In order to restore the SAP HANA snapshot, execute the following commands as the sidadm user (nktadm in this example):

su - nktadm
./HDBSettings.sh recoverSys.py --command "RECOVER DATA USING SNAPSHOT CLEAR LOG"

After the restore completes successfully the procedure automatically starts the SAP HANA instance. The file snapshot_databackup_0_1 in /sapmnt/data//mnt00001/hdb00001 is automatically removed upon a successful restore.


18 Troubleshooting

For the Lenovo Systems Solution for SAP HANA Platform Edition, the installation of SLES for SAP as well as the installation and configuration of IBM GPFS and SAP HANA have been greatly simplified by an installation process with an accompanying guided installation. This process automatically installs and configures the base OS components necessary for the SAP HANA appliance software. It is no longer supported to install the OS manually for the Lenovo Solution.

18.1 Adding SAP HANA Worker/Standby Nodes in a Cluster

When configuring a clustered configuration by hand, install SAP HANA worker and standby nodes as described in the Lenovo SAP HANA Appliance Operations Guide³⁴ (Section 4.3 Cluster Operations → Adding a cluster node).

18.2 GPFS mount points missing after Kernel Update

If you updated the Linux kernel, you have to update the portability layers for GPFS before starting SAP HANA; after rebooting into the new kernel, the GPFS mount points will not be available. Follow the directions given above in the section on updating both portability layers.

18.3 Degrading disk I/O throughput

One possible reason for degrading disk I/O on the HDDs or SSDs is a discharged or disconnected battery on the RAID controller. In that case the cache policy is changed from "WriteBack" (the default) to "WriteThrough", meaning that data is written directly to disk instead of to the cache. This has a significant I/O performance impact. To verify, please proceed as follows:
1. The StorCLI tool (see section 15.4.2: ServeRAID StorCLI Utility for Storage Management on page 187) is installed during HANA setup under /opt/MegaRAID/storcli/. If you have been using the MegaCli64 client before, you do not have to learn new commands; the commands are the same.
2. Determine the current cache policy:

# /opt/MegaRAID/storcli/storcli64 -LdPdInfo -aAll | grep "Cache Policy:"

3. Depending on the model there is a varying number of output lines. Sample output:

Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU
Default Cache Policy: WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU

If the output contains "WriteThrough" for the "Current Cache Policy" while the preceding "Default Cache Policy" defines "WriteBack", the cache policy has been switched away from the "WriteBack" default due to some issue. You can then check each battery's status; for example, with the sample output above you would check the status of the first two adapters' batteries (the third one is OK).

³⁴ SAP Note 1650046 (SAP Service Marketplace ID required)


# /opt/MegaRAID/storcli/storcli64 /c0/bbu show all
# /opt/MegaRAID/storcli/storcli64 /c1/bbu show all

If the output contains "Get BBU Capacity Info Failed", the battery is most likely bad or disconnected and needs to be replaced or reconnected to the adapter. If the output indicates a state of charge that is significantly smaller than 100%, then the battery is most likely bad and should be replaced. If any of the above issues occurs, a hardware support call with IBM should be opened.

18.4 SAP HANA will not install after a system board exchange

When an IBM Certified Engineer exchanges a system board, they are only required to reset the Manufacturer Type and Model (MTM) and the serial number of the machine inside the EEPROM settings. The SAP HANA hardware checker (before revision 27) looks at the description string instead of the MTM. To work around this issue, a Lenovo services person can use the Lenovo Advanced Settings Utility (ASU) tool (see section 15.4.1: Lenovo Advanced Settings Utility on page 187) to reset the system product data to the correct data so that the SAP installer works. ASU is installed under /opt/lenovo/toolscenter/asu. The tool can then be used to view or set the firmware settings of the IMM from the command line. For example, to show and subsequently reset the System Product Identifier required by SAP HANA, you can use the following commands:

# asu64 show SYSTEM_PROD_DATA.SysInfoProdIdentifier --host

(--host can be omitted if the command is run on the actual system)

# asu64 set SYSTEM_PROD_DATA.SysInfoProdIdentifier "System x3850 X6"

Then dmidecode should return the correct system name after a system reboot.
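A quick way to verify the identifier after the reboot (an illustrative check, not required by the installer):

    # Should print the product name set above, e.g. "System x3850 X6"
    dmidecode -s system-product-name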

18.5 Known Kernel Updates

18.6 Important SAP Notes (SAP Service Marketplace ID required)

You can find a list of SAP Notes in Appendix G.4: SAP Notes (SAP Service Marketplace ID required) on page 227. This chapter describes some of these SAP Notes in more detail.

18.6.1 SAP Note 1641148 HANA server hang caused by GPFS issue

https://service.sap.com/sap/support/notes/1641148

18.6.1.1 Symptom You are running an SAP HANA scale-out landscape and see different time zone settings for the sidadm user.

18.6.1.2 Reason and Prerequisites Your SAP HANA scale-out landscape shows different time zone settings on at least one server, e.g. the master node shows time zone UTC while all other nodes show time zone CET. This may be caused by an inconsistency in the installation process and should be corrected.


18.6.1.3 Solution To change the time zone settings of the sidadm user, go to the home directory /usr/sap/ and set the time zone in the shell environment file:

.sapenv.csh: setenv TZ
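For illustration, a hedged example of such an entry; the time zone value below is hypothetical and must match the time zone used on all other nodes:

    # in the sidadm user's .sapenv.csh -- the zone below is only an example
    setenv TZ Europe/Berlin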

Make sure this is done on all HANA nodes. Additionally, for a scale-out installation an NTP server should be configured. You may either use your corporate NTP server or ask your hardware partner to set up an NTP server for you, e.g. on the management node of the appliance. If you see different time settings for the sidadm and the root user, check /etc/adjtime; if you see very large values, check your NTP configuration and do a re-sync. Once the time setting is correct, log in as the sidadm user again and restart the database.


Appendices

A GPFS Disk Descriptor Files

GPFS 3.5 introduced a new disk descriptor format called stanzas; the old disk descriptor format is deprecated since GPFS 3.5. The stanza format is also valid for GPFS 4.1 (introduced with release 1.8). Create the file /var/mmfs/config/disk.list.data.gpfsnode01 by concatenating the following parts:
1. Always add

    %nsd:
      device=/dev/sdb
      nsd=data01node01
      servers=gpfsnode01
      usage=dataAndMetadata
      failureGroup=1001
      pool=system

2. When having one RAID array in the SAS expansion unit, add

    %nsd:
      device=/dev/sdc
      nsd=data02node01
      servers=gpfsnode01
      usage=dataAndMetadata
      failureGroup=1001
      pool=system

3. When having two RAID arrays in the SAS expansion unit, also add

    %nsd:
      device=/dev/sdd
      nsd=data03node01
      servers=gpfsnode01
      usage=dataAndMetadata
      failureGroup=1001
      pool=system

4. Always add these lines at the end

    %pool:
      pool=system
      blockSize=1M
      usage=dataAndMetadata
      layoutMap=cluster
      allowWriteAffinity=yes
      writeAffinityDepth=1
      blockGroupFactor=1
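Once the descriptor file is complete, the NSDs can be created from it. A minimal sketch, assuming the stanza file above and that the listed devices are not yet in use by GPFS:

    # Create the NSDs defined in the stanza file, then verify the result
    mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode01
    mmlsnsd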

B Topology Vectors (GPFS 3.5 failure groups)

This is currently valid only for DR-enabled clusters; for standard HA-enabled clusters use the plain single-number failure groups as described in the instructions above. With GPFS 3.5 TL2 (the base version for DR) a new failure group (FG) format called "topology vectors" was introduced, which is used for the DR solution. A more detailed description of topology vectors can be found in the GPFS 3.5 Advanced Administration Guide, chapter "GPFS File Placement Optimizer".


In short, the topology vector is a replacement for the old FGs, storing more information about the infrastructure of the cluster. Topology vectors are used for NSDs, but as the same topology vector is used for all disks of a server node, it is explained here in the context of a server node. In a standard DR cluster setup all nodes are grouped evenly into four FGs (five when using the tiebreaker node) with two FGs on each site. A topology vector consists of three numbers separated by commas. The first of the three numbers is either 1 or 2 (for all SAP HANA nodes) or 3 for the tiebreaker node. The second number is 0 (zero) for all site A nodes and 1 for all site B nodes. The third number enumerates the nodes in each of the failure groups, starting from 1. In a standard eight node DR-cluster (4 nodes per site) we would have these topology vectors:

Site    Failure Group            Topology Vector  Node
Site A  Failure group 1 (1,0,x)  1,0,1            gpfsnode01 / hananode01
Site A  Failure group 1 (1,0,x)  1,0,2            gpfsnode02 / hananode02
Site A  Failure group 2 (2,0,x)  2,0,1            gpfsnode03 / hananode03
Site A  Failure group 2 (2,0,x)  2,0,2            gpfsnode04 / hananode04
Site B  Failure group 3 (1,1,x)  1,1,1            gpfsnode05 / hananode01
Site B  Failure group 3 (1,1,x)  1,1,2            gpfsnode06 / hananode02
Site B  Failure group 4 (2,1,x)  2,1,1            gpfsnode07 / hananode03
Site B  Failure group 4 (2,1,x)  2,1,2            gpfsnode08 / hananode04
Site C  Failure group 5 (3,0,x)  3,0,1            gpfsnode99 (tiebreaker)

Table 56: Topology Vectors in an 8 node DR-cluster
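In an NSD stanza the topology vector simply takes the place of the single-number failure group. A sketch for the first data disk of gpfsnode02 (device and NSD names are assumptions following the naming used in Appendix A):

    %nsd:
      device=/dev/sdb
      nsd=data01node02
      servers=gpfsnode02
      usage=dataAndMetadata
      failureGroup=1,0,2
      pool=system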


C Quotas

C.1 Quota Calculation

Note This section is for information purposes only. Please use the quota calculator described in the next section C.2: Quota Calculation Script on page 209. The quota calculation for this and the following appliance releases is more complex than the quota calculations in previous releases; a utility script is provided to make the calculation easier. In general the quota calculation follows the SAP recommendations for HANA 1.4 and later. For HANA single nodes and HA-enabled clusters, quotas are set for the HANA log files, the HANA data volumes and the shared HANA data. In a DR-enabled cluster a quota should be set only for SAP HANA's log files. The formula for the quota calculation is

quota for logs   = (# active nodes) x 1024 GB
quota for data   = (# active nodes) x (RAM per node in GB) x 3 x (replication factor)
quota for shared = (available space) - (quota for logs) - (quota for data)

The number of active nodes needs explanation. For single nodes this number is of course 1. For clusters it is the count of all cluster nodes which are not dedicated standby nodes; a dedicated standby node is a node which has no HANA instance running with a configured role of master/slave. Two examples:
• In an eight node cluster, only one HANA database is installed. The first six nodes are installed as worker nodes, the last two as standbys. This cluster clearly has two dedicated standby nodes.
• Another eight node cluster has a HANA system ABC installed with the first seven nodes as workers and the last node as a standby node. A second HANA system QA1 is installed with a worker node on the last (eighth) node and a standby node on node seven. This cluster has no dedicated standby node, as the eighth node is not "standby only"; it is actually active for the QA1 system.
For DR the log quota is also calculated based on the number of active nodes; in this case, as only one HANA cluster is allowed on the DR file system, it is based solely on the count of the worker nodes. The replication factor should be 1 for single nodes, 2 for clusters and 3 for DR-enabled clusters. Manual calculation is not recommended; please use the new saphana-quota-calculator.sh.
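As a purely illustrative calculation (the numbers are assumptions, not a sizing statement): an HA-enabled cluster with three active workers, 512GB of RAM per node and replication factor 2 would receive the following quotas, sketched here in shell arithmetic:

    ACTIVE=3 RAM=512 REPL=2
    echo "quota for logs: $((ACTIVE * 1024)) GB"            # 3 x 1024 = 3072 GB
    echo "quota for data: $((ACTIVE * RAM * 3 * REPL)) GB"  # 3 x 512 x 3 x 2 = 9216 GB
    # quota for shared = available space - 3072 GB - 9216 GB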

C.2 Quota Calculation Script

A script is available to ease the quota calculation. The standard installation uses this script to calculate the quotas during installation, and the administrator can also call it to recalculate the quotas after a topology change, e.g. installation of more HANA instances, a change of node roles, or shrinking or growing the cluster. Most values are read from the system or guessed: for a cluster the standard assumption is one dedicated standby node; for a DR solution no reliable guess on the nodes can be made and the manual override must be used. The basic call is

# saphana-quota-calculator.sh


As a result it prints the calculated quotas and the commands to set them. After reviewing these, you can add the -a parameter to the call, which will automatically set the quotas as calculated. If you are running a cluster and the number of dedicated standbys is not one, use the parameter -s <# standbys> to set a specific number of standby hosts; 0 is also a valid value. In the case of a DR-enabled cluster the guess for the active worker nodes will always be wrong; please also use the parameter -w <# workers> to set the number of nodes running HANA as active workers. The number of workers and standbys should equal the number of nodes on a site. Additional parameters are -r to get a more detailed report on the quota calculation and -c to verify the currently set quotas (allows a deviation of 10%, which is too inaccurate for larger clusters with more than 8 nodes). An example session is sketched below.
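For example, on a DR-enabled cluster with four active workers and no dedicated standby per site, a session could look like this (the parameter values are illustrative):

    # Review the detailed report first ...
    saphana-quota-calculator.sh -w 4 -s 0 -r
    # ... then let the script set the calculated quotas
    saphana-quota-calculator.sh -w 4 -s 0 -a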


D Performance Settings

Please review the following configuration settings if the support script indicates it:
1. Change Processor C-State Boot Parameter
This disables the use of some processor C-states, which can reduce power consumption but lower performance. This boot parameter should not have any effect on Lenovo solutions, as restricting the processor C-states is done in other settings; however, SAP requires this parameter to be set at boot.
(a) ELILO installed systems (SLES based systems): change line 12 in /etc/elilo.conf from

1 append = "resume=/dev/sda3 splash=silent transparent_hugepage=never ←- ,→intel_idle.max_cstate=0 showopts "

to

1 append = "resume=/dev/sda3 splash=silent transparent_hugepage=never ←- ,→intel_idle.max_cstate=0 processor.max_cstate=0 "

(b) GRUB installed systems (RHEL based systems): change line 17 in /boot/efi/efi/redhat/grub.conf from e.g.

kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet rhgb quiet

to

kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 processor.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet rhgb quiet

2. SAP HANA HDB Parameters
There is a set of parameters for HANA that adjusts the asynchronous I/O to increase performance. These parameters can be applied by the [sid]adm user only after installing HANA. More information is available in SAP Note 1930979 – Alert: Sync/Async read ratio.

su - [sid]adm
hdbparam --paramset fileio.async_write_submit_active=on
hdbparam --paramset fileio.async_write_submit_blocks=all
hdbparam --paramset fileio.async_read_submit=on

There are two additional parameters that are not available in HANA revision 80 but are available in revisions 90 and above.

hdbparam --paramset fileio.max_parallel_io_requests=256
hdbparam --paramset fileio.size_kernel_io_queue=1024


3. TCP Window Adjustment
These settings adjust the network receive and transmit buffers for all connections in the OS. They are raised from their defaults in order to increase performance on scale-out systems. To have these changes applied on boot, create the file /etc/init.d/after.local with the following lines:

#!/bin/bash
sysctl -w net.ipv4.tcp_rmem="8388608 8388608 8388608"
sysctl -w net.ipv4.tcp_wmem="8388608 8388608 8388608"

Make the file executable:

chmod 755 /etc/init.d/after.local

Lines 17-18 in /etc/sysctl.conf should be changed from

1 net.ipv4.tcp_rmem="4096 262144 8388608" 2 net.ipv4.tcp_wmem="4096 262144 8388608"

to

1 net.ipv4.tcp_rmem="8388608 8388608 8388608" 2 net.ipv4.tcp_wmem="8388608 8388608 8388608"

To temporarily apply the changes immediately without a reboot, run the following commands:

sysctl -w net.ipv4.tcp_rmem="8388608 8388608 8388608"
sysctl -w net.ipv4.tcp_wmem="8388608 8388608 8388608"

4. Linux I/O Scheduler Adjustment
The Linux I/O scheduler should be changed from the default mode (CFQ, or Completely Fair Queuing) to the noop mode. This change increases I/O performance on SAP HANA. To apply this configuration on boot, edit the /etc/init.d/ibm-saphana or /etc/init.d/lenovo-saphana file installed with the machine and insert the scheduler change into this script at line 30:

echo noop > ${i}/queue/scheduler

Before the change, lines 26-31 look like:

for i in /sys/block/sd* ; do
    if [ -d $i ]; then
        echo $QUEUESIZE > $i/queue/nr_requests
        echo $QUEUEDEPTH > $i/device/queue_depth
    fi
done

Afterwards, lines 26-32 look like:

for i in /sys/block/sd* ; do
    if [ -d $i ]; then
        echo $QUEUESIZE > $i/queue/nr_requests
        echo $QUEUEDEPTH > $i/device/queue_depth
        echo noop > ${i}/queue/scheduler
    fi
done


To temporarily apply the settings immediately without a reboot, run the following command for each disk entry (sda, sdb, etc.) in /sys/block/; a loop variant is sketched after the example.

echo noop > /sys/block/sda/queue/scheduler
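The loop below applies the same change to all SCSI disks at once. It is a sketch that assumes every /sys/block/sd* device belongs to the appliance storage, as on a standard installation:

    # Set the noop scheduler for all sd* devices in one go
    for i in /sys/block/sd*/queue/scheduler; do
        echo noop > "$i"
    done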


E Lenovo X6 Server MTM List & Model Overview

Starting with the support of the Intel Xeon IvyBridge EX family of processors, SAP has changed their naming of the models. Previously, SAP had named these "T-shirt" sizes of S, M, L, XL, etc. The new naming convention is purely based on the amount of memory each predefined configuration contains, for example 128, 256, 512, etc. Each of these servers is orderable with the proper components to fulfill the SAP pre-configured system sizes. The following table shows the SAP HANA T-shirt sizes to Machine Type Model (MTM) code mapping. The last x in the MTM is a placeholder for the region code the server was sold in, for example a U for the USA. While the machine type is 6241, the different models are shown below.

Chassis  CPUs  Memory  Usage       Model       Possible Model
4U       2     128GB   Standalone  AC32S128S   6241-AC3, -H2x¹, -HZx², -HUx³
4U       2     256GB   Standalone  AC32S256S   6241-AC3, -H3x¹, -HYx², -HTx³
4U       2     256GB   Scale-out   AC32S256C   6241-AC3
4U       2     384GB   Standalone  AC32S384S   6241-AC3
4U       2     512GB   Standalone  AC32S512S   6241-AC3, -H4x¹, -HXx², -HSx³
4U       2     512GB   Scale-out   AC32S512C   6241-AC3
4U       4     256GB   Standalone  AC34S256S   6241-AC3
4U       4     512GB   Standalone  AC34S512S   6241-AC3, -H5x¹, -HWx², -HRx³
4U       4     512GB   Scale-out   AC34S512C   6241-AC3
4U       4     768GB   Standalone  AC34S768S   6241-AC3
4U       4     1TB     Standalone  AC34S1024S  6241-AC3, -H6x¹, -HVx², -HQx³
4U       4     1TB     Scale-out   AC34S1024C  6241-AC3
4U       4     1.5TB   Standalone  AC34S1536S  6241-AC3
4U       4     2TB     Standalone  AC34S2048S  6241-AC3
4U       4     3TB     Standalone  AC34S3072S  6241-AC3
4U       4     4TB     Standalone  AC34S4096S  6241-AC3
4U       4     6TB     Standalone  AC34S6144S  6241-AC3

Table 57: Lenovo MTM Mapping & Model Overview
¹ Only IvyBridge processors with DDR3 DIMMs
² Only Haswell processors with DDR3 DIMMs
³ Only Haswell processors with DDR4 DIMMs


Chassis  CPUs  Memory  Usage       Model        Possible Model
8U       4     256GB   Standalone  AC44S256S    6241-AC4
8U       4     512GB   Standalone  AC44S512S    6241-AC4, -HBx¹, -HEx², -HHx³
8U       4     512GB   Scale-out   AC44S512C    6241-AC4
8U       4     768GB   Standalone  AC44S768S    6241-AC4
8U       4     1TB     Standalone  AC44S1024S   6241-AC4, -HCx¹, -HFx², -HIx³
8U       4     1TB     Scale-out   AC44S1024C   6241-AC4
8U       4     1.5TB   Standalone  AC44S1536S   6241-AC4
8U       4     2TB     Standalone  AC44S2048S   6241-AC4
8U       8     512GB   Standalone  AC48S512S    6241-AC4
8U       8     1TB     Scale-out   AC48S1024C   6241-AC4
8U       8     1.5TB   Standalone  AC48S1536S   6241-AC4
8U       8     2TB     Standalone  AC48S2048S   6241-AC4, -HDx¹, -HGx², -HJx³
8U       8     2TB     Scale-out   AC48S2048C   6241-AC4
8U       8     3TB     Standalone  AC48S3072S   6241-AC4
8U       8     3TB     Scale-out   AC48S3072C   6241-AC4
8U       8     4TB     Standalone  AC48S4096S   6241-AC4
8U       8     4TB     Scale-out   AC48S4096C   6241-AC4
8U       8     6TB     Standalone  AC48S6144S   6241-AC4
8U       8     6TB     Scale-out   AC48S6144C   6241-AC4
8U       8     8TB     Standalone  AC48S8192S   6241-AC4
8U       8     8TB     Scale-out   AC48S8192C   6241-AC4
8U       8     12TB    Standalone  AC48S12288S  6241-AC4
8U       8     12TB    Scale-out   AC48S12288C  6241-AC4

Table 58: Lenovo MTM Mapping & Model Overview
¹ Only IvyBridge processors with DDR3 DIMMs
² Only Haswell processors with DDR3 DIMMs
³ Only Haswell processors with DDR4 DIMMs

The model numbers follow this schema:
1. AC3/AC4 describes the server chassis: AC3 are 4 rack unit sized servers for up to 4 CPU books, AC4 servers are 8 rack unit sized servers for up to 8 CPU books.
2. 2S/4S/8S gives the number of installed CPU books and thereby the number of populated CPU sockets.
3. 128/256/... is the size of the installed RAM in GB.
4. S/C designates the intended usage: S for Standalone/Single Node or C for Cluster/Scale-out nodes.
These model numbers describe the current configuration of the server. A 6241-H2* is configured with 2 CPUs in a 4 socket chassis with 128GB RAM and will be recognized as an AC32S128S by the installation and any installed scripts. When upgrading this machine with an additional 128GB of RAM, the installation and the already installed scripts will show the model as AC32S256S, while the burned-in MTM will still show 6241-H2* or 6241-AC3.


F Frequently Asked Questions

Warning These FAQ entries are only valid for certain appliance models and versions. Do not apply the changes in this list until advised by either the support script or Lenovo support. The support script saphana-support-ibm.sh can detect various known problems in your appliance. In case such a problem is found, the support script will give an FAQ entry number; please follow only the instructions given in that particular entry. When in doubt please contact Lenovo support via SAP's OSS ticket system. Information on how to run the support script can be found in the Operations Guide, section 2.3 Basic System Check. Please always use the latest support script, which may detect new issues found after your appliance was installed. You can find the latest version attached to SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances.

F.1 FAQ #1: SAP HANA Memory Limits

Problem: If left unconfigured, each installed and running HANA instance may use up to 97% (90% in older HANA revisions) of the system's memory. If multiple unconfigured or misconfigured HANA systems are running on the same machine(s), "out of memory" situations may occur. In this case the so-called "OOM killer" of Linux is triggered, which terminates running processes at random and in most cases kills SAP HANA or GPFS first, leading to a service interruption. An unconfigured HANA system is a system lacking a global_allocation_limit setting in the HANA system's global.ini file; misconfigured SAP HANA systems are multiple systems running at the same time with a combined memory limit over 90% of the physically installed memory.
Solution: Please configure the global allocation limit for all systems running at the same time. This can be done by setting the global_allocation_limit parameter in the systems' global.ini configuration files. Please calculate the combined memory allocation for HANA so that at least 25GB remain free for other programs, and use only the physically installed memory for your calculation. More information on the parameter global_allocation_limit can be found in the "HANA Administration Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described there.
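As a hedged illustration of such a setting (the value is an example only; calculate the limit for your installation as described above), the limit is given in MB in the [memorymanager] section of the system's global.ini:

    [memorymanager]
    # example only: limit this HANA system to 400 GB (409600 MB)
    global_allocation_limit = 409600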

F.2 FAQ #2: GPFS parameter readReplicaPolicy

Problem: Older cluster installations do not have the GPFS parameter "readReplicaPolicy" set to "local", which may improve performance in certain cases. Newer cluster installations have this value set; single nodes are not affected by this parameter at all. It is recommended to configure this value.
Solution: Execute the following command on any cluster node at any time:

# mmchconfig readReplicaPolicy=local

This can be done during normal operation. The change becomes effective immediately for the whole GPFS cluster and is persistent across reboots.
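To verify that the parameter is active, a quick sketch:

    # Should list: readReplicaPolicy local
    mmlsconfig | grep -i readreplicapolicy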

F.3 FAQ #3: SAP HANA Memory Limit on XS sized Machines

Problem: For a general description of the SAP HANA memory limit see Appendix F.1: FAQ #1: SAP HANA Memory Limits on page 216. XS sized servers have only 128GB RAM installed, of which even a single SAP HANA system will use up to 93.5%, equaling 119GB (older revisions of HANA used 90%, i.e. 115GB), if no lower memory limit is configured. This leaves too little memory for other processes, which may trigger out-of-memory situations causing crashes.
Solution: Please configure the global allocation limit for the installed SAP HANA system to a more appropriate value. The recommended value is 112GB if the GPFS pagepool size is set to 4GB (see FAQ #12: GPFS pagepool should be set to 4GB) and 100GB or less if the GPFS pagepool is set to 16GB. If multiple systems are running at the same time, please calculate the total memory allocation for HANA so that the sum does not exceed the recommended value, using only the physically installed memory for your calculation. More information on the parameter global_allocation_limit can be found in the "HANA Administration Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described there.

F.4 FAQ #4: Overlapping NSDs

Problem: Under some rare conditions, single node SSD or XS/S gen 2 models may be installed with overlapping NSDs. Overlapping means that the whole drive (e.g. /dev/sdb) as well as a partition on the same device (e.g. /dev/sdb2) may be configured as NSDs in GPFS. As GPFS writes data to both NSDs, each NSD will overwrite and corrupt data on the other. At some point the whole-device NSD will overwrite the partition table, the partition NSD is lost and GPFS will fail; this is the most common situation where the problem is noticed. Consider any data stored in /sapmnt to be corrupted, even if the file system check finds no errors.
Solution: The only solution is to reinstall the appliance from scratch. To prevent installing with the same error again, the single node installation must be completed in phase 2 of the guided installation. Do not deselect "Single Node Installation".

F.5 FAQ #5: Missing RPMs

Problem: An upgrade of SAP HANA or another SAP software component fails because of missing dependencies. As some of these package dependencies were added by SAP HANA after your system was initially installed, you may install the missing packages and still receive full support for the Lenovo Systems solution. If you no longer have the SLES for SAP DVD or RHEL DVD (depending on which OS you are using) that was delivered with your system, you may obtain it again from the SUSE Customer Center or Red Hat, respectively.
Solution: Ensure that the packages listed below are installed on your appliance.
• SUSE Linux Enterprise Server for SAP Applications:
– libuuid
– gtk2 - Added for HANA Developer Studio
– java-1_6_0-ibm - Added for HANA Developer Studio
– libicu - Added since revision 48 (SPS04)
– mozilla-xulrunner192-* - Added for HANA Developer Studio
– ntp
– sudo
– syslog-ng


– tcsh
– libssh2-1 - Added since revision 53 (SPS05)
– expect - Added since revision 53 (SPS05)
– autoyast2-installation - Added since revision 53 (SPS05)
– yast2-ncurses - Added since revision 53 (SPS05)
• Red Hat Enterprise Linux: at the moment there are no known packages that have to be installed additionally.
Missing packages can be installed from the SLES for SAP DVD shipped with your appliance using the following instructions. It is possible to add the DVD that was included in your appliance installation as a repository and install the necessary RPM packages from there. First check whether the SUSE Linux Enterprise Server DVD is already added as a repository:

# zypper repos

#  | Alias          | Name           | Enabled | Refresh
---+----------------+----------------+---------+--------
 1 | SUSE-Linux-... | SUSE-Linux-... | Yes     | No

If it does not exist, please place the DVD in the drive (or add it via the Virtual Media Manager) and add it as a repository. This example uses the SLES for SAP 11 SP1 media.

# zypper addrepo --type yast2 --gpgcheck --no-keep-packages \
  --refresh --check dvd:///?devices=/dev/sr1 \
  "SUSE-Linux-Enterprise-Server-11-SP1_11.1.1"

This is a changeable read-only media (CD/DVD), disabling autorefresh.
Adding repository 'SLES-for-SAP-Applications 11.1.1' [done]
Repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: dvd:///?devices=/dev/sr1

Reading data from 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' media
Retrieving repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' metadata [done]
Building repository 'SUSE-Linux-Enterprise-Server-11-SP1_11.1.1' cache [done]

The drawback of this solution is that you always have to insert the DVD into the DVD drive or mount it via VMM or KVM. Another possibility is to copy the DVD to a local repository and add this repository to zypper. First find out whether the existing repository is a DVD repository:

# zypper lr -u

# | Alias                                            | Name                                             | Enabled | Refresh | URI
--+--------------------------------------------------+--------------------------------------------------+---------+---------+-------------------------
1 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138 | Yes     | No      | cd:///?devices=/dev/sr0


Copy the DVD to a local directory:

# cp -r /media/SLES-11-SP3-DVD*/* /var/tmp/install/sles11/ISO/

Register the directory as a repository with zypper:

# zypper addrepo --type yast2 --gpgcheck --no-keep-packages -f \
  file:///var/tmp/install/sles11/ISO/ "SUSE-Linux-Enterprise-Server-11-SP3"

Adding repository 'SUSE-Linux-Enterprise-Server-11-SP3' [done]
Repository 'SUSE-Linux-Enterprise-Server-11-SP3' successfully added
Enabled: Yes
Autorefresh: Yes
GPG check: Yes
URI: file:/var/tmp/install/sles11/ISO/

For verification you can list the repositories again. You should see output similar to this:

# zypper lr -u

# | Alias                                            | Name                                             | Enabled | Refresh | URI
--+--------------------------------------------------+--------------------------------------------------+---------+---------+-----------------------------------
1 | SUSE-Linux-Enterprise-Server-11-SP3              | SUSE-Linux-Enterprise-Server-11-SP3              | Yes     | Yes     | file:/var/tmp/install/sles11/ISO/
2 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138 | Yes     | No      | cd:///?devices=/dev/sr0

Then search to ensure that the package can be found. This example searches for libssh.

# zypper search libssh

Loading repository data...
Reading installed packages...

S | Name      | Summary                             | Type
--+-----------+-------------------------------------+--------
  | libssh2-1 | A library implementing the SSH2 ... | package

Then install the package:

# zypper install libssh2-1

Loading repository data...
Reading installed packages...
Resolving package dependencies...
:
:
1 new package to install.
Overall download size: 55.0 KiB. After the operation, additional 144.0 KiB will be used.
Continue? [y/n/?] (y):
Retrieving package libssh2-1-0.19.0+20080814-2.16.1.x86_64 (1/1), 55.0 KiB (144.0 KiB unpacked)
Retrieving: libssh2-1-0.19.0+20080814-2.16.1.x86_64.rpm [done]
Installing: libssh2-1-0.19.0+20080814-2.16.1 [done]


F.6 FAQ #6: CPU Governor set to ondemand

Problem: Linux uses a power-saving technology called "CPU governors" to control CPU throttling and power consumption. By default Linux uses the governor "ondemand", which dynamically throttles CPUs up and down depending on CPU load. SAP advises to use the governor "performance", as the ondemand governor will impact HANA performance due to too-slow CPU upscaling. Since appliance version 1.5.53-5 (that is, SLES for SAP 11 SP2 based appliances) we changed the CPU governor to performance. In case of an upgrade you also need to change the governor setting. If you are still running a SLES for SAP 11 SP1 based appliance, you may also change this setting to trade power saving for performance. This performance boost was not quantified by the development team.
Solution: On all nodes append the following lines to the file /etc/rc.d/boot.local:

bios_vendor=$(/usr/sbin/dmidecode -s bios-vendor)
# "Phoenix Technologies LTD" means we are running in a VM and governors are not available
if [ $? -eq 0 -a ! -z "${bios_vendor}" -a "${bios_vendor}" != "Phoenix Technologies LTD" ]; then
    /sbin/modprobe acpi_cpufreq
    for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    do
        echo performance > $i
    done
fi

The setting will change on the next reboot. You can also safely change the governor settings immediately by executing the same lines at the shell: copy and paste all the lines at once, or type them one by one.
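To check which governor is currently active, a quick sketch (cpu0 stands for any CPU):

    # Expected output after the change: performance
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor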

F.7 FAQ #7: No disk space left bug (Bug IV33610)

Problem: Starting HANA fails due to insufficient disk space. The following error message will be found in the indexserver or nameserver trace:

Error during asynchronous file transfer, rc=28: No space left on device.

Using the command 'df' will show that there is still disk space left. This problem is due to a bug in GPFS versions between 3.4.0-12 and 3.4.0-20 which causes GPFS to step into a read-only mode. See SAP Note 1846872 – "No space left on device" error reported from HANA.
Solution: Make sure to shut down all HANA nodes, either by issuing the shutdown command from the studio or by logging in with ssh as the sidadm user. Then run:

HDB info

to see if there are any HANA processes still running. If there are, run

kill -9 proc_pid

to shut them down, one by one. Download and apply GPFS version 3.4.0-23. Refer to section 13.5: Updating GPFS on page 167 for information about how to upgrade GPFS. Note It is recommended that you consider upgrading your GPFS version from 3.4 to 3.5, as support for GPFS 3.4 has been discontinued by IBM.


SAP highly recommends that you run the uniqueChecker.py script after patching GPFS to make sure that your database is consistent.

F.8 FAQ #8: Setting C-States

Problem: Poor performance of SAP HANA due to Intel processor settings.
Solution: As recommended in SAP Notes 1824819 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP2 and 1954788 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3, and additionally described in the IBM RETAIN Tip H207000³⁵ - Linux Ignores C-State Settings in Unified Extensible Firmware Interface (UEFI), the control ('C') states of the Intel processor should be turned off for the most reliable performance of SAP HANA. By default C-states are enabled in the UEFI because we set the processor to Customer Mode. With C-states turned on you might see performance degradation with SAP HANA. We recommend turning off the processor C-states using the Linux kernel boot parameter:

processor.max_cstate=0

The Linux kernel used by SAP HANA includes a built-in driver ('intel_idle') which ignores any C-state limits imposed by the Basic Input/Output System (BIOS) / Unified Extensible Firmware Interface (UEFI) when it is active. This driver may cause issues by enabling C-states even though they are disabled in the BIOS or UEFI. This can cause minor latency as the CPUs transition out of a C-state into a running state. This is not the preferred state for the SAP HANA appliance and must be changed. To prevent the 'intel_idle' driver from ignoring BIOS or UEFI settings for C-states, add the following start parameter to the kernel's boot loader configuration file:

intel_idle.max_cstate=0

Append both parameters to the end of the kernel command line of your boot loader (/boot/grub/menu.lst) and reboot the server.
Warning For clustered configurations, this change needs to be made on each server of the cluster. Only make this change when all servers can be rebooted at once, or when you have an active standby node to take over the HANA services of the rebooting system. Do not reboot more servers than there are active standby nodes.
For further information please refer to the SUSE knowledge base article.

F.9 FAQ #9: ServeRAID M5120 RAID Adapter FW Issues

Problem: After the initial release of the new X6-based servers (x3850 X6, x3950 X6) a serious issue was found in various firmware versions of the ServeRAID M5120 RAID adapter which can trigger continuous controller resets. This happens only under heavy load, and each controller reset may cause a service interruption. Certain firmware versions do not exhibit this issue, but those versions show severely degraded I/O performance. Only servers using the ServeRAID M5120 controller for attaching an external SAS enclosure are affected. Future appliance versions will have the workaround for the controller reset issue preinstalled, while the performance issue can only be solved by an up- or downgrade to an unaffected firmware version.

³⁵ http://www.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5091901


Non-exhaustive list of known affected firmware versions:

Issue                Affected versions
Controller resets    23.7.1-0010, 23.12.0-0011, 23.12.0-0016, 23.12.0-0019
Lowered performance  23.16.0-0018, 23.16.0-0027

Table 59: ServeRAID M5120 Firmware Issues

Solution: The current recommendation is to use firmware version 23.22.0-0024 (or newer, if listed as stable by the Lenovo SAP HANA team) and to change the following configuration value in the installed OS. Both can be done after installation.

F.9.1 Changing Queue Depth

On the installed appliance, please edit /etc/init.d/ibm-saphana and change the lines

1 function start() {
2     QUEUESIZE=1024
3     for i in /sys/block/sd* ; do
4         if [ -d $i ]; then
5             echo $QUEUESIZE > $i/queue/nr_requests
6         fi
7     done

to this version (if not already set)

1 function start() {
2     QUEUESIZE=1024
3     QUEUEDEPTH=250
4     for i in /sys/block/sd* ; do
5         if [ -d $i ]; then
6             echo $QUEUESIZE > $i/queue/nr_requests
7             echo $QUEUEDEPTH > $i/device/queue_depth
8         fi
9     done

by inserting lines 3 & 7. The new settings will be set on the next reboot or by calling

# service ibm-saphana start

Please ignore any output.

F.9.2 Use recommended Firmware version

1. Check which FW Package Build is installed on all M5120 RAID controllers:

# /opt/MegaRAID/storcli/storcli64 -AdpAllInfo -aAll | grep 'M5120' -B 5 -A 3

Adapter #1

================
Versions
================
Product Name    : ServeRAID M5120
Serial No       : xxxxxxxxxx
FW Package Build: 23.22.0-0024

Currently, version 23.22.0-0024 is recommended. Download the 23.22.0-0024 FW package for ServeRAID 5100 SAS/SATA adapters via IBM Fix Central or use the following direct link: https://ibm.biz/BdRatD.
2. Make the downloaded file executable and then run it:

    chmod +x ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin
    ./ibm_fw_sraidmr_5100-23.22.0-0024_linux_32-64.bin -s

3. Please reboot the server after updating all M5120 controllers.
4. After the reboot, check that the queue depth is set to 250 for all devices on the M5120 RAID controllers:

for dev in $(lsscsi | grep -i m5120 | grep -E -o '/dev/sd[a-z]+' | cut -d '/' -f3) ; do
    cat /sys/block/${dev}/device/queue_depth
done

F.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO

With GPFS version 3.5.0-13 the new GPFS parameter enableLinuxReplicatedAIO was introduced. Please note the following:
• Single node installations: single node installations are not affected by this parameter; it can be set to "yes" or "no".
• Cluster installations:
– GPFS 3.5.0-13 to 3.5.0-15: the parameter must be set to "no". When upgrading to GPFS 3.5.0-16 or higher you have to manually set the value to "yes".
Warning Instead of setting the parameter to "no" we highly recommend upgrading GPFS to 3.5.0-16 or higher.
– GPFS 3.5.0-16 or higher: the parameter must be set to "yes".
• DR cluster installations: the parameter must be set to "yes".
The support script (saphana-support-ibm.sh) checks whether the parameter is set correctly. If it is not, adjust the setting with the command matching your GPFS level:

# mmchconfig enableLinuxReplicatedAIO=no     (GPFS 3.5.0-13 to 3.5.0-15, clusters only)
# mmchconfig enableLinuxReplicatedAIO=yes    (GPFS 3.5.0-16 or higher, and DR clusters)

F.11 FAQ #11: GPFS NSD on Devices with GPT Labels

Problem: In some very rare occasions GPFS NSDs may be created on devices with a GUID Partition Table (GPT). When the NSD is created, parts of the primary GPT header are overwritten. Newer UEFI firmware releases offer an option to repair damaged GPTs, and if it is activated the UEFI may try to recover the primary GPT from the backup copy during boot-up. This destroys the NSD header, and in the case of single nodes this leads to the loss of all data in the GPFS file system. To cause this issue, the following prerequisites must all apply:


• A storage device used as an NSD in a GPFS file system must have had a GPT before the NSD was created. This can only happen if the drive or RAID array was used before and has not been wiped or reassembled. As part of the HANA appliance, GPT labels on non-OS disks are only created as part of the mixed eX5/X6 clusters. If a system was only used for the HANA appliance, this cannot occur unless there was a misconfiguration.
• GPFS 3.4 or GPFS 3.5 was used when the NSD and the file system were created, either during installation or manually after installation, regardless of the currently running GPFS version. GPFS 4.1 uses protective partition tables to prevent this issue when creating new NSDs.
• A UEFI version with GPT recovery functionality is either installed or an upgrade to such a version is planned. Further risk comes from the UEFI upgrade, as these new UEFI versions enable the GPT recovery by default.
The probability for this combination is very low.
Solution: If the support script pointed you to this FAQ entry, please contact Lenovo support via SAP's OSS ticket system and put the message on the queue BC-OP-LNX-IBM. Please prepare a support script dump as described in SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances. Lenovo support will then devise a solution for your installation. When the ASU tool is installed, run the command

# /opt/lenovo/toolscenter/asu/asu64 show | grep -i gpt

If the Lenovo Systems Solution for SAP HANA Platform Edition was installed with an ISO image below version 1.9.96-13, the ASU tool will reside here:

# /opt/ibm/toolscenter/asu/asu64 show | grep -i gpt

The setting has various names, but any variable named GPT and Recovery should be set to "None". If it is set to "Automatic", do not reboot the system. If there is no such setting, do not upgrade the UEFI firmware until the GPTs have been cleared.

F.12 FAQ #12: GPFS pagepool should be set to 4GB

Problem: GPFS is configured to use 16GB of RAM for its so-called pagepool. Recent tests showed that the size of this pagepool can safely be reduced to 4GB, which yields 12GB of memory for other running processes. It is therefore recommended to change this parameter on all appliance installations and versions. Updated versions of the support script warn if the pagepool size is not 4GB and refer to this FAQ entry.
Solution: Please change the pagepool size to 4GB. Execute

# mmchconfig pagepool=4G

to change the setting cluster-wide. This command needs to be run only once, on single node as well as on clustered installations. The pagepool is allocated during the startup of GPFS, so a GPFS restart is required to activate the new setting. Please stop HANA and any processes that access GPFS file systems before restarting GPFS. To restart GPFS execute

# mmshutdown
# mmstartup

In clusters all nodes need to be restarted. You can do this one node at a time, or restart all nodes at once by adding the parameter -a to both commands. In the latter case please make sure no program is accessing GPFS file systems on any node.


To verify the configured pagepool size run

# mmlsconfig | grep pagepool

To verify the current active pagepool size run

# mmdiag --config

and search for the pagepool line. This value is shown in bytes.

F.13 FAQ #13: Limit Page Cache Pool to 4GB (SAP Note 1557506)

Problem: SLES offers an option to limit the size of the page cache pool. By default the page cache size is unlimited. SAP recommends in SAP Note 1557506 – Linux paging improvements to limit this page cache to 4GB of RAM. This may improve resilience against out-of-memory events. Future appliance software versions will set this value by default. RHEL currently does not offer this option.
Solution: Add the following line to the file /etc/sysctl.conf:

vm.pagecache_limit_mb = 4096

and run

# sysctl -e -p

to activate this value without a reboot. This change can be done without a downtime.

F.14 FAQ #14: restripeOnDiskFailure and start-disks-on-startup

GPFS 3.5 and higher comes with the new parameter restripeOnDiskFailure. The GPFS callback script start-disks-on-startup automatically installed on the Lenovo Solution is superseded by this parameter: IBM GPFS NSDs are automatically started on startup when restripeOnDiskFailure is activated. On DR cluster installations, neither the callback script nor restripeOnDiskFailure should be activated.
Solution: To enable the new parameter on all nodes in the cluster, execute:

# mmchconfig restripeOnDiskFailure=yes -N all

To remove the now unnecessary callback script start-disks-on-startup execute:

# mmdelcallback start-disks-on-startup
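To verify both steps afterwards, a sketch using standard GPFS commands:

    # The parameter should report "yes", and the callback list
    # should no longer contain start-disks-on-startup
    mmlsconfig | grep -i restripeondiskfailure
    mmlscallback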


G References

G.1 Lenovo References

Lenovo Solution Documentation
• Lenovo Systems Solution for SAP HANA Quick Start Guide
• Lenovo Systems X6 Solution for SAP HANA Implementation Guide
• SAP Note 1650046 – Lenovo Systems X6 Solution for SAP HANA Operations Guide
Lenovo System x Documentation
• IBM X6 Portfolio Overview Redbook
• IBM eX5 Portfolio Overview Redbook
• IBM System Storage EXP2500 Express Specifications
• Lenovo RackSwitch G8052 Redbook
• Lenovo RackSwitch G8124E Redbook
• Lenovo RackSwitch G8264 Redbook
• Lenovo RackSwitch G8272 Redbook
• Lenovo RackSwitch G8296 Redbook
• LNVO-ASU – Lenovo Advanced Settings Utility (ASU)
• LNVO-DSA – Lenovo Dynamic System Analysis (DSA)
• MIGR-5090923 – IBM SSD Wear Gauge CLI utility

G.2 IBM References

IBM General Parallel File System Documentation
• IBM General Parallel File System Documentation
• GPFS FAQ (with supported OS levels)
• GPFS Service on IBM Fix Central (IBM ID required) for GPFS 3.5.0
• GPFS Books
– IBM developerWorks Article: GPFS Quick Start Guide for Linux
• GPFS Support in IBM Support Portal (IBM ID required)

G.3 SAP General Help (SAP Service Marketplace ID required)

• SAP Service Marketplace
• SAP Help Portal
• SAP HANA Ramp-Up Knowledge Transfer Learning Maps
• SAP HANA Software Download at SAP Software Download Center → Support Packages and Patches / Installations and Upgrades → A–Z Index → H (for SAP HANA)


G.4 SAP Notes (SAP Service Marketplace ID required)

Generic SAP Notes about SAP HANA
• SAP Note 1730996 – Unrecommended external software and software versions
• SAP Note 1730929 – Using external tools in an SAP HANA appliance
• SAP Note 1803039 – Statistics server CHECK_HOSTS_CPU intern. error when restart
SAP Notes about the Lenovo Systems Solution for SAP HANA
• SAP Note 1650046 – Lenovo SAP HANA Appliance Operations Guide
• SAP Note 1661146 – Lenovo Check Tool for SAP HANA appliances
• SAP Note 1880960 – Lenovo Systems Solution for SAP HANA Platform Edition Customer Maintenance
SAP Notes regarding SAP HANA
• SAP Note 1523337 – SAP HANA Database 1.00 - Central Note
• SAP Note 2159166 – SAP HANA SPS 09 Database Revision 96
• SAP Note 1681092 – Multiple SAP HANA databases on one SAP HANA system
• SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery
• SAP Note 1780950 – Connection problems due to host name resolution
• SAP Note 1829651 – Time zone settings in HANA scale out landscapes
• SAP Note 1743225 – Potential failure of connections with scale out nodes
• SAP Note 1888072 – SAP HANA DB: Indexserver crash in __strcmp_sse42
• SAP Note 1890444 – Slow HANA system due to CPU power save mode
SAP Notes regarding SUSE Linux Enterprise Server for SAP Applications
• SAP Note 784391 – SAP support terms and 3rd-party Linux kernel drivers
• SAP Note 1310037 – SUSE LINUX Enterprise Server 11: Installation notes
• SAP Note 1954788 – SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3
• SAP Note 618104 – Linux SAP System Information Tool
• SAP Note 1056161 – SUSE Priority Support for SAP applications
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11
SAP Notes regarding Red Hat Enterprise Linux
• SAP Note 2013638 – SAP HANA DB: Recommended OS settings for RHEL 6.5
• SAP Note 2136965 – SAP HANA DB: Recommended OS settings for RHEL 6.6
• SAP Note 2001528 – Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11
SAP Notes regarding IBM GPFS
• SAP Note 1084263 – Cluster File System: Use of GPFS on Linux
• SAP Note 1902281 – GPFS 3.5 incompatibility with Linux kernel 3.0.58 and higher


• SAP Note 2051052 – GPFS "No space left on device" when df shows free space
SAP Notes regarding Virtualization
• SAP Note 1122387 – Linux: SAP Support in virtualized environments

G.5 Novell SUSE Linux Enterprise Server References

Currently Supported
• SUSE Linux Enterprise Server 11 SP3 Release Notes
• SUSE Linux Enterprise Server for SAP Applications 11 SP3 Media

G.6 Red Hat Enterprise Linux References (Red Hat account required)

• Red Hat Enterprise Linux 6: Why can I not install or start SAP HANA after a system upgrade?
• Red Hat Enterprise Linux 6: Red Hat Enterprise Linux for SAP HANA: system updates and supportability


H Changelog

This section describes the changes that have been made within a release version since it was published.
