HPE Hyper Converged 380 Installation Guide


Abstract

This document describes how to install and configure the HPE Hyper Converged 380 appliance and expansion nodes. This document is for the person who installs, administers, and troubleshoots servers and is skilled in network configuration and virtual environments. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards in products with hazardous energy levels.

Part Number: 860192-004
Published: November 2016
Edition: 4

© Copyright 2016 Hewlett Packard Enterprise Development LP

The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website. Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

VMware®, vCenter™, and vSphere™ are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. NVIDIA, the NVIDIA logo, and NVIDIA Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries.

Contents

Introduction
  Product introduction
  Purpose of this guide
  Supported versions

Before you begin
  Networking requirements
    Flat or VLAN-tagged networks
    IPv6 enablement at switch level
    IP address assignments
  VMware vCenter
  HPE iLO 4

Preinstallation worksheets
  Appliance networks
  vCenter requirements
    Local vCenter
    Remote vCenter
  Settings
  iLO addresses
    iLO IPv4 address requirements
    iLO address worksheet
  Default user names and passwords
  CloudSystem

Appliance components
  Front panel
  Rear panel components
    General virtualization and VDI rear panel components
    CloudSystem rear panel components

Installing the HC 380 appliance nodes
  Optimum environment
    Space and airflow requirements
    Temperature requirements
    Power requirements
    Connecting a DC power cable to a DC power source
  Rack warnings
  Installing the node into the rack
  Cabling the system
    General virtualization configuration (all 1 GbE appliance)
    General virtualization (10GbE appliance) and VDI configurations
    CloudSystem configuration

Configuring the system
  Configuring the HC 380 system
  Configuring the network switches
  Powering on all nodes
  Configuring a laptop or workstation to access the system
  Testing network connectivity
  Licensing and installing NVIDIA Tesla M60 GPUs
    Installing the NVIDIA GPU mode change utility on the vSphere host
    Verifying the NVIDIA GPU mode
    Installing NVIDIA GRID Manager Software on vSphere
  Installing HPE HC StoreVirtual Status Provider on the remote vCenter server
  Configuring the HC 380 using OneView InstantOn
    Launching OneView InstantOn
    Completing the Introduction screen
    Completing the vCenter screen
    Completing the Health screen
    Completing the IP Assignments screen
    Completing the Credentials screen
    Completing the Settings screen
    Completing the Review Configuration screen
    Completing the Next Steps screen
  Quick-reset
    Quick reset guidelines
    Performing a Quickreset
  Installing HC 380 Management UI

Completing the initial HC 380 Management UI configuration
  Performing the initial HC 380 Management UI setup
  Configuring LDAP or Active Directory
  User roles system access
  Creating datastores

Installing CloudSystem
  Limitations
  Running the installation utility
    Overview
    Downloading the MySQL JDBC driver
    Enabling SSH on each ESXi server
    Running the installation utility
    Disabling SSH on each ESXi server
  Upgrading CloudSystem
    Prerequisites for upgrading CloudSystem
    Upgrading CloudSystem 9.0 to 9.01 and 9.02
  Tenant and Provider Networking
    Tenant
    Provider
    External
  Validating CloudSystem
    Create router

Expanding the system
  Prerequisites to expanding the HC 380 system
  Enabling VMware Enhanced vMotion Compatibility support
  Expanding the HC 380 using OneView InstantOn
  Expanding CloudSystem
    Compute node expansion
      Pre-expansion preparation
      Configuring the new node in the CMC
      Configuring the volume in vCenter
      Configure the virtual distributed switches of vCenter
      Activate the compute host cluster (compute node only)

Troubleshooting
  Troubleshooting OneView InstantOn
    Certificate error when launching vCenter web client
    HC 380 nodes are not discovered
    OneView InstantOn hangs during deployment
    vCenter license status on Health screen is red
    OneView InstantOn progress indicator appears to hang
    "Invalid username and password" error appears when you specify a local vCenter
    Application performance on management VM might decrease
    "The page cannot be displayed" error message appears
    OneView InstantOn hangs with error message "0:02 Adding SAN to vCenter"
  Troubleshooting CloudSystem
    Connection error to vCenter server
    Issue tagging DCM VLAN portgroups
    Unable to migrate host to management cluster
    Trouble setting up storage
    Distributed switches were not created as expected
    Foundation or Enterprise zip files on the datastore supplied by the factory image are not found
    Foundation or Enterprise zip files can not be unzipped
    Storage is not available for OVA images
    Issues creating first management appliance
    CloudSystem was not deployed successfully
    Could not update the hpcs-data* distributed switch
    Could not register vCenter with CloudSystem
    Could not activate compute nodes
    Could not create Tenant VLAN Segment Ranges
    Could not create Tenant and Provider VLAN Networks
    Could not add Subnets to Networks
    Could not create router
    Could not update to 9.02
    Could not update passwords
    VSA volumes did not stabilize in 10 minutes…cannot continue
    VM did not power off
    Problem finding original vCenter cluster
    Could not find the VSA VM on host
    Deploy CloudSystem

Appendix A: Network switch configuration
  Hewlett Packard Enterprise switches
    Network cabling
    Configuring the switches
    Connecting to the serial console port
    IRF configuration
  Cisco Nexus networking
    Network cabling
    Configuring the switches
    Validating the switch configuration
  Uplink into existing network infrastructure
  Configuration worksheet

Appendix B: CloudSystem 9.0 Management Host Networking

Appendix C: CloudSystem 9.0 Compute Host Networking

Appendix D: CloudSystem 9.0 Consoles

Appendix E: CloudSystem Network Diagram

Appendix F: Remote vCenter setup

Appendix G: Management group quorum consideration

Appendix H: IP addresses for sample cluster
  ESXi management network IP addresses worksheet
  vSphere vMotion network IP addresses worksheet
  Storage network IP addresses worksheet
  CloudSystem network IP addresses worksheet

Specifications
  HC 380 specifications

Support and other resources
  Accessing Hewlett Packard Enterprise Support
  Accessing updates
  Customer self repair
  Remote support
  Warranty information
  Regulatory information
  Documentation feedback

Acronyms and abbreviations

Introduction

Product introduction

The Hyper Converged 380 system is a virtualization appliance that combines compute and storage resources in the same chassis. It is designed to be deployed easily and to manage a variety of virtualized workloads in medium-sized businesses and enterprises. The system is available in three workload configurations:
• General virtualization — supports general-purpose virtualization workloads.
• HPE Helion CloudSystem — an open and fully integrated solution delivering automation, orchestration, and control across multiple clouds.
• Virtual Desktop Infrastructure (VDI) — supports specific VDI workloads.

Purpose of this guide

All hardware comes preintegrated and ready to connect to a network switch. All software, with the exception of the VDI-specific application, comes preloaded, enabling a simple installation. This guide contains information for installing and configuring the appliance and adding expansion nodes. Use the instructions and guidelines in this guide to perform the following tasks.
• Initial installation and deployment tasks
  ◦ Plan for the installation by using the preinstallation worksheets to collect the required information
  ◦ Install the hardware into your datacenter environment
  ◦ Connect the appliance to your network and connect to the system
  ◦ Deploy the system using the HPE OneView InstantOn configuration utility
  ◦ Install HC 380 Management UI
  ◦ Complete the initial configuration of HC 380 Management UI
  ◦ Install CloudSystem (optional)
• Adding HC 380 expansion nodes
  ◦ Expand the system

NOTE: For information about installing the VDI configuration, see HPE Reference Architecture for VMware Horizon (with View) 6.2.2 on HPE Hyper Converged 380 on the Hewlett Packard Enterprise website.

Supported versions

The HC 380 ships with a supported set of software versions that comply with the HC 380 firmware and software compatibility matrix. Over time, the matrix may be updated to support newer versions. To be eligible for solution-level support, customers must keep the HC 380 in compliance with the HC 380 compatibility matrix. For more information, see the HPE Hyper Converged 380 Firmware and Software Compatibility Matrix on the Hewlett Packard Enterprise website.

NOTE: Any expansion node that you add to the HC 380 cluster must be the same version as the existing HC 380 cluster. If you purchase an expansion node that is a different version from the cluster, perform either of the following:
• Upgrade the cluster
• Downgrade the node to match the system using the USB reset feature
To verify the version of the deployed cluster, view the version that is displayed on the splash screen that appears when you launch the HC 380 Management UI. For more information about upgrading the cluster or using the USB reset feature, see the HPE Hyper Converged 380 User Guide at http://www.hpe.com/support/hc380ugeen.

Before you begin

The HC 380 installation process is designed for an IT specialist who is familiar with computer hardware and software concepts and virtual machine networking. You are encouraged to read through this document and familiarize yourself with each of the steps before you begin installation. You need the following items to complete the installation successfully:
• Network switches and cables that meet the following requirements:
  ◦ 1Gb connections for each of the iLO and 1Gb LOM ports on each node
  ◦ 1Gb or 10Gb connections for the FlexLOM ports on each node
  ◦ 1Gb and 10Gb connections that are IPv6 capable and enabled
• Appropriate power and rack space for the HC 380 nodes and any other hardware that may be part of the environment.
• A laptop or other computer that can be cabled directly to a node to begin configuration. The instructions assume that you are using a Windows-based computer with Remote Desktop Services installed.
• VMware vSphere Enterprise or Enterprise Plus license

Networking requirements

Planning and executing the network installation is the single most important factor in a successful installation. Whether you are integrating into an existing infrastructure or isolating the appliance from other resources, you must consider the initial appliance installation and plan for expansion.

IMPORTANT: The networking choices made during the initial installation may not be reversible at a later time without a complete reinstallation.

Flat or VLAN-tagged networks

The HC 380 appliance creates and utilizes different internal networks to segregate traffic, including ESXi management, vMotion, and storage. These networks may be configured in either a "flat" or VLAN-tagged network topology, depending on the workload configuration and your specific network requirements. Here are the options based on workload type:
• General virtualization: Flat or VLAN-tagged network
• CloudSystem: VLAN-tagged network only
• VDI: Flat or VLAN-tagged network
If you have a choice and are unsure about which network type to use, consult your network administrator.

IPv6 enablement at switch level

The HC 380 requires that the 1Gb and 10Gb switches be IPv6 enabled. Most switches have IPv6 enabled by default, but some companies may explicitly disable IPv6 link-local by setting up access control lists (ACLs) or performing other IT functions. If IPv6 is not enabled, the HC 380 nodes are not discovered during the installation process, and the installation and deployment will not complete.
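One quick, informal way to confirm that IPv6 link-local traffic is passing through a switch before you start discovery is to ping the all-nodes link-local multicast address (ff02::1) from a host attached to the same VLAN and check whether multiple neighbors answer. The sketch below shows that idea in Python; the interface identifier and the exact ping flags differ by operating system and are assumptions, not values from this guide.

```python
import platform
import subprocess

def ping_ipv6_all_nodes(interface: str, count: int = 3) -> str:
    """Ping the IPv6 all-nodes link-local multicast address (ff02::1) on the
    given interface and return the raw ping output. Replies from several
    neighbors suggest IPv6 link-local is working on that network segment."""
    if platform.system() == "Windows":
        # On Windows, the zone is the numeric interface index, for example "ff02::1%12".
        cmd = ["ping", "-6", "-n", str(count), f"ff02::1%{interface}"]
    else:
        # On Linux (iputils), the zone is the interface name, for example "ff02::1%eth0".
        cmd = ["ping", "-6", "-c", str(count), f"ff02::1%{interface}"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    # Replace "eth0" (or a Windows interface index) with your actual interface.
    print(ping_ipv6_all_nodes("eth0"))
```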

IP address assignments

Use the topics in this section to plan your appliance networks and determine the IP addresses that are required for your installation.

Note the following points about the IP address assignments:
• The appliance uses a private IPv4 network (192.168.42.0/24) for internal system communications. This network cannot be used by other devices sharing the same network.
• If you plan to expand the number of nodes in the future, Hewlett Packard Enterprise recommends that you preallocate and leave enough room in the IP address ranges. Preallocating allows you to add future nodes with IP addresses matching the initial installation subnet ranges. If you choose not to preallocate the IP addresses now, the future nodes still require IP addresses within the subnet ranges used during initial system deployment.
• You need between 5 and 119 IPv4 addresses, depending upon whether you are performing an initial configuration or expanding the HC 380 cluster.
  ◦ If you are performing an initial configuration, the system requires between 15 and 119 IPv4 addresses, depending on the number of nodes in your initial configuration and whether you are installing CloudSystem.
  ◦ If you are expanding an existing HC 380 cluster, you need between 5 and 70 IPv4 addresses, depending on the number of nodes you are adding and whether you are using CloudSystem.
• Because the OneView InstantOn configuration utility accepts a starting IP address and automatically increments the IP addresses by the number of nodes in the initial configuration or expansion, some of the IP addresses must be contiguous.
• For examples and worksheets to help you plan your networks, see "Appendix H: IP addresses for sample cluster."
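When you pick the subnets for your ESXi management, vMotion, and storage networks, it can help to confirm up front that none of them overlaps the appliance's internal 192.168.42.0/24 network, or the 192.168.0.0/21 range that CloudSystem reserves (noted later in this guide) if you plan to install CloudSystem. The sketch below uses Python's standard ipaddress module for that check; the example subnets are placeholders, not recommended values.

```python
from ipaddress import ip_network

# Ranges the HC 380 keeps for itself, per this guide.
RESERVED = [
    ip_network("192.168.42.0/24"),  # internal appliance network
    ip_network("192.168.0.0/21"),   # reserved by CloudSystem (CloudSystem installations only)
]

def check_planned_subnets(planned: dict) -> None:
    """Warn about planned subnets that overlap a reserved range or each other."""
    nets = {name: ip_network(cidr) for name, cidr in planned.items()}
    for name, net in nets.items():
        for reserved in RESERVED:
            if net.overlaps(reserved):
                print(f"WARNING: {name} {net} overlaps reserved range {reserved}")
    names = list(nets)
    for i, first in enumerate(names):
        for second in names[i + 1:]:
            if nets[first].overlaps(nets[second]):
                print(f"NOTE: {first} and {second} share address space "
                      f"(acceptable only for a flat network)")

# Placeholder subnets for illustration only.
check_planned_subnets({
    "ESXi management": "172.28.0.0/24",
    "vSphere vMotion": "172.28.1.0/24",
    "Storage": "172.28.2.0/24",
})
```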

ESXi management network IP addresses

The ESXi management network assigned in OneView InstantOn requires contiguous IP addresses. When assigning IP addresses for the ESXi management network, the Management VM IP address is assigned first and is the starting IP address. Record the starting IP address on the preinstallation worksheet. Choose the HC 380 Management UI and HC 380 OneView VM IP addresses on the ESXi management network separately, so that they do not conflict with the IP addresses assigned by OneView InstantOn. When planning the ESXi management network, Hewlett Packard Enterprise recommends that you use the convention shown in the table.

Network Addresses | Purpose | Example | Count | Notes
W.X.Y.n | HC 380 Management UI VM | 172.28.0.1 | 1 | This value is provided during the HC 380 Management UI installation.
W.X.Y.n+1 | HC 380 OneView VM | 172.28.0.2 | 1 | This value is provided during the initial configuration of the HC 380 Management UI.
W.X.Y.n+2 | HC 380 Management VM | 172.28.0.3 | 1 | This value is the starting IP address on the IP Assignments screen of OneView InstantOn.
W.X.Y.n+3 - W.X.Y.n+18 | ESXi nodes (16 nodes) | 172.28.0.4 - 172.28.0.19 | 1-16 | These address values are automatically and sequentially assigned to the nodes by OneView InstantOn, starting immediately after the HC 380 Management VM IP address.
W.X.Y.n+19 - W.X.Y.n+43 | CloudSystem Management and Compute VMs (optional) | 172.28.0.20 - 172.28.0.44 | 25 | For CloudSystem only.
W.X.Y.n+44 - W.X.Y.n+46 | CloudSystem Console VIP (optional) | 172.28.0.45 - 172.28.0.47 | 3 | For CloudSystem only.

The examples above show all IP addresses as contiguous. However, the only addresses that the system requires to be contiguous are the addresses used by the HC 380 Management VM and the ESXi nodes on the ESXi management network. The other addresses are not required to be contiguous, but Hewlett Packard Enterprise recommends that they be in the same address range to enable expansion at a later date. Use one of the following calculations to determine the number of IPv4 addresses you need for your ESXi management network, where N is the number of nodes in your cluster:
• N + 3 (for General virtualization and VDI)
• N + 31 (for CloudSystem)
The following guidelines and examples are for a 16-node and a two-node cluster.

16-node cluster

For a 16-node cluster, you need between 19 and 47 IP addresses.

Purpose | Example | Count
HC 380 Management UI VM | 172.28.0.1 | 1
HC 380 OneView VM | 172.28.0.2 | 1
HC 380 Management VM | 172.28.0.3¹ | 1
ESXi nodes (16 nodes) | 172.28.0.4 - 172.28.0.19¹ | 16
Total: 19
CloudSystem Management and Compute VMs (optional) | 172.28.0.20 - 172.28.0.44 | 25
CloudSystem Console VIP (optional) | 172.28.0.45 - 172.28.0.47 | 3
Total: 47
¹ Must be contiguous

Two-node cluster

For a two-node cluster, Hewlett Packard Enterprise recommends that you reserve IP addresses to allow for future expansion. Therefore, you need between 19 and 47 IP addresses.

Purpose | Example | Count
HC 380 Management UI VM | 172.28.0.1 | 1
HC 380 OneView VM | 172.28.0.2 | 1
HC 380 Management VM | 172.28.0.3¹ | 1
ESXi nodes (two nodes) | 172.28.0.4 - 172.28.0.5¹ | 2
Reserved for expansion | 172.28.0.6 - 172.28.0.19¹ | 14
Total: 19
CloudSystem Management and Compute VMs (optional) | 172.28.0.20 - 172.28.0.44 | 25
CloudSystem Console VIP (optional) | 172.28.0.45 - 172.28.0.47 | 3
Total: 47
¹ Must be contiguous

vSphere vMotion network IP addresses

The vSphere vMotion network requires contiguous IPv4 addresses, one for each HC 380 node in the cluster. In OneView InstantOn, you provide the first IP address as the starting address, and the program automatically assigns all other IP addresses in sequence. Record the starting IP address on the preinstallation worksheet. If you are installing CloudSystem, no additional IP addresses are needed for the vMotion network. Use the following examples for the required number of IP addresses:
• For a 16-node cluster, you will need 16 contiguous IPv4 addresses.
• For a two-node cluster, you will need 2 contiguous IPv4 addresses.
• For a three-node cluster, you will need 3 contiguous IPv4 addresses.

Storage network IP addresses

The storage network requires three contiguous IPv4 addresses for each node, plus two additional addresses (one for the management VM on the storage network and one for the storage cluster). These IP addresses are used by the HC 380 Management VM, iSCSI initiators, and HPE StoreVirtual.

Use the following examples for the required number of IP addresses:
• For a sixteen-node cluster, you will need 50 contiguous IPv4 addresses.
• For a two-node cluster, you will need 8 contiguous IPv4 addresses.
• For a three-node cluster, you will need 11 contiguous IPv4 addresses.
In OneView InstantOn, you provide the first IP address as the starting address, and the program automatically assigns all other IP addresses in sequence. Record the starting IP address on the preinstallation worksheet.
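The counts in these sections follow a simple pattern: the contiguous ESXi management block is the Management VM plus one address per node, vMotion needs one address per node, and storage needs three addresses per node plus two more. If you want to sanity-check a plan before filling in the worksheets, the short sketch below applies those rules with Python's ipaddress module; the starting addresses shown are placeholders only.

```python
from ipaddress import ip_address

def plan_ranges(nodes: int, mgmt_start: str, vmotion_start: str, storage_start: str) -> None:
    """Print the contiguous address blocks this guide calls for:
    ESXi management: Management VM plus one address per node
    vSphere vMotion: one address per node
    Storage: three addresses per node plus two extra"""
    blocks = {
        "ESXi management (Management VM + nodes)": (mgmt_start, nodes + 1),
        "vSphere vMotion": (vmotion_start, nodes),
        "Storage": (storage_start, 3 * nodes + 2),
    }
    for name, (start, count) in blocks.items():
        first = ip_address(start)
        last = first + (count - 1)
        print(f"{name}: {count} contiguous addresses, {first} through {last}")

# Example for a two-node cluster with placeholder starting addresses.
plan_ranges(2, "172.28.0.3", "172.28.1.1", "172.28.2.1")
```

For a two-node cluster this prints blocks of 3, 2, and 8 addresses, which matches the two-node examples above.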

Planning for expansion

If you are planning to expand in the future, Hewlett Packard Enterprise recommends that you preallocate the IP addresses that are needed for the future nodes. For each new expansion node, you need five IPv4 addresses for OneView InstantOn. HC 380 Management UI also requires the iLO IPv4 address for the node. The five IPv4 addresses are used as follows:
• One address is used for the ESXi host on the ESXi management network.
• One address is used for the vSphere vMotion component on the ESXi host.
• Three addresses are used by the storage network, including two for the iSCSI initiators.
Adding an HC 380 expansion node does not require any changes to the IP addresses of the following items:
• HC 380 Management UI VM
• HC 380 OneView VM
• HC 380 Management VM
• StoreVirtual cluster
If you are planning to expand to a 16-node system in the future, Hewlett Packard Enterprise recommends, but does not require, that you preallocate the IP addresses that will be needed for the remaining nodes now. Preallocating IP addresses for all 16 nodes allows you to assign contiguous blocks of IP addresses to the various networks. For each network type (ESXi, vSphere vMotion, Storage), you choose the first free IP address in that group's range. After that IP address is chosen, OneView InstantOn automatically increments and assigns the remaining IP addresses for that group.

VMware vCenter

Each HC 380 node includes a built-in Management VM on which VMware vCenter Server and HPE OneView for vCenter are preinstalled. In OneView InstantOn, this is considered a local vCenter setup. You can also deploy the HC 380 using an existing VMware vCenter Server (or vCenter Server Appliance) instance where OneView for VMware vCenter is already integrated. In OneView InstantOn, this is considered a remote vCenter setup. A remote setup allows you to centrally manage multiple remote sites and deployments while reducing vCenter licensing costs. During configuration, OneView InstantOn checks that the local or remote vCenter Server has OneView for vCenter configured, and will not continue until this check is complete. For additional information, see "Appendix F: Remote vCenter setup." Because of a VMware restriction that prevents the renaming of the vCenter server, Hewlett Packard Enterprise recommends that you use a remote vCenter setup if you require a custom fully qualified domain name. For more information about the VMware restriction, see https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2130599.

NOTE:
• A remote vCenter setup is not supported with CloudSystem.
• You can deploy only one HC 380 cluster per vCenter datacenter. If you want to deploy multiple HC 380 clusters, they must be in separate vCenter datacenters.

Remote vCenter server requirements

Before deploying the HC 380 with the remote vCenter option, ensure that the following requirements are met on the remote server on which VMware vCenter Server and OneView for VMware vCenter are installed:
• Verify that there is network connectivity on the ESXi management network and on the storage network between the remote vCenter system and the FlexLOM ports on the HC 380 nodes. You need the IP address for the remote vCenter server and the default port used for SSO.
• Verify that the remote vCenter is not running on a host intended to be an HC 380 appliance instance.
• Disable the firewalls for the HC 380 appliance, or enable the following ports for inbound access:
  ◦ HPE HTTPS Port 3501 TCP
  ◦ HPE UIM Port 3504 TCP (must be accessible from the Management VM of the system running OneView InstantOn)

NOTE:
– For more information about these ports, see "Default port values" in the HP OneView for VMware vCenter Installation Guide found on the Hewlett Packard Enterprise website.
– These firewall or port settings are only required during deployment of a new system installation or a system expansion. Once that deployment is complete, you can either re-enable the firewalls or disable port access.
• Install VMware vCenter Server. Hewlett Packard Enterprise recommends that you always install the latest updates of the software.
• Install OneView for VMware vCenter and ensure that it has access to the networks that will be used for HC 380 ESXi management and for HC 380 storage.
• Ensure that you have access to the OneView for VMware vCenter administrator credentials on the remote server. These credentials are used by OneView InstantOn during deployment.
• Verify that the remote server can support the additional system hosts, datacenter, and virtual machines that will be installed or added (expanded).

NOTE: To determine the supported limits for each version, see the VMware vCenter Server documentation.
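Before a deployment or expansion, you may also want to confirm that the two ports listed above are actually reachable across your firewall from the system that needs to reach them. The sketch below is a minimal check using Python's standard socket module; the target address is a placeholder, not a value from this guide.

```python
import socket

# Ports called out in the remote vCenter requirements above.
REQUIRED_PORTS = {3501: "HPE HTTPS", 3504: "HPE UIM"}

def check_ports(host: str, timeout: float = 3.0) -> None:
    """Attempt a TCP connection to each required port and report the result."""
    for port, name in REQUIRED_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{name} (TCP {port}) on {host}: reachable")
        except OSError as exc:
            print(f"{name} (TCP {port}) on {host}: NOT reachable ({exc})")

# Placeholder address; substitute the system you are checking.
check_ports("192.0.2.10")
```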

HPE iLO 4

Although iLO 4 is not required for daily use, Hewlett Packard Enterprise recommends that you configure the iLO of each HC 380 server as part of the initial setup. iLO is required when performing a node recovery, when performing a firmware update, and for CloudSystem. The default iLO credentials can be found on the toe tag on each HC 380 node. For more information about configuring iLO, see the HPE iLO 4 User Guide, which is available on the Hewlett Packard Enterprise website.

Preinstallation worksheets

This section contains worksheets for the data needed during the appliance installation and configuration. If you are expanding an existing configuration, you will need to obtain data from the existing appliance and add any data for an expansion node.
• Appliance networks
• vCenter requirements
• Settings
• iLO addresses
• Default user names and passwords
• CloudSystem

Appliance networks

For each network type (ESXi, vSphere vMotion, Storage), you choose the first IP address in the range for that group. Once that IP address is chosen, OneView InstantOn automatically increments and assigns the remaining IP addresses for that group. Each network type assigned in OneView InstantOn requires contiguous IP addresses. Use the worksheets for an initial installation and for an expansion.

ESXi network components

To help you plan the ESXi management network, see "ESXi management network IP addresses."

ESXi network components | Value
Starting IP address (used by the HC 380 Management VM during initial installation or by the first ESXi node being added during an expansion) |
Subnet Mask |
Gateway |

vSphere vMotion IPs

To help you plan the vSphere vMotion network, see "vSphere vMotion network IP addresses."

vSphere vMotion IPs | Value
Starting IP address |
Subnet Mask |
VLAN ID (optional, for CloudSystem) |

Storage network IPs

To help you plan the storage network, see "Storage network IP addresses."

Storage network IPs | Value
Starting IP address |
Subnet Mask |
Gateway |
VLAN ID (optional, for CloudSystem) |

HC 380 Management UI VM

For the HC 380 Management UI initial configuration, you assign two additional IPv4 addresses for the HC 380 Management UI and HPE OneView components, listed in the worksheet. IP addresses assigned to the HC 380 Management UI VM and HPE OneView VM should be from the same ESXi management network as above. The IP addresses used during the initial HC 380 Management UI configuration should be outside the range of IP addresses assigned by the OneView InstantOn deployment. To help you determine the IP addresses to use in this worksheet, see "ESXi management network IP addresses."

Item | Value
HC 380 Management UI VM administrator password |
HC 380 Management UI VM IP address |
Subnet Mask |
Gateway |
HPE OneView VM IP address |
Subnet Mask |
Gateway |

vCenter requirements

Use either of the following topics in this section:
• Local vCenter - if you are installing the product using the vCenter software that shipped with the system
• Remote vCenter - if you are installing the product using the vCenter software currently running on another server

Local vCenter

Use the worksheet below to prepare to install the HC 380 using the vCenter software that shipped with the system.

Item | Value
VMware vCenter Server 6 Standard license |
Datacenter and cluster on OneView for VMware vCenter that will be used for the system hosts and VSA storage. You can use an existing cluster in the datacenter or create a cluster. |

NOTE: Ensure that you follow VMware naming conventions when creating datacenter and cluster names. If special characters are used in the names, OneView InstantOn will hang.
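If you want to screen candidate datacenter and cluster names before typing them into OneView InstantOn, a conservative pre-check such as the sketch below can help you avoid the hang described in the note. The character set it accepts (letters, digits, spaces, hyphens, and underscores) is a deliberately cautious assumption, not an official VMware rule; consult the VMware documentation for the authoritative naming conventions.

```python
import re

# Deliberately conservative: letters, digits, spaces, hyphens, and underscores only.
# This is an assumption chosen to stay well clear of special characters.
SAFE_NAME = re.compile(r"^[A-Za-z0-9 _-]{1,80}$")

def is_safe_vcenter_name(name: str) -> bool:
    """Return True if a datacenter or cluster name uses only conservative characters."""
    return bool(SAFE_NAME.match(name))

for candidate in ("HC380-Cluster-01", "Prod/Cluster#1"):
    verdict = "looks safe" if is_safe_vcenter_name(candidate) else "contains special characters"
    print(f"{candidate}: {verdict}")
```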

Remote vCenter

Use this section only:
• If you plan to use a remote vCenter in your initial HC 380 installation
• If you are expanding and you selected the remote vCenter option during the initial installation

Initial installation

If you choose the remote vCenter option on the OneView InstantOn vCenter screen, you are required to enter the following items into the Management VM ESX Connectivity area on the vCenter screen:
• HC 380 Management VM IP address
• Subnet mask
• Gateway
• DNS addresses
These values are used to configure the Management VM and enable OneView InstantOn to access the remote server. These values also allow OneView InstantOn to verify that the remote instance of OneView for VMware vCenter is installed at the correct minimum version. The HC 380 Management VM IP address, subnet mask, and gateway are automatically populated on the OneView InstantOn IP assignment screen as the starting address for the ESXi network. The DNS value is used on the OneView InstantOn Settings screen. For more information on a remote vCenter setup, see "Appendix F: Remote vCenter setup."

Item | Value
IP address of remote vCenter system |
Port on remote vCenter system |
User name for vCenter on remote system |
Password for vCenter on remote system |
IP address for HC 380 Management VM on the ESXi management network (same as starting address of the ESXi components on IP assignment screen) |
Subnet Mask for the HC 380 ESXi management network (same as subnet mask value of the ESXi components on IP assignment screen) |
Gateway on the HC 380 ESXi management network (same as gateway value of the ESXi components on IP assignment screen) |
DNS Server on the HC 380 ESXi management network (same as DNS value from Settings screen) |
Datacenter and cluster on OneView for VMware vCenter that will be used for the system hosts and VSA storage. You can use an existing cluster in the datacenter or create a cluster. |

NOTE: Ensure that you follow VMware naming conventions when creating datacenter and cluster names. If special characters are used in the names, OneView InstantOn will hang.

Expansion

During an expansion, you are required to provide the password for the remote vCenter user name.

Remote vCenter server item | Value
IP address | Automatically populated from initial installation
Port | Automatically populated from initial installation
User name | Automatically populated from initial installation
Password |

Settings

Use the following checklist to set up your storage network for a new deployment.

Item | Value
Management group name |
  NOTE: The default name is HP-HyperConv-, but you can change it. The guidelines for StoreVirtual management group names are:
  • Up to 127 characters
  • Must begin with a letter
  • Allowed characters: 0-9, a-z, A-Z, hyphen (-), underline (_), and any of the following special characters \ ! @ # % ^ & * ( ) + \ | \ ] } \ [ { ? . > <
StoreVirtual user name | Must contain 3-30 characters, begin with a letter, and can include numbers or the _* characters
StoreVirtual password | Must contain 5-40 characters and not include the \ / , . ; ' " : characters
vSphere license (optional) | Only needed if connecting to a local vCenter
Storage network DNS |
Storage network NTP (optional) |
Storage network mail server |
Storage network mail server port |
Sender email |
Recipient email |
NFS file share path (for two-node system)¹ |
¹ For more information, see "Appendix G: Management group quorum consideration."
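If you would like to pre-check the StoreVirtual user name and password you plan to record in this worksheet, the sketch below encodes the two rules quoted above. It intentionally skips the management group name, whose allowed special-character list is harder to capture exactly, and it is only a convenience check, not an official validator.

```python
import re

def check_storevirtual_credentials(username: str, password: str) -> list:
    """Check a planned StoreVirtual user name and password against the rules
    listed in the settings worksheet. Returns a list of problems (empty if OK)."""
    problems = []
    # User name: 3-30 characters, begins with a letter, then letters, digits, or _* characters.
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9_*]{2,29}", username):
        problems.append("user name must contain 3-30 characters, begin with a letter, "
                        "and include only letters, numbers, or the _* characters")
    # Password: 5-40 characters, must not include the \ / , . ; ' " : characters.
    if not (5 <= len(password) <= 40) or any(ch in "\\/,.;'\":" for ch in password):
        problems.append("password must contain 5-40 characters and must not include "
                        "the \\ / , . ; ' \" : characters")
    return problems

print(check_storevirtual_credentials("hc380admin", "Hyper;Conv"))  # flags the ';' in the password
```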

Use the following checklist for an expansion.

Item | Value
StoreVirtual user name | Use same value provided during initial deployment.
StoreVirtual password | Use same value provided during initial deployment.

iLO addresses

iLO IPv4 address requirements

Each HC 380 server has an iLO, which is manually configured with an IPv4 address accessible from the ESXi management network. HC 380 Management UI imports the iLO for each of the servers by IP address during its configuration process, so that it can manage the iLOs. HC 380 Management UI does not require that you enter the server serial number, but it does associate each IP address to a serial number. All of the iLO IP addresses must be accessible from the ESXi management network. For more information about iLO, see "HPE iLO 4."

iLO address worksheet

Use the worksheet to record the node serial number and the iLO IP address associated with that serial number. During an expansion, you can use this worksheet and then manually enter the IPv4 addresses into HC 380 Management UI (Settings area) for each new node.

Node | Serial number | iLO IP address
1 | |
2 | |
3 | |
4 | |
5 | |
6 | |
7 | |
8 | |
9 | |
10 | |
11 | |
12 | |
13 | |
14 | |
15 | |
16 | |
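Once the worksheet above is filled in, you may want to confirm that each recorded iLO address answers on the ESXi management network before importing it into HC 380 Management UI. The sketch below validates the entries and tries a TCP connection to each address on port 443, the usual port for the iLO web interface; treat the port and the sample addresses as assumptions for your environment.

```python
import socket
from ipaddress import ip_address

def check_ilo_addresses(ilo_ips, port: int = 443, timeout: float = 3.0) -> None:
    """Validate the worksheet entries and report whether each iLO answers on TCP."""
    if len(set(ilo_ips)) != len(ilo_ips):
        print("WARNING: the worksheet contains duplicate iLO IP addresses")
    for ip in ilo_ips:
        ip_address(ip)  # raises ValueError if an entry is not a valid IP address
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                print(f"iLO {ip}: reachable on TCP {port}")
        except OSError as exc:
            print(f"iLO {ip}: NOT reachable on TCP {port} ({exc})")

# Placeholder addresses; replace with the iLO IP addresses from your worksheet.
check_ilo_addresses(["192.0.2.21", "192.0.2.22"])
```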

Default user names and passwords

The following default user names and passwords are shipped with the appliance or are needed to complete the setup.

Item | Username | Password
vCenter Server (local) | [email protected] | HyperConv!234
HC 380 Management VM | administrator | HyperConv!234
ESXi shell | root | HyperConv!234
iLO | administrator | (see the toe tag on the node)
CloudSystem Operations Console | admin | unset
OpenStack Horizon Console | admin | unset
CloudSystem Management Appliances | cloudadmin | cloudadmin
CSA MarketPlace Portal | consumer | cloud
Operations Orchestration | administrator | unset

For more information about iLO, see "HPE iLO 4."

CloudSystem

Use the following worksheet to assist in the preplanning for the CloudSystem workload configuration. If you are installing the general virtualization or VDI workload configurations, you can skip this worksheet. Although the Block and Object networks are not used, the installation process requires a VLAN for each network. Assign two unused VLAN IDs for these networks that do not interfere with any of your other networks. For more information, see "Appendix E: CloudSystem Network Diagram."

Data Center Management Network
  DCM IP Range (25 contiguous IPs):
  DCM VLAN ID:
  DCM CIDR:
  DCM Gateway:
  DCM Management Appliance VIP:
  DCM Management Appliance FQDN:
  DCM Cloud Controller VIP:
  DCM Cloud Controller FQDN:
  DCM Enterprise Appliance VIP:
  DCM Enterprise Appliance FQDN:
Cloud Management Network
  CLM VLAN ID:
Consumer Access Network
  CAN IP Range (+3 contiguous IPs):
  CAN VLAN ID:
  CAN CIDR:
  CAN Gateway:
  CAN Cloud Controller VIP:
  CAN Cloud Controller FQDN:
  CAN Enterprise Appliance VIP:
  CAN Enterprise Appliance FQDN:
External Network
  External VLAN ID:
Provider Networks
  Provider 1 VLAN ID:
  Provider 1 CIDR:
  Provider 2 VLAN ID:
  Provider 2 CIDR:
  Provider 3 VLAN ID:
  Provider 3 CIDR:
  Provider 4 VLAN ID:
  Provider 4 CIDR:
Tenant Networks
  Tenant 1 VLAN ID:
  Tenant 1 CIDR:
  Tenant 2 VLAN ID:
  Tenant 2 CIDR:
  Tenant 3 VLAN ID:
  Tenant 3 CIDR:
  Tenant 4 VLAN ID:
  Tenant 4 CIDR:
Block Storage Networks
  Block Storage VLAN ID:
Object Proxy Network
  Object Proxy VLAN ID:
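Because every CloudSystem network in this worksheet needs its own VLAN, and the Block Storage and Object Proxy networks need two additional unused VLAN IDs, it is worth checking the planned IDs for duplicates before you configure the switches. The sketch below is one informal way to do that; the sample VLAN numbers are placeholders only.

```python
from collections import Counter

def check_vlan_plan(planned: dict) -> None:
    """Report any VLAN ID assigned to more than one CloudSystem network."""
    duplicates = {vlan: n for vlan, n in Counter(planned.values()).items() if n > 1}
    if not duplicates:
        print("All planned VLAN IDs are unique.")
        return
    for vlan, n in duplicates.items():
        networks = [name for name, v in planned.items() if v == vlan]
        print(f"VLAN {vlan} is assigned to {n} networks: {', '.join(networks)}")

# Placeholder VLAN IDs for illustration only.
check_vlan_plan({
    "DCM": 201, "CLM": 202, "CAN": 203, "External": 204,
    "Tenant 1": 301, "Provider 1": 401,
    "Block Storage": 501, "Object Proxy": 502,
})
```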

Appliance components

The following diagrams are examples to help you understand important component locations. Because the HC 380 is available with an array of options, the storage, networking, and power components vary for your specific configuration.

NOTE: The tamper-proof coating used on the Microsoft Certificate of Authenticity (COA) label was removed from your server for the software license application at the Hewlett Packard Enterprise factory.

Front panel

• Front panel with HDDs or SSDs in all three storage bays

Item | Description
1 | Bay 3, with 8 HDDs or SSDs (optional)
2 | Bay 2, with 8 HDDs or SSDs (optional)
3 | Bay 1, with 8 HDDs or SSDs

• Front panel hybrid configuration with 6 HDDs and 2 SSDs in each storage bay

Item | Description
1 | Bay 3, with 6 HDDs and 2 SSDs (optional)
2 | Bay 2, with 6 HDDs and 2 SSDs (optional)
3 | Bay 1, with 6 HDDs and 2 SSDs

Rear panel components

General virtualization and VDI rear panel components

• 10 GbE appliance

Item | Description
1 | 10 GbE NIC Port 2
2 | 10 GbE NIC Port 1 (FlexLOM)
3 | NVIDIA graphics card (VDI only, optional)
4 | iLO connector
5 | 1 GbE RJ-45 port 1 (Do not use during initial configuration.)
6 | 1 GbE RJ-45 port 2 (For connection to a laptop or workstation for setup, on management VM)
7 | 1 GbE RJ-45 port 3 (Not used during initial configuration; available for customer network)
8 | 1 GbE RJ-45 port 4 (Not used during initial configuration; available for customer network)
9 | NVIDIA graphics card (VDI only, optional)
10 | Power supply 1 (PS1)
11 | Power supply 2 (PS2)
12 | Rear panel HDDs (VDI only, optional)

• All 1 GbE appliance (General virtualization only)

Item | Description
1 | 1 GbE RJ-45 port 8
2 | 1 GbE RJ-45 port 7
3 | 1 GbE RJ-45 port 6
4 | 1 GbE RJ-45 port 5
5 | iLO connector
6 | 1 GbE RJ-45 port 1 (Do not use during initial configuration.)
7 | 1 GbE RJ-45 port 2 (For connection to a laptop or workstation for setup, on management VM)
8 | 1 GbE RJ-45 port 3 (Not used during initial configuration; available for customer network)
9 | 1 GbE RJ-45 port 4 (Not used during initial configuration; available for customer network)
10 | Power supply 1 (PS1)
11 | Power supply 2 (PS2)

CloudSystem rear panel components

Item | Description
1 | 10 GbE NIC Port 2
2 | 10 GbE NIC Port 1
3 | 10 GbE NIC (Ports 4 and 3)
4 | 10 GbE NIC (Ports 6 and 5)
5 | iLO connector
6 | 1 GbE RJ-45 port 1 (Do not use during initial configuration.)
7 | 1 GbE RJ-45 port 2 (For connection to a laptop or workstation for setup, on management VM)
8 | 1 GbE RJ-45 port 3 (Not used during initial configuration; available for customer network)
9 | 1 GbE RJ-45 port 4 (Not used during initial configuration; available for customer network)
10 | Power supply 1 (PS1)
11 | Power supply 2 (PS2)

Installing the HC 380 appliance nodes

This section provides instructions for installing the appliance nodes into a rack and cabling the network.

Optimum environment

When installing the server in a rack, select a location that meets the environmental standards described in this section.

Space and airflow requirements

To allow for servicing and adequate airflow, observe the following space and airflow requirements when deciding where to install a rack:
• Leave a minimum clearance of 85.09 cm (33.5 in) in front of the rack.
• Leave a minimum clearance of 76.2 cm (30 in) behind the rack.
• Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack or row of racks.
Hewlett Packard Enterprise nodes draw in cool air through the front door and expel warm air through the rear door. Therefore, the front and rear rack doors must be adequately ventilated to allow ambient room air to enter the cabinet, and the rear door must be adequately ventilated to allow the warm air to escape from the cabinet.

CAUTION: To prevent improper cooling and damage to the equipment, do not block the ventilation openings.

When vertical space in the rack is not filled by a server or rack component, the gaps between the components cause changes in airflow through the rack and across the servers. Cover all gaps with blanking panels to maintain proper airflow.

CAUTION: Always use blanking panels to fill empty vertical spaces in the rack. This arrangement ensures proper airflow. Using a rack without blanking panels results in improper cooling that can lead to thermal damage.

The 9000 and 10000 Series Racks provide proper server cooling from flow-through perforations in the front and rear doors that provide 64 percent open area for ventilation.

CAUTION: When using a branded 7000 series rack, install the high airflow rack door insert (PN 327281-B21 for 42U rack, PN 157847-B21 for 22U rack) to provide proper front-to-back airflow and cooling.

CAUTION: If a third-party rack is used, observe the following additional requirements to ensure adequate airflow and to prevent damage to the equipment:
• Front and rear doors—If the 42U rack includes closing front and rear doors, you must allow 5,350 sq cm (830 sq in) of holes evenly distributed from top to bottom to permit adequate airflow (equivalent to the required 64 percent open area for ventilation).
• Side—The clearance between the installed rack component and the side panels of the rack must be a minimum of 7 cm (2.75 in).

Temperature requirements

To ensure continued safe and reliable equipment operation, install or position the system in a well-ventilated, climate-controlled environment. The maximum recommended ambient operating temperature (TMRA) for most server products is 35°C (95°F). The temperature in the room where the rack is located must not exceed 35°C (95°F).

CAUTION: To reduce the risk of damage to the equipment when installing third-party options:
• Do not permit optional equipment to impede airflow around the server or to increase the internal rack temperature beyond the maximum allowable limits.
• Do not exceed the manufacturer's TMRA.

Power requirements

Installation of this equipment must comply with local and regional electrical regulations governing the installation of information technology equipment by licensed electricians. This equipment is designed to operate in installations covered by NFPA 70, 1999 Edition (National Electric Code) and NFPA-75, 1992 (code for Protection of Electronic Computer/Data Processing Equipment). For electrical power ratings on options, refer to the product rating label or the user documentation supplied with that option.

WARNING: To reduce the risk of personal injury, fire, or damage to the equipment, do not overload the AC supply branch circuit that provides power to the rack. Consult the electrical authority having jurisdiction over wiring and installation requirements of your facility.

CAUTION: Protect the server from power fluctuations and temporary interruptions with a regulating uninterruptible power supply. This device protects the hardware from damage caused by power surges and voltage spikes and keeps the system in operation during a power failure.

Connecting a DC power cable to a DC power source

NOTE: To reduce the risk of electric shock or energy hazards:
• This equipment must be installed by trained service personnel, as defined by the NEC and IEC 60950-1, Second Edition, the standard for Safety of Information Technology Equipment.
• Connect the equipment to a reliably grounded Secondary circuit source. A Secondary circuit has no direct connection to a Primary circuit and derives its power from a transformer, converter, or equivalent isolation device.
• The branch circuit overcurrent protection must be rated 27 A.

WARNING: When installing a DC power supply, the ground wire must be connected before the positive or negative leads.

WARNING: Remove power from the power supply before performing any installation steps or maintenance on the power supply.

CAUTION: The server equipment connects the earthed conductor of the DC supply circuit to the earthing conductor at the equipment. For more information, see the documentation that ships with the power supply.

CAUTION: If the DC connection exists between the earthed conductor of the DC supply circuit and the earthing conductor at the server equipment, the following conditions must be met:

• This equipment must be connected directly to the DC supply system earthing electrode conductor or to a bonding jumper from an earthing terminal bar or bus to which the DC supply system earthing electrode conductor is connected.
• This equipment should be located in the same immediate area (such as adjacent cabinets) as any other equipment that has a connection between the earthed conductor of the same DC supply circuit and the earthing conductor, and also the point of earthing of the DC system. The DC system should be earthed elsewhere.
• The DC supply source is to be located within the same premises as the equipment.
• Switching or disconnecting devices should not be in the earthed circuit conductor between the DC source and the point of connection of the earthing electrode conductor.

To connect a DC power cable to a DC power source:
1. Cut the DC power cord ends no shorter than 150 cm (59.06 in).
2. If the power source requires ring tongues, use a crimping tool to install the ring tongues on the power cord wires.

IMPORTANT: The ring terminals must be UL approved and accommodate 12 gauge wires.

IMPORTANT: The minimum nominal thread diameter of a pillar or stud type terminal must be 3.5 mm (0.138 in); the diameter of a screw type terminal must be 4.0 mm (0.157 in).
3. Stack each same-colored pair of wires and then attach them to the same power source. The power cord consists of three wires (black, red, and green). For more information, see the documentation that ships with the power supply.

Rack warnings

WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:

• The leveling jacks are extended to the floor.
• The full weight of the rack rests on the leveling jacks.
• The stabilizing feet are attached to the rack if it is a single-rack installation.

• The racks are coupled together in multiple-rack installations.
• Only one component is extended at a time. A rack may become unstable if more than one component is extended for any reason.

WARNING: To reduce the risk of personal injury or equipment damage when unloading a rack:

• At least two people are needed to safely unload the rack from the pallet. An empty 42U rack can weigh as much as 115 kg (253 lb), can stand more than 2.1 m (7 ft) tall, and might become unstable when being moved on its casters.
• Never stand in front of the rack when it is rolling down the ramp from the pallet. Always handle the rack from both sides.

Installing the node into the rack

CAUTION: Always plan the rack installation so that the heaviest item is on the bottom of the rack. Install the heaviest item first, and continue to populate the rack from the bottom to the top.

Procedure
1. Install the server and cable management arm into the rack. For more information, see the installation instructions that ship with the 2U Quick Deploy Rail System.
2. Connect peripheral devices to the server. For information on identifying connectors, see one of the following:
   • "General virtualization and VDI rear panel components"
   • "CloudSystem rear panel components"

WARNING: To reduce the risk of electric shock, fire, or damage to the equipment, do not plug telephone or telecommunications connectors into RJ-45 connectors.

3. Connect the power cord to the rear of the server.
4. Install the power cord anchors.

5. Secure the cables to the cable management arm.

IMPORTANT: When using cable management arm components, be sure to leave enough slack in each of the cables to prevent damage to the cables when the server is extended from the rack.

6. Connect the power cord to the AC power source.

WARNING: To reduce the risk of electric shock or damage to the equipment:
• Do not disable the power cord grounding plug. The grounding plug is an important safety feature.
• Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times.
• Unplug the power cord from the power supply to disconnect power to the equipment.
• Do not route the power cord where it can be walked on or pinched by items placed against it. Pay particular attention to the plug, electrical outlet, and the point where the cord extends from the node.

Cabling the system

This section provides cabling examples to help you properly cable your appliance. Your configuration may differ from the examples, but the details should provide guidance for you to properly cable the appliance in your environment. After completing the network connections, be sure to connect the power cables to the system. Hewlett Packard Enterprise recommends two 10GbE or 1GbE (if a 1GbE solution is used) switches configured in a highly available configuration, so that a network switch failure does not prevent access to the HC 380 configuration. Examples of how to configure both HPE 5900 series switches and Cisco Nexus 5600 switches are provided in "Appendix A: Network switch configuration."

General virtualization configuration (all 1 GbE appliance)

The following cabling example shows the use of three 1 GbE switches with the all 1 GbE appliance. This example applies to the General Virtualization workload configuration only.

30 Cabling the system Item Description Ite Description m 1 1 GbE Switch A (IPv6 enabled) 13 Connect Node 2, Port 4 to Switch A (IPv6 enabled) 2 Interconnect switch links 14 Connect Node 2, Port 2 to Switch A (IPv6 enabled) 3 Interconnect switch links 15 Connect Node 2, Port 3 to Switch B (IPv6 enabled) 4 1 GbE Switch B (IPv6 enabled) 16 Connect Node 2, Port 1 to Switch B (IPv6 enabled) 5 Connect Node 1, Port 4 to Switch A (IPv6 17 1GbE RJ-45 port 2 (Not used) enabled) 6 Connect Node 1, Port 2 to Switch A (IPv6 18 1GbE RJ-45 port 3 (Not used during initial enabled) configuration; available for customer network) 7 Connect Node 1, Port 3 to Switch B (IPv6 19 1GbE RJ-45 port 4 (Not used during initial enabled) configuration; available for customer network) 8 Connect Node 1, Port 1 to Switch B (IPv6 20 Node 2 enabled) 9 1GbE RJ-45 port 2 (For connection to a 21 1 GbE Switch laptop or workstation for setup) 10 1GbE RJ-45 port 3 (Not used during initial 22 Connect Node 2, iLO port to 1 GbE Switch configuration; available for customer network) Table Continued

Installing the HC 380 appliance nodes 31 Item Description Ite Description m 11 1GbE RJ-45 port 4 (Not used during initial 23 Connect Node 1, iLO port to 1 GbE Switch configuration; available for customer network) 12 Node 1

General virtualization (10GbE appliance) and VDI configurations

The following cabling example shows the use of two 10 GbE switches and one 1 GbE switch. This example applies to both the General Virtualization and VDI workload configurations. Though the rear components might vary for each configuration, the cabling for the two configurations is similar.

Item | Description
1 | 10 GbE Switch A (IPv6 enabled)
2 | Interconnect switch links
3 | Interconnect switch links
4 | 10 GbE Switch B (IPv6 enabled)
5 | Connect Node 1, Port 2 to Switch A (IPv6 enabled)
6 | Connect Node 1, Port 1 to Switch B (IPv6 enabled)
7 | 1 GbE RJ-45 port 2 (For connection to a laptop or workstation for setup)
8 | 1 GbE RJ-45 port 3 (Not used during initial configuration; available for customer network)
9 | 1 GbE RJ-45 port 4 (Not used during initial configuration; available for customer network)
10 | Node 1
11 | Connect Node 2, Port 2 to Switch A (IPv6 enabled)
12 | Connect Node 2, Port 1 to Switch B (IPv6 enabled)
13 | 1 GbE RJ-45 port 2 (Not used)
14 | 1 GbE RJ-45 port 3 (Not used during initial configuration; available for customer network)
15 | 1 GbE RJ-45 port 4 (Not used during initial configuration; available for customer network)
16 | Node 2
17 | 1 GbE Switch
18 | Connect Node 2, iLO port to 1 GbE switch
19 | Connect Node 1, iLO port to 1 GbE switch

CloudSystem configuration

Item | Description
1 | 10 GbE Switch A (IPv6 enabled)
2 | Interconnect switch links
3 | Interconnect switch links
4 | 10 GbE Switch B (IPv6 enabled)
5 | Connect Node 1, Port 4 to Switch A (IPv6 enabled)
6 | Connect Node 1, Port 6 to Switch A (IPv6 enabled)
7 | Connect Node 1, Port 5 to Switch B (IPv6 enabled)
8 | Connect Node 1, Port 3 to Switch B (IPv6 enabled)
9 | Node 1
10 | Connect Node 1, Port 2 to Switch A (IPv6 enabled)
11 | Connect Node 1, Port 1 to Switch B (IPv6 enabled)
12 | 1 GbE RJ-45 Port 2 (For connection to a laptop or workstation for setup)
13 | 1 GbE RJ-45 Port 3 (Not used during initial configuration; available for customer network)
14 | 1 GbE RJ-45 Port 4 (Not used during initial configuration; available for customer network)
15 | Connect Node 2, Port 4 to Switch A (IPv6 enabled)
16 | Connect Node 2, Port 6 to Switch A (IPv6 enabled)
17 | Connect Node 2, Port 5 to Switch B (IPv6 enabled)
18 | Connect Node 2, Port 3 to Switch B (IPv6 enabled)
19 | Node 2
20 | Connect Node 2, Port 2 to Switch A (IPv6 enabled)
21 | Connect Node 2, Port 1 to Switch B (IPv6 enabled)
22 | 1 GbE RJ-45 Port 2 (Not used during initial configuration; available for customer network)
23 | 1 GbE RJ-45 Port 3 (Not used during initial configuration; available for customer network)
24 | 1 GbE RJ-45 Port 4 (Not used during initial configuration; available for customer network)
25 | 1 GbE switch
26 | Connect Node 2, iLO port to 1 GbE switch
27 | Connect Node 1, iLO port to 1 GbE switch

Configuring the system

Configuring the HC 380 system

This section describes how to complete the appliance configuration steps. Before you begin, ensure that the switches and appliance nodes have been racked and cabled, and that you have completed the preinstallation worksheets.

Procedure 1. Configure the network switches. 2. Power on all nodes. 3. Configure a laptop or workstation to access the system. 4. Test network connectivity. 5. License and install NVIDIA Tesla M60 GPUs. (optional) 6. Install HPE HC StoreVirtual Status Provider. 7. Configure the HC 380 using OneView InstantOn. 8. Install HC 380 Management UI. Configuring the network switches The system uses the FlexLOM (1 or 10GbE interfaces) as well as additional 10GbE Interfaces (for CloudSystem) for all network communication beyond the original deployment. The switch configuration in your environment may vary. For a sample network switch configuration and instructions to create the sample configuration, see "Appendix A: Network switch configuration." General virtualization and VDI workload configurations HC 380 General virtualization or VDI solutions are supported on a flat/untagged network or a network with tagged VLANs to segment iSCSI and vMotion traffic. The ESXi management network must always be an untagged (pvid) network. You may put the ESXi, vMotion, and storage networks on the same subnet, but Hewlett Packard Enterprise recommends separate subnets or VLANs to isolate and manage the network traffic. When using the HC 380 General Virtualization or VDI solutions, the following networks and configurations are used over the FlexLOM (1 or 10GbE) interface for each node: • ESXi Management Network (untagged, pvid network). • VMware vMotion Network (untagged or tagged network). • VSA iSCSI Storage network (untagged or tagged network). • IPv6 must be enabled for the VLANs used by the HC380. CloudSystem configuration CloudSystem requires separate VLANs for each network. The ESXi management network must be untagged for initial deployment and then migrated to a tagged network during the installation process. For more information about migrating, see "Installing CloudSystem." When using the CloudSystem solutions, the following networks and configurations are used by the solution over the FlexLOM, PCI Slot 2, and PCI Slot 3 (10GbE) interfaces for each node: • HC 380 ESXi Management Network/Data Center Management Network (untagged pvid network). • VSA iSCSI Storage network/iSCSI Storage Network (tagged network). • VMware vMotion Network (tagged network).

• Cloud Management Network (tagged network).
• Consumer Access Network (tagged network).
• Object Proxy Network (tagged network).
• IPv6 must be enabled for the VLANs used by the HC 380.
All networks require separate subnets. The subnet 192.168.0.0/21 is reserved by CloudSystem and cannot be changed.

Powering on all nodes
The system firmware initiates an automatic power-on sequence when the power cables are connected and the nodes are installed. The default power setting is set to always on. Do not change the default power setting unless instructed by Hewlett Packard Enterprise. Before powering on the nodes, ensure that the cables are configured as recommended. For more information, see "Cabling the system."
If the system does not automatically power on, you can use the following alternate methods:
• Use a virtual power button selection through iLO.
• Press and release the Power On/Standby button.
When the node goes from the standby mode to the full power mode, the node power LED changes from amber to green. For more information about iLO, see the Hewlett Packard Enterprise website.

Configuring a laptop or workstation to access the system
Use these steps to connect an installation computer to an HC 380 node and establish IPv4 network connectivity. Windows remote desktop software is used to establish a remote desktop session on the HC 380 node.

NOTE: Instructions are provided for a Windows system. If you are using a non-Windows system, see the appropriate documentation.

To access the system, use a laptop or workstation with a 1 GbE port that can run a Microsoft Windows Remote Desktop client (for example, mstsc.exe).

Procedure
1. Disconnect the laptop or workstation from all networks.
2. Connect the 1 GbE laptop or workstation port to the system using a Cat5E cable. Use the following illustration to locate the correct port. Your rear components may vary from what is shown.

3. Configure the laptop or workstation port to use the static IP address 192.168.42.99 with subnet mask 255.255.255.0 (a gateway address is not required).

IMPORTANT: Do not configure a laptop or workstation with an IP address of 192.168.42.100 or greater. These addresses might be used by the appliance.

a) Access the Network and Sharing Center from the Windows desktop.
b) Navigate to the available network connections.
c) Right-click the appropriate NIC and select Properties.
d) Select Internet Protocol Version 4 (TCP/IPv4), and then select Properties.
e) Select Use the following IP address, enter the IP address 192.168.42.99 and the subnet mask 255.255.255.0, and then click OK.
4. From the laptop or workstation, locate and select Remote Desktop Connection from the Start menu. In the Computer field, enter 192.168.42.100, and then click Connect.
5. In the Windows Security dialog, enter the credentials:
• User name: administrator
• Password: HyperConv!234

Testing network connectivity
Before continuing to configure the HC 380, test your network setup to ensure that IPv6 link-local is enabled on your network between your HC 380 nodes. To ensure IPv6 link-local is configured properly, perform the following test.
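For reference, a link-local IPv6 ping of the kind described in the procedure below might look like the following. This is only an illustrative sketch; the target address and interface identifiers are placeholders, so substitute values from your own environment (on Windows, the zone ID after % is the interface index shown by ipconfig).

From the Windows laptop or workstation:
ping fe80::9a4b:e1ff:fe5a:1234%12
(fe80::9a4b:e1ff:fe5a:1234 represents a hypothetical link-local address of an ESXi vmkernel port; %12 is the interface index of the laptop NIC.)

From an ESXi Shell, the equivalent check specifies the vmkernel interface with vmkping:
# vmkping -6 -I vmk0 fe80::9a4b:e1ff:fe5a:1234
A reply confirms that IPv6 link-local traffic is passing between the nodes on that VLAN.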

Procedure
1. (Optional) Test the IPv6 connectivity to the iLO of each HC 380 node. This test may not work if the switch your iLO is connected to does not support IPv6.
2. Test the IPv6 connectivity to the VMware ESXi hosts from the node you plan to use as the management VM.
You can also test network connectivity by pinging an IPv6 address. For more information on each of these tasks, see "Validating the switch configuration."

Licensing and installing NVIDIA Tesla M60 GPUs
Use this section if you use the VDI configuration of the HC 380 and plan to use NVIDIA Tesla M60 GPUs. The procedures in this section are used to perform the following:
• Install the GPU configuration utility
• Verify that the GPU mode is set to Graphics
• Install the NVIDIA drivers on the HC 380 vSphere nodes
The procedures in this section must be performed before configuring the HC 380 using OneView InstantOn.
If your HC 380 purchase includes NVIDIA Tesla M60 GPUs, you must visit the NVIDIA website to register your product and download the following:
• 90-day trial license
• Preferred Solution Provider list
• NVIDIA GRID software
• GPU mode change utility
• License server software
• Documentation

To maintain full functionality of your GPU after the 90-day trial license expires, you must purchase a license from a Preferred Solution Provider. Locate a Preferred Solution Provider using the provider list. For NVIDIA GRID quickstart documentation and resources, visit the NVIDIA website.

Installing the NVIDIA GPU mode change utility on the vSphere host
Use the following steps to install the GPU mode change utility that is used to verify the NVIDIA GPU mode.

Procedure
1. Download the GPU mode change utility by registering for the NVIDIA GRID software from the NVIDIA website.
2. On the vSphere host, extract the GPU mode change utility from NVIDIA-gpumodeswitch-2016-04.zip.
3. Copy the NVIDIA-GpuModeSwitch-1OEM.600.0.0.2494585.x86_64.vib to the root of the vSphere host.
4. To put the HC 380 node into Maintenance mode, run the following command:
# esxcli system maintenanceMode set -e true -t 0
5. To install the GPU mode change utility, run the following command:
# esxcli software vib install --no-sig-check -v /NVIDIA-GpuModeSwitch-1OEM.600.0.0.2494585.x86_64.vib
Since the software acceptance level is set to PartnerSupported by default, use the --no-sig-check option.
6. To remove the vSphere host from Maintenance mode, run the following command:
# esxcli system maintenanceMode set -e false -t 0
7. Wait for the host to exit Maintenance mode and then run the following command to reboot the vSphere host:
# reboot

Verifying the NVIDIA GPU mode
On the vSphere host, run the following command to list the current GPU mode:
# gpumodeswitch --listgpumodes
If the GPU mode is set to Graphics, proceed to the next task, "Installing NVIDIA GRID Manager software on vSphere." If the GPU mode is not set to Graphics, perform the following steps to change the GPU mode.

Procedure
1. To switch the mode, run the following command:
# gpumodeswitch --gpumode graphics
2. To confirm updating all graphics adapters, type y.
3. To reboot the vSphere host, run the following command:
# reboot
4. Verify that the GPU mode is set to Graphics by running the following command:
# gpumodeswitch --listgpumodes

Installing NVIDIA GRID Manager Software on vSphere

Procedure
1. Obtain the NVIDIA GRID Manager software by visiting the NVIDIA website.
2. Extract the GPU configuration utility from NVIDIA-GRID-vSphere-6.0-361.45.09-362.56.zip.
3. Copy the NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver_361.45.09-1OEM.600.0.0.2494585.vib to the root of the vSphere host.
4. To put the HC 380 node into Maintenance mode, run the following command from the vSphere host console (or SSH):
# esxcli system maintenanceMode set -e true -t 0
5. To install the NVIDIA GRID Manager software, run the following command:
# esxcli software vib install -v /NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver_361.45.09-1OEM.600.0.0.2494585.vib
6. To reboot the vSphere host, run the following command:
# reboot
7. To remove the vSphere host from Maintenance mode, run the following command:
# esxcli system maintenanceMode set -e false -t 0
8. To verify that NVIDIA GRID Tesla M60 graphics adapters are present and installed, run the following command:
# nvidia-smi

Installing HPE HC StoreVirtual Status Provider on the remote vCenter server
If you use a remote vCenter, install HPE HC StoreVirtual Status Provider on the remote vCenter server where HPE OneView for VMware vCenter is installed.

Procedure
1. On the management VM, navigate to C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment.
2. Copy the file HPE_HC_StoreVirtualStatusProvider.msi to the remote vCenter server where HPE OneView for VMware vCenter is installed.
3. On the remote vCenter server, launch the installer by double-clicking on the HPE_HC_StoreVirtualStatusProvider.msi file.
4. When the InstallShield Wizard is displayed, click Next.
5. Click Install.
6. Click Finish.

Configuring the HC 380 using OneView InstantOn
OneView InstantOn is the automated deployment tool that guides you through the steps to configure your HC 380 appliance. After you add all the configuration information in OneView InstantOn, you can deploy the system. Following successful completion of this configuration, you then perform the initial HC 380 Management UI configuration before you can begin deploying virtual machines.

Launching OneView InstantOn

Procedure
1. Review the following guidelines for using OneView InstantOn:
• Verify that your management VM contains the folder C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment. You may have to make the C:\ProgramData folder visible if it is hidden. Contact Hewlett Packard Enterprise Support if it does not contain the folder.
• Do not run OneView InstantOn while anything else is running on the system, including Windows Update or proxy configuration.
• Do not run OneView InstantOn while performing an HPE LeftHand OS upgrade.
• OneView InstantOn allows you to perform initial deployments and expansions of existing deployments. After a successful deployment, you can launch the tool to expand the system or view the settings, but you cannot change the settings. If you want to redeploy the system, you must first perform a Quick Reset.
• OneView InstantOn supports having one HC 380 cluster deployed for a vCenter datacenter. If you want to deploy multiple HC 380 clusters, each cluster must be deployed to a different datacenter.
• To complete the steps in this section, use the information you entered in the preinstallation worksheets. Note that some of the values in the screenshots vary from what you entered in the preinstallation worksheets.
• Tool tips and error information are available when you hover in a text box on a screen in OneView InstantOn. The information might take a few seconds to display.
• The OneView InstantOn version shown in the figures in this document might differ from what is installed on your system. The content of the screens is the same, however. In addition, the values used in the figures may not match the values that you should use in your environment. See the HPE HC 380 Software and Firmware Compatibility Matrix at http://www.hpe.com/support/hc380CMen for appropriate versions.
• To navigate through OneView InstantOn, you can click Next, or click a location in the left navigation pane. The information that you enter on a screen is automatically saved. Before you click the Deploy button, you can go back to any screen and change or add information.
• If the system seems unresponsive, do not attempt to restart it. If processing is occurring, allow it to complete.
2. When you connect to the management VM through Remote Desktop using a laptop or workstation, OneView InstantOn should start automatically. If it does not start automatically, navigate to the desktop and click the OneView InstantOn shortcut to start the application.

Completing the Introduction screen

Procedure 1. Review the information on the screen. If you have not already done so, complete the worksheets in the section "Preinstallation worksheets." 2. On the Introduction screen, accept the End User License Agreements, and then click Next.

Completing the vCenter screen
On the vCenter screen, select the instance of vCenter that you will use:
• Select Local if the system will use a local (included) copy of vCenter.
• Select Remote if the system will connect to vCenter currently running on another server. For more information about a Remote vCenter installation, see "Appendix F: Remote vCenter setup."

NOTE: CloudSystem supports a local vCenter deployment only.

Use one of the following sections, depending on whether you select Local or Remote.

IMPORTANT:
• You cannot change the datacenter or cluster name after completing a new system installation or expansion.
• Ensure that the datacenter you specify for the deployment does not have an HC 380 cluster already deployed. If it does, OneView InstantOn will not succeed.
• Ensure that you follow the VMware naming convention when creating datacenter and cluster names. If special characters are used in the names, OneView InstantOn will hang.

Selecting a local vCenter

Procedure 1. In the License field, enter the VMware vCenter Server 6 Standard license and click Apply. 2. In the Cluster field, specify the datacenter and cluster on VMware vCenter that will be used for the system hosts and VSA storage. Use the default datacenter/cluster of hpe-hc-dc/hpe-hc-clus, or specify your own datacenter and cluster names. To create a datacenter and cluster:

a) Click New. The Create New vCenter Datastore/Cluster window opens. b) Click Create. c) Enter a new datacenter and cluster name. d) Select this cluster name in the Cluster field on the vCenter screen by clicking the down arrow and selecting it from the drop-down list. 3. Click Next to continue to the Health screen.

Selecting a remote vCenter

Prerequisites
• Ensure that vCenter is not running on the Management VM of another HC 380 system. Otherwise, problems could occur during deployment.
• Ensure that the VMware vCenter user has access to the OneView for VMware vCenter Storage Administrator Portal. Otherwise, OneView InstantOn deployment will not complete successfully.

Procedure
1. Under Access, enter the following information:
• IP address of the remote vCenter server
• VMware vCenter credentials (username and password)
• SSO default port
2. In the Management VM ESXi Connectivity section, enter the IPv4 address (subnet mask and gateway) for the Management VM on the system.
OneView InstantOn verifies:
• The remote server can be accessed
• The remote instance of OneView for VMware vCenter is installed at the correct, minimum version
The health icon next to Access changes to green when verifications are successful. No license is required for OneView for VMware vCenter. Your switched network may require you to complete the Management VM ESXi Connectivity section before the vCenter verification in the Access section is marked successful.
3. After the Access and Management VM ESXi Connectivity sections report green status, apply a license for the remote instance of VMware vCenter if it is not already licensed. If it is already licensed, skip this field.
4. In the Cluster field, specify the datacenter and cluster on VMware vCenter that will be used for the system hosts and VSA storage. Use the default datacenter/cluster of hpe-hc-dc/hpe-hc-clus, or specify your own datacenter and cluster names.

To create a datacenter and cluster:

a) Click New. The Create New vCenter Datastore/Cluster window opens. b) Enter a datacenter and cluster name. c) Click Create. d) Select this cluster name in the Cluster field on the vCenter screen by clicking the down arrow and selecting it from the drop-down list. 5. Click Next to continue to the Health screen.

Completing the Health screen

Procedure 1. On the Health screen, verify that the expected nodes appear and that the present health is green for all nodes. If the health is red, verify that the cabling is correct. If only the local node is listed, you might need to restart OneView InstantOn. Also, since the management node must be booted before the other nodes, you might need to reboot the other nodes to ensure that this node is powered on first. You can deploy multiple systems at the same time. Up to 16 systems can be deployed simultaneously. By default, the system from which you are accessing the Management VM to complete deployment is selected.

2. Select the systems you want to deploy.
3. Click Next.

Completing the IP Assignments screen

Procedure 1. On the IP Assignments screen, enter the appropriate network settings, using the information collected in the preinstallation worksheets. After you provide the starting IP address in the range, OneView InstantOn displays the ending IP address based upon how many IP addresses are needed in the range. If you selected Remote on the vCenter screen, the Starting IP address field for the ESXi network is prepopulated with the IP address you entered on that screen.

2. Click Next.

Completing the Credentials screen

Procedure
1. On the Credentials screen, enter the preferred StoreVirtual credentials. You use these credentials to access the StoreVirtual Centralized Management Console when you apply the StoreVirtual licenses.
Credential requirements:
• User name:
◦ Must contain 3–30 characters.
◦ Must begin with a letter (a-z, A-Z).
◦ May contain ASCII letters, numbers, asterisks (*), underscores (_), or hyphens (-).
◦ Cannot contain the equal sign (=).
• Password:
◦ Must contain 5–40 characters.
◦ May contain most ASCII characters, UTF-8 characters, and the multi-byte character set.
◦ Cannot contain spaces, periods (.), colons (:), semicolons (;), forward or backward slashes (\ /), commas (,), single quotes (‘), or equal signs (=).
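As a convenience, a proposed user name can be checked against the rules above before you type it into the screen. The following PowerShell one-liner is only an illustrative sketch of those rules (it is not an HPE-supplied validator), and the sample name is a placeholder:

'hc380admin' -match '^[A-Za-z][A-Za-z0-9*_-]{2,29}$'

The expression returns True when the candidate starts with a letter, uses only the allowed characters, and is 3 to 30 characters long; a disallowed character such as = makes it return False.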

NOTE: You can change these credentials later using the StoreVirtual Centralized Management Console.

2. Click Next.

Completing the Settings screen
Use the information collected in the preinstallation worksheets to complete the fields of the Settings screen.

Procedure
1. In the General Settings section, enter the following information:
• Preferred storage system name
• Storage network domain name server address
• Network time protocol server address

2. In the Mail Settings section, enter the following information:
• Storage network mail server and port
• Sender email
• Recipient email
3. If you are deploying a two-node system, a Quorum Settings section appears which enables a file-share path to be specified for the cluster quorum witness file. Enter a name for the NFS file share that will store the Quorum Witness file.

Use the following example for the NFS file share name: 172.28.0.99:/Witness.

The NFS file share must meet the following requirements:
• It must be a network location that has a network path different from the iSCSI path.
• It must have write permissions.

If you prefer to use a VSA Failover Manager, enter an invalid NFS share IP address, and OneView InstantOn will create a Virtual Manager that can be configured in the StoreVirtual CMC as a post-deployment step. For more information, see "Appendix G: Management group quorum consideration."
4. Click Next.

Completing the Review Configuration Screen

Procedure 1. On the Review Configuration screen, ensure the information that you entered is correct. If you need to make changes, use the links in the left navigation pane to revisit the other screens.

2. When you are ready to proceed, click Deploy. After you click Deploy, the Deploy in Progress screen is displayed with a countdown timer. To view more information, click Details. Be aware that if you are deploying multiple systems, the time to complete deployment increases.

IMPORTANT: Do not close OneView InstantOn during the deployment process. If the deployment does not complete or hangs, see "Troubleshooting." You can also perform a quick reset. For more information, see Quick-reset.

When the deployment successfully completes, the Next Steps screen appears.

Completing the Next Steps screen
On the Next Steps screen, start the process of applying licenses. To view help documentation and complete the installation with fully licensed VSAs, the management VM must have internet access.

Procedure 1. Apply the VMware licenses. 2. Apply the StoreVirtual licenses. 3. Click Finish.

NOTE: To view the licensing links again, restart OneView InstantOn and launch the Next Steps screen by clicking the Next Steps link in the left navigation pane.

Applying VMware licenses
The VMware vSphere license on each HC 380 node is a trial license that is valid for 60 days from the date on which the node ships from the factory. Apply a purchased license before the trial license expires, or restrictions will occur. To apply your license for each of the ESXi nodes, use the following steps.

Procedure
1. Launch vCenter using the link Launch vCenter Web Client under the section License VMware software.
2. Log in using your vCenter credentials. Use vCenter credentials only; do not use a Windows account. For local vCenter, use the default credentials:
• User name: administrator@vsphere.local
• Password: HyperConv!234
3. Apply the VMware vSphere host licenses for each ESXi host using the following steps:
a) In the Administration section, select Licensing > Licenses.
b) Select the Licenses tab, and click Create New Licenses (green plus icon).
c) In the Enter license keys page, type or copy and paste a license key.
d) Click Next and follow the onscreen instructions.
e) On the Assets tab, select Hosts.
f) Select the host (or Shift+click to select multiple hosts), right-click, then select Assign License.
g) In the Assign License dialog, select the license you entered in step c and click OK.
After you apply the license to a vSphere host, the Action field for the host changes to a green check box.
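If you prefer to script host licensing instead of using the Web Client, PowerCLI offers a rough equivalent. The sketch below is not part of the HPE procedure; the license key and password are placeholders, and the parameters should be verified against the PowerCLI version installed on your management VM:

connect-viserver -server localhost -user administrator@vsphere.local -password <password>
Get-VMHost | Set-VMHost -LicenseKey "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"

The first command connects to the local vCenter; the second assigns the same key to every host in the inventory. To license a single host instead, pass its name to Get-VMHost.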

Applying StoreVirtual licenses
The StoreVirtual VSA license installed on each HC 380 node is a trial license valid for 60 days from the date the deployment completes. Apply the StoreVirtual VSA license that is included in your HC 380 purchase before the trial license expires, or restrictions will occur. Use the following steps to obtain the StoreVirtual VSA license key from the Hewlett Packard Enterprise Licensing Portal and apply it using the StoreVirtual Centralized Management Console.

Procedure
1. Under "License StoreVirtual VSAs" on the Next Steps screen, click Launch the HPE Licensing Portal (https://myenterpriselicense.hpe.com). Log in to the portal.
2. Use the StoreVirtual VSA MAC addresses under the Feature Key heading on the Next Steps screen to obtain license keys:
a) Copy the MAC address by right-clicking the screen on top of the value that you want to copy and selecting copy from the context menu.
b) Paste the information into a text editor.
c) Copy and paste the information from the text editor into the Hewlett Packard Enterprise licensing portal.
3. Under "Manage advanced StoreVirtual features" on the Next Steps screen, click Launch the HP StoreVirtual Centralized Management Console.
4. Enter the license keys you obtained from the Hewlett Packard Enterprise licensing portal.
For more information about StoreVirtual licensing, see "Registering advanced features" in the HPE StoreVirtual Storage User Guide found on the Hewlett Packard Enterprise website.

Quick-reset
Perform a quick-reset in the following circumstances:
• OneView InstantOn fails during initial system setup and no user-created virtual machines have been deployed.
• An existing system must be returned to the factory state for a specific reuse purpose (demo system).

IMPORTANT: If your system includes user-created virtual machines (anything other than the StoreVirtual VSA VMs and the Management VM), shut down and remove the user-created virtual machines before starting a quick-reset. This includes the HPE-HC-mgmtui and HPE-HC-oneview VMs if they have been deployed. If you do not perform this step, the quick-reset might hang. The only way to resolve the issue is to repeat the quick-reset operation or complete the standard USB-based node recovery.

During a quick reset, a set of scripts is executed on all nodes of a Hyper Converged 380 system to return each node to the factory state. On each ESXi host, all virtual machines are removed and any existing datastores are unmounted and removed. After executing a quick reset, run OneView InstantOn again to return your system to a functional state.

Quick reset guidelines
• The quick reset deletes all user-created datastores and virtual machines. Hewlett Packard Enterprise recommends that you back up user-created virtual machines by moving them to another storage device or hypervisor.
• The quick reset does not upgrade your Hyper Converged system to a newer version, nor will it upgrade (or downgrade) the ESXi version.
• All logs are stored in /scratch/log/kickstart/.

Performing a Quickreset

NOTE: The files required to perform a Quickreset are located in the datastore of each server in the "recovery/quickreset" folder. The files must remain in the local datastore for subsequent use.

The following procedures are required to complete a Quickreset and must be completed in the listed order.

Procedure
1. Remove or disable in the system BIOS all PCI adapters added to the system after it was deployed, not including network interface cards.
2. Power off and delete VMs.
3. Run the Quickreset script on each node.
4. Verify that Quickreset completed successfully.
5. Transfer the OneView InstantOn deployment files.
6. Remove iLOs from control of HPE OneView.
7. Run OneView InstantOn.

Powering off and deleting VMs
For Quickreset to execute successfully, power off and delete all virtual machines other than:
• StoreVirtual VSA VMs of each node
• HPE HC380 Management VM (Windows Server 2012)
To power off and delete VMs, use one of the following:
• vSphere Web Client
• VMware PowerCLI

Using vSphere Web Client to remove VMs
On a running cluster, use the vSphere Web Client to power off and delete all VMs except those named "SVVSA-" and "HPE-HC-mgmt-".

Using VMware PowerCLI to remove VMs
If there are numerous VMs, consider running VMware PowerCLI commands from the HC 380 Management VM.

Procedure
1. Open the VMware PowerCLI command window on the HC 380 Management VM.
2. Connect to vCenter Server. If your vCenter server is local, use the following command:

connect-viserver -server localhost -user administrator@vsphere.local -password <password>
If your vCenter server is remote, use the command:
connect-viserver -server <vCenter_IP_or_name> -user administrator@vsphere.local -password <password>

For example, for a remote vCenter at 172.28.0.222:
connect-viserver -server 172.28.0.222 -user administrator@vsphere.local -password <password>

3. Get the names of the vSphere clusters using the following command: Get-Cluster.

From the list of clusters, identify the cluster for the environment to be reset. For the example output, the cluster is Bravo-Cluster12. 4. List all VMs of the vSphere cluster except the VSA VMs and the HPE-HC-mgmt-* VM, and verify that the list contains the VMs to be deleted. Use the following command (including the trailing hyphen in the HPE-HC-mgmt- string):

Get-VM -Location <cluster_name> | foreach {$_.Name} | findstr /v "SVVSA" | findstr /v "HPE-HC-mgmt-"
The following example shows an HC 380 cluster named "Bravo-Cluster12."

Verify that the resulting list includes the VMs to be deleted.
5. To power off the VMs, use the following command:
Get-VM -Location <cluster_name> | foreach {$_.Name} | findstr /v "SVVSA" | findstr /v "HPE-HC-mgmt-" | foreach {stop-vm -vm $_ -confirm:$false}


6. After the VMs are powered down, remove them using the following command:
Get-VM -Location <cluster_name> | foreach {$_.Name} | findstr /v "SVVSA" | findstr /v "HPE-HC-mgmt-" | foreach {remove-vm -vm $_ -DeletePermanently -confirm:$false}

Running the Quickreset script on each node
Quickreset is launched on all nodes manually. Perform the following steps on each node.

Quickreset can be run on each node simultaneously. (Hewlett Packard Enterprise recommends first starting Quickreset on the system hosting the HC 380 Management VM.)

Procedure
1. Launch the iLO Remote Console and activate the ESXi Shell by pressing Alt+F1. Log in to ESXi. Alternatively, an SSH session to the host can be used, but it will disconnect as the process runs when networking is reset. Using the console allows you to monitor progress.
2. Run the Quickreset script by entering the following command:
# /vmfs/volumes/datastore*/recovery/quickreset/HPE-HC-quickreset.sh
(This can be abbreviated to # /v*/v*/d*/r*/q*/*q*sh)
The following example output appears. At the prompt, enter Y, y, N, or n.
[root@:~] /v*/v*/d*/r*/q*/*q*sh

HPE Hyper Converged 380 Quickreset Utility ------

HC380 server information: Appliance node for CloudSystem Serial number is CZ1234ABCD StoreVirtual VSA Entitlement Order Number is PR12345678

WARNING: Node configuration will be reset
Do you wish to continue? (Yy or Nn):y
Quickreset in progress, wait 10-15 minutes (longer if there are many datastores mounted).
Server will power-off when reset is complete
[root@H:~]
3. The script performs its activities in the background and logs output to a file. There will be no output visible on the console, but you can monitor progress of the reset with the following command:
# tail -f /scratch/log/kickstart/nodereset.log
4. After approximately 10-15 minutes, the system will automatically power off.
You may encounter issues during the Quickreset process.
• If the Quickreset progress stalls (there are no new messages output for a considerable time when monitoring the nodereset.log file), or if the command prompt does not reappear after issuing the HPE-HC-quickreset.sh command, power off the nodes using iLO and then restart the Quickreset when the nodes reboot. Quickreset redeploys the Management VM on each node, which takes several minutes. The log file reports when this step is occurring.
• The following error messages may be seen on the console when Quickreset runs:
Volume "<volume>" cannot be unmounted. Reason: Busy
No volume with uuid '<uuid>' was found
These errors can be ignored.

Error messages in the “nodereset.log” file may be seen and can usually be ignored. Verify success of Quickreset by looking for the presence of the file SUCCESS in /scratch/log/kickstart when the node reboots.

Verifying Quickreset completed successfully
To configure a new cluster, choose which server will be used for connecting to the HC 380 Management VM and for running OneView InstantOn. This server is used to move the OneView InstantOn post-deployment files from the server to the HC 380 Management VM. Perform the following steps with the chosen server:

Procedure
1. Browse to iLO and launch the iLO Remote Console.
2. Power on only the server that will run the HC 380 Management VM and OneView InstantOn.
3. Wait several minutes for the boot to complete, for the HC 380 Management VM to start, and for the vCenter Server to start up.
4. After the ESXi boots (while waiting for the VM and vCenter Server), activate the ESXi Shell (press Alt+F1) and log in using the following credentials:
• User name: root
• Password: HyperConv!234
You might need to first enable use of the ESXi Shell using the console Troubleshooting Options menu.
5. Verify that Quickreset completed successfully. List the /scratch/log/kickstart directory and check for the presence of a file named "SUCCESS".
6. If the SUCCESS file is present, log out of the ESXi Shell and return to the main console by pressing Alt+F2. Hewlett Packard Enterprise recommends that the ESXi Shell and SSH access are disabled using the Troubleshooting Options menu on the console.
If the file SUCCESS is not present, but the file FAILURE is, then an error occurred. If an error occurred, first examine the following files to identify errors. It might also be necessary to look at other files in the directory.
/scratch/log/kickstart/validation.txt
/scratch/log/kickstart/post_nodereset.log
/scratch/log/kickstart/system.info
If the cause of an error cannot be resolved, retry the Quickreset. If that does not succeed, perform the USB Recovery to reinstall the server. For more information about the USB Recovery, see the HPE Hyper Converged User Guide at http://www.hpe.com/support/hc380UGEen.
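For reference, step 5 can be performed from the ESXi Shell with commands similar to the following (a minimal sketch; the file names come from the list above):
# ls /scratch/log/kickstart
# cat /scratch/log/kickstart/validation.txt
The first command shows whether SUCCESS or FAILURE was created; the second displays one of the log files to examine if FAILURE is present.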

Transferring OneView InstantOn deployment files

Procedure 1. Connect a workstation or laptop to the HC 380 management VM. For more information about connecting, see Configuring a laptop or workstation to access the system. 2. Log in to the HC 380 Management VM using Remote Desktop. 3. Click the Desktop tile on the Start menu.

IMPORTANT: Wait for OneView InstantOn to launch before proceeding with the next step.

4. Launch a PowerShell window and enter the command: d:\pdconfig.ps1 The following output is displayed.

5. If you use version 1.1 Update 2, allow the pdconfig.ps1 script to close OneView InstantOn. If you use an earlier version, close OneView InstantOn. 6. Close the PowerShell window.

Removing iLOs from control of HPE OneView
During the deployment of the HC 380, HPE OneView took control of the iLOs on the HC 380 servers. This association must be removed to allow the new deployment to complete successfully. When an iLO is under the control of HPE OneView:
• A message similar to the following is displayed on the iLO page: Warning: This system is being managed by HPE OneView. Changes made locally in iLO will be out of sync with the centralized settings and could affect the behavior of the remote management system.
• An HPE OneView page is added to the iLO navigation tree.

Procedure 1. Navigate to the HPE OneView page.

2. Click the Delete button in the Delete this remote manager configuration from this iLO section. A warning message similar to the following appears: "Proceed with this deletion only if this iLO is no longer under the control of HPE OneView."
3. Click OK. The HPE OneView page is removed from the iLO navigation tree.

For more information, see the HPE iLO 4 User Guide on the Hewlett Packard Enterprise website.

Running OneView InstantOn after a Quickreset
The remaining servers are now powered on. OneView InstantOn is used to initialize the environment again. After a Quickreset, OneView InstantOn is executed in the same way as it was in the original deployment. No information about the previous cluster environment is retained. All initialization tasks must be performed again.
If the collection of nodes is a mixture of appliance and expansion types, then all nodes can be selected, and the cluster recreated with all of them. After a Quickreset, the expansion nodes do not need to be added separately to an existing cluster.
To run OneView InstantOn again after a Quickreset, see "Configuring the system".

Cluster deployment stalls after a Quickreset

Symptom
Cluster deployment with HPE OneView InstantOn stalls after a Quickreset. The system reports the issue as a vSphere High Availability (HA) configuration error.

Cause
The stall is known to occur when the default datacenter and cluster names are selected in HPE OneView InstantOn. There are no visible signs of the HA configuration error. Check for and resolve vSphere HA-related issues. The following procedure can be completed while HPE OneView InstantOn is completing its configuration.

Action

Procedure 1. Using the management VM, launch vSphere Client and log in to the VMware vCenter server using the following values. (substitute appropriate values if a remote vCenter Server is being used)

• IP address/Name: localhost
• User name: administrator@vsphere.local
• Password: HyperConv!234
The Client might launch slower than normal. You might see some alarms reported against the cluster name, hosts, and VMs. These alarms are reporting HA problems with the configuration. You might also see messages in the Recent Tasks window reporting an error when migrating the Management VM to shared storage.
2. Resolve the issues by disabling and enabling vSphere HA:
a) Right-click the cluster name in the left pane of the client, and select Edit Settings….
b) Uncheck the Turn On vSphere HA item.
c) Click OK. The alarms clear and migration of the Management VM begins. Progress is displayed in the Recent Tasks window.
d) When the VM migration is complete, right-click the cluster name and select Edit Settings… again.
e) Check the Turn On vSphere HA item.
3. Click OK. The servers configure vSphere HA. The remaining HPE OneView InstantOn deployment steps continue.

Installing HC 380 Management UI

IMPORTANT: In this procedure, you specify a new password for the HC380 administrator account that is between 8 and 50 characters. Enter the new password carefully. If you mistype and do not recall what you entered, the user interface is unusable.

NOTE:
• Accessing HC 380 Management UI requires the Mozilla Firefox web browser.
• Before beginning this step, launch vCenter to determine if a reboot is required. Also, check for any errors with the high availability configuration. For more information about resolving the errors, see "Cluster deployment stalls after a Quickreset."

After a successful configuration using OneView InstantOn, you are prompted to install HC 380 Management UI. If you do not receive a pop-up window, check the task bar to see if the window appears minimized. If so, maximize the window.

It can take up to an hour before the HC 380 Management UI window is displayed, and even longer when using a 1GbE network connection with a remote vCenter. Also, between the time the Next Steps screen is displayed and the HC 380 Management UI window is displayed, OneView InstantOn is still configuring the system. Wait for these steps to complete. The HC 380 Management UI deploys and configures HPE OneView as a part of its first-time setup. It does not support using an already configured HPE OneView appliance.

Procedure 1. When the configuration dialog appears, provide a password for the administrator account (between 8 and 50 characters long) and accept the end-user license agreement. To verify the password characters, click the eye icon. You cannot change the password without reinstalling the Management UI.

2. Specify the ESXi management network addresses to be used by HC 380 Management UI. To complete the fields, use the information collected in the preinstallation worksheets.

a) Enter an unused IP address into the IP address field. b) Enter the subnet mask value into the Subnet mask field. c) Enter the gateway value into the Gateway field. 3. Provide the vCenter administrator username and password. For a local vCenter, use the following values:

• User name: administrator@vsphere.local
• Password: HyperConv!234

4. Click Submit. You may have to wait an hour or longer, depending on how many nodes are being deployed. The user interface may seem unresponsive, but you must allow the processing to continue. Do not attempt to close the windows by clicking the x button.
View the deployment log by launching PowerShell and entering the following commands:
CD c:\programdata\hewlett-packard\storevirtual\instanton\log
Get-Content -wait ./PostDeployment.log
When the configuration successfully completes, a dialog appears that provides a hypertext link to the HC 380 Management UI.

Completing the initial HC 380 Management UI configuration

To complete the initial HC 380 Management UI configuration, complete the following steps.

Procedure
1. Perform the initial HC 380 Management UI setup.
2. Configure LDAP or Active Directory.
3. Create datastores.

Performing the initial HC 380 Management UI setup
Most of the setup values are populated during the configuration using the OneView InstantOn tool, and you can use the Settings area in HC 380 Management UI to verify or update the values.

Procedure
1. Using Mozilla Firefox, log in to HC 380 Management UI using Administrator and the new password that you set in the procedure "Installing HC 380 Management UI." A new screen appears to notify you that your HC 380 is ready to be shaped.
2. Click Setup. The Setup screen appears.
3. Click the pencil icon for the Identity section.
4. Provide the Embedded OneView IPv4 address for the HPE OneView configuration. This address was collected in the preinstallation worksheets.
5. Verify and update the values in the other fields, then click OK.
6. Click the pencil icon for the vCenter section.
7. Verify the vCenter IP address and click Connect. This address was configured during the OneView InstantOn deployment and shown on the Next Steps screen.
8. Click Trust.
9. In the vCenter Access screen, click OK.
10. To connect the HC 380 to an LDAP server or Active Directory server, click the pencil icon for the Directory section. For more information, see "Configuring LDAP or Active Directory."
11. Click Setup in the Nodes section.
12. Specify the iLO credentials and iLO passwords for each hypervisor host, as shown in the preinstallation worksheets. Ensure that the iLO matches the correct host. If your environment uses common credentials for iLO, provide the common user name and password and enable each host to use the common credentials.
13. Click OK.
14. Click Submit. This step takes several minutes to complete.

Configuring LDAP or Active Directory
The HC 380 Management UI, when used in conjunction with LDAP or AD, can restrict users so that they only see their own VMs. If HC 380 is not configured with LDAP or AD, this functionality is not available. If the HC 380 is configured with LDAP or AD, use the following steps to connect to an LDAP or AD server:

Procedure
1. Click Connect in the Directory section.
2. Select LDAP for a Linux server or Active Directory for a Windows server.
3. Provide the fully qualified domain name or IP address for the LDAP or Active Directory server host and click Connect. The HC 380 appliance downloads the essential certificate.
4. Read through the certificate, and click Trust.
5. For the LDAP server, provide login credentials along with the Base Domain Name and click Verify. Base Domain Name example: DC=hpe,DC=com
6. Provide a user name and password with access to the directory and click OK. The list of directory groups appears.
7. Click the plus sign next to each directory group to associate groups in the directory with the HC 380 user roles.

NOTE:
• Verify connectivity between the HC 380 and your AD server. For AD Certificate Services, HC 380 uses the default port (636) to connect to the AD server using SSL.
• If the directory server is added as a user in the registered groups, do not prefix the domain name before the username (domainname\username).
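A quick way to confirm the connectivity mentioned in the note is to test the SSL port from the management VM, if Test-NetConnection is available there (it requires PowerShell 4.0 or later). The host name below is a placeholder for your directory server; this is only a convenience check, not part of the HPE procedure:
Test-NetConnection -ComputerName ad.example.com -Port 636
If TcpTestSucceeded is True in the output, the management VM can reach the directory server on port 636.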

User roles system access
Before using the system, Hewlett Packard Enterprise recommends that the following user groups are added to the Active Directory/LDAP and users added to each group. For information about adding the user groups to your server, see the documentation for your server. The following table defines the list of user groups and the access rights of each group.

User: Infrastructure Administrator
Access:
• Systemwide configuration: view and edit, Dashboard
• VM Actions: view activity
• VM-Sizes: add and edit
• VM-Images: add and edit

User: Virtual Administrator
Access:
• VM Actions: view activity
• VM-Sizes: add and edit
• VM-Images: add and edit

User: Virtual User
Access:
• VM: actions
• View: activity

User: Read-only User
Access:
• Systemwide configuration: view
• Dashboard
• View: activity
• VM-Sizes: view
• VM-Images: view
• VM: view

Creating datastores
The initial configuration process uses only a portion of the total available storage. To utilize the remaining storage in your HC 380, you must create datastores.

Procedure 1. Open a browser and navigate to the vSphere Web Client. The login window appears. 2. Enter your user name and password for the vSphere Web Client. 3. Click Login. 4. To familiarize yourself with the layout of the vSphere Web Client, review the information on the Getting Started tab.

5. In the Navigator, select vCenter. 6. In the Navigator, select Hosts and Clusters > Cluster. 7. Select the specific cluster for which you want to create a datastore. The Summary tab for the selected cluster appears.

8. Select the Manage tab, and then select HP Management. The Actions menu appears on the right side of the window. The vSphere Web Client may not always refresh quickly. If you are not seeing what is expected, click the Refresh icon in the top menu bar or the disk refresh icon on the right side of the window.
9. From the Actions menu, select Create Datastore. The Create Datastore wizard appears. Alternatively, right-click the cluster name and select All HP Management Actions > Create Datastores.

10. Select the default location, and then click Next. 11. On the Select storage screen, select the applicable storage pool. a) Select the size and number of datastores you want to create. b) Select NETWORK_RAID_10 in the RAID level drop-down box. 12. Click Next.

The storage window appears.

13. Enter a unique name for the new datastore, and then click Next. 14. On the Validation screen, verify that the information entered is correct. If so, click Next. If not, click Back and return to the applicable screen to edit it. 15. On the Ready to complete window, click Finish to create the datastore.

Installing CloudSystem

If you ordered CloudSystem, the CloudSystem installation utility is pre-installed on the system. Before you install CloudSystem, you must have successfully completed the steps to configure the HC 380 using OneView InstantOn. The CloudSystem installer assumes the following:
• The HC 380 system has not been modified (networking, storage, or other system changes) since the OneView InstantOn deployment was completed.
• The user has an understanding of the CloudSystem networking requirements and has reviewed the HPE Helion CloudSystem 9.0 Network Planning Guide found on the Hewlett Packard Enterprise website.
• The preinstallation worksheet has been completed for CloudSystem.
• Networking configurations in the top-of-rack (TOR) switches have been completed. If the Data Center Management (DCM) network VLAN is not plumbed through the top-of-rack switches, communication with vCenter or ESXi hosts may be lost, and the installation process may fail.
• The MySQL JDBC driver has been downloaded and is accessible from the management VM.
• Each ESXi server has SSH enabled.
Limitations
• Only a local vCenter (running on the HC 380 Management VM) is supported for CloudSystem installation. The installation process expects specific free physical network interface cards (NICs) on the ESXi hosts managed by vCenter.
• The ESXi management network must reside on a VLAN-tagged network. This network is a CloudSystem requirement.
• When expanding nodes, manual steps are required to connect to the environment enabled for CloudSystem. The installation process migrates the management network from a flat to a VLAN-tagged network, and from a standard vSphere switch to a distributed vSphere switch. During the expansion process, the networking must be migrated in a similar process.
Running the installation utility

Overview
The CloudSystem installation process can take several hours to complete, so adequate preparation can help limit the amount of rework. This guide assumes that the user is familiar with:
• The overall installation process as referenced in the HPE Helion CloudSystem 9.0 Installation Guide
• The background information provided in the HPE Helion CloudSystem 9.0 Administrator Guide
• The HPE Helion CloudSystem 9.0 Network Planning Guide
These guides can be found on the Hewlett Packard Enterprise website.
For the HC 380, CloudSystem can be installed in a hands-off manner after the prerequisite data is provided by the user. The CloudSystem deployment process will spawn a process that performs the following:
• Modifies vCenter networking for the ESXi hosts
• Creates an additional compute cluster in vCenter
• Allocates storage from the VSA and deploys CloudSystem
• Configures some of the networking components inside CloudSystem
Before beginning the installation process, you should have completed the preinstallation worksheet for CloudSystem.

Post-deployment options are available to upgrade to CloudSystem 9.01 and 9.02 and change CloudSystem passwords. The installer has been designed so that it can be launched multiple times. If an error causes the CloudSystem installer to fail and that situation is resolved, rerunning the tool allows it to begin where it stopped.

Downloading the MySQL JDBC driver
Before starting the installation, download the MySQL JDBC driver and have it available on your staging environment.

Procedure
1. Download the MySQL Connector/J (JDBC driver) package from the Hewlett Packard Enterprise website. This website requires a Hewlett Packard Enterprise Passport account.
2. Extract libmysql-java_5.1.32-1_all.deb from the zip file and add it to your staging environment.
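If you stage the driver from a Windows system, the extraction in step 2 can be done with PowerShell (Expand-Archive requires PowerShell 5.0 or later; otherwise, extract the zip file with Windows Explorer). The archive name and destination folder below are placeholders, not values from this guide:
Expand-Archive -Path .\mysql-connector-java.zip -DestinationPath C:\staging
After extraction, confirm that libmysql-java_5.1.32-1_all.deb is in the destination folder and keep it where the management VM can reach it, as described above.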

Enabling SSH on each ESXi server

Procedure
1. Using iLO, connect to the ESXi server.
2. Launch the iLO Integrated Remote Console.
3. To access the Customize System/View Logs menu, press F2.
4. Type the username and password.
5. Access the Troubleshooting Mode Options menu.
6. Enable SSH.
7. To exit the Troubleshooting Mode Options menu, press Esc.
8. To close the Customize System/View Logs menu, press Esc.
9. Repeat these steps for each ESXi server.
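If the ESXi Shell is already available on a host, SSH can also be toggled from the command line instead of the console menus. This is a convenience alternative, not part of the HPE procedure:
# vim-cmd hostsvc/enable_ssh
# vim-cmd hostsvc/start_ssh
The first command allows the SSH service to run; the second starts it immediately. The matching stop_ssh and disable_ssh subcommands reverse the change when SSH is no longer needed (see "Disabling SSH on each ESXi server").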

Running the installation utility
The CloudSystem installer can be launched from the management VM by using a connected laptop or workstation. For more information about connecting a laptop or workstation, see "Configuring a laptop or workstation to access the system." Before launching the installation utility, you should have completed the preinstallation worksheet for CloudSystem. If you encounter issues during the installation process, see "Troubleshooting CloudSystem."
Launch the installation utility by using the CloudSystem 9.01 Installation desktop icon or by executing the file C:\Program Files\HPE\HPECS9Setup\CloudSystem9.exe.

Completing the Introduction screen

Procedure 1. On the Introduction screen, review the content and accept the vSphere PowerCLI End User License Agreement. 2. Click Next to proceed to the Installation Options screen.

Completing the Installation Options screen

Procedure 1. On the Installation Options screen, specify whether you are installing the Foundation or Enterprise version of CloudSystem. The Enterprise version is a superset of Foundation and includes the Cloud Service Automation and Operations Orchestration products. 2. Specify the number of management servers for CloudSystem 9. Select three management servers if the environment has four or more nodes. 3. Specify the location of the MySQL JDBC driver by clicking Browse and selecting the location of the libmysql-java_5.1.32-1_all.deb driver file. 4. Click Next to proceed to the Core VLAN IDs screen. After installation, you have the opportunity to enter the license for the enterprise components.

Completing the Core VLAN IDs screen

Procedure
1. On the Core VLAN IDs screen, provide networking VLAN IDs for each of the five CloudSystem networks. These values can be between 1 and 4095. The Data Center Management Network VLAN coincides with your management network. VLANs for the Storage trunk must represent unused VLANs in your environment. Select any unused ID to represent these networks.
2. Click Next to proceed to the Networks screen.

Completing the Networks screen

Procedure 1. On the Networks screen, specify up to four Provider and four Tenant VLAN identifiers and the corresponding networking subnet range (in CIDR format). You can also specify the external networking VLAN identifier and the corresponding CIDR networking value for the external VLAN. See HPE Helion CloudSystem 9.0 Network Planning Guide for more details on provider and tenant networks. This guide is found on the Hewlett Packard Enterprise website. 2. Click Next to proceed to the IP Assignments screen.

Completing the IP Assignments screen

Procedure 1. On the IP Assignments screen, specify the range of IP addresses used for the CloudSystem management VMs on the Data Center Management (DCM) and Consumer Access Network (CAN) networks. The values for the DCM network should be on the same subnet as the networking addresses supplied in OneView InstantOn for the ESXi Network Components in the IP Assignments screen. 2. Click Next to proceed to the Credentials screen.

Completing the Credentials screen

Procedure 1. On the Credentials screen, provide the vCenter administrator and VSA administrator credentials. 2. Click Next to proceed to the Appliance Network Settings screen.

Completing the Appliance Network Settings screen

Procedure 1. On the Appliance Network Settings screen, set the Data Center Management and Consumer Access Network subnet DNS values. These subnets are reflected in the appliance devices below. The appliance IPs are determined from IP values given in the IP Assignments screen. 2. Click Next to proceed to the Review Configuration screen.

Completing the Review Configuration screen

Procedure 1. In the Review Configuration screen, review all selected values and verify that no outstanding data is missing or in a conflicted state. 2. When all items are denoted with a green square, click Deploy to launch the CloudSystem installer.

The Deploy screen appears to provide feedback on the installation process. A CloudSystem Enterprise installation can take about 2 hours to complete. Details of the installation process are shown in the Details window.

Completing the Next Steps screen
The Next Steps screen notifies you of the completion status of the installation and provides a list of items to complete while additional processing occurs.

Procedure
1. Perform one or all of the following:
• View the CloudSystem 9 documentation.
• Open the CloudSystem 9 Operations Console.
• Open the OpenStack user portal.
• Launch the CS9.02 upgrade script. The script will upgrade CloudSystem 9.00 to 9.02. The upgrade process takes around 2 hours. You can monitor the progress of the upgrade by accessing the log file http://DCM_UA1_IP_address:8086/update-logs/cs-update-install-status.log. You can also choose to perform the upgrade at a later time by using the upgrade script. See "Upgrading CloudSystem" for more information.
2. Click Next.

After you finish using the Next Steps screen, you must disable SSH on each ESXi server before using the product.

Disabling SSH on each ESXi server

Procedure
1. Using iLO, connect to the ESXi server.
2. Launch the iLO Integrated Remote Console.
3. To access the Customize System/View Logs menu, press F2.
4. Type the username and password.
5. Access the Troubleshooting Mode Options menu.
6. Disable SSH.
7. To exit the Troubleshooting Mode Options menu, press Esc.
8. To close the Customize System/View Logs menu, press Esc.

Upgrading CloudSystem
If you did not upgrade CloudSystem using the installation utility, you can use the information in this section.

Prerequisites for upgrading CloudSystem
Before beginning the update:
• Download the file HC380-CloudSystem_9.01_Installer.zip.
• Set the environment variable CS9INSTALL. The attributes for this variable are set during the initial installation.

Upgrading CloudSystem 9.0 to 9.01 and 9.02

Procedure
1. On the management node (on which the installation utility was run), launch PowerShell.
2. Locate the file runCS9update.ps1 by typing cd $env:CS9INSTALL. The file is located at C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\CS9Install\Installer or %CS9INSTALL%\Installer.
3. Run the file by typing .\Installer\runCS9update.ps1.
4. In the HPE Pre-Expansion Credentials dialog, type the cloudadmin password and click Next.
5. In the HPE Pre-Expansion Credentials dialog, type the vCenter administrator password and click Next.
The upgrade process proceeds and exits upon completion. You can monitor the progress of the upgrade by accessing the log file http://DCM_UA1_IP_address:8086/update-logs/cs-update-install-status.log.

Tenant and Provider Networking

Tenant

Procedure
1. Complete the following steps in the CloudSystem Operations Console:
a) Point the browser to the URL of the Operations Console. The default IP is specified on the Appliance Hostnames panel in the installation and deployment utility.
b) Log in as the admin user, using the default password unset.
c) Select Menu > Networking > Tenant Networks.
d) Add the Segmentation ID range (segmentation is the same as VLAN) with the Add Segmentation ID Range button. If there is only one VLAN, you can enter it by itself.
2. Complete the following steps in the Horizon console:
a) Point the browser to the URL of the Horizon Console. The default IP is specified on the Appliance Hostnames panel in the installation and deployment utility.
b) Log in as the admin user; the default is admin/unset.
c) Select Project > Network > Networks and then + Create Network.
d) On the Network tab, supply the network name (convention suggests Tenant). Keep Admin State set to UP.

   e) On the Subnet tab, provide the Subnet Name, Network Address CIDR, and Gateway IP. Keep IP Version set to IPv4.
   f) On the Subnet Detail tab, provide the range of network addresses.
   g) Click Create when finished.

Provider

Procedure

1. Launch the CloudSystem Operations Console.
2. Select Menu > Networking > Provider Networks and then Add Provider Network.
3. Supply the network name (convention suggests Provider) and Segmentation (VLAN) ID.
4. Use the default demo project unless you have defined a different project.
5. Add the subnet details: subnet name, CIDR, gateway, and IP range (Allocation Pools).

External

CloudSystem 9.0 supports a single external network and the respective VLAN that was used during the csdeploy setup.

Procedure 1. Verify that the external network is in the OneView CS9CloudData network set. 2. In the Horizon console, perform the following steps: a) Select Admin > System > Networks and then + Create Network. b) Fill out the content similar to what is illustrated (Flat network type and external physical network).

c) Click Create Network. d) Create the external subnet: I. Click on the hyperlink name of the newly created external network to open the network detail. II. Click +Create Subnet. III. Provide the External network subnet name, network address CIDR, and Gateway IP.

   IV. On the Subnet Detail tab, either enable or disable DHCP, and provide the Allocation Pools, DNS Name Server, and Host Routes.
   V. Click Create. This step will fail if the External network has not been set up with at least one IP address that can be used as a floating IP address.
3. In the Horizon console, perform the following steps:
   a) Select Project > Compute > Access and Security.
   b) Under the Floating IPs tab, click Allocate IP to Project. This step assumes that a range of IP addresses has been allocated to the External network.
   c) Click Allocate IP.

Validating CloudSystem

To check the status of the nodes in a trio, use the Monitoring Dashboard.

Procedure

1. Open a browser and point to ma1, for example http://10.100.3.10.
2. Accept the EULA and click Update.
3. Log in to the CloudSystem Operations Console; the default user is admin with a password of unset.
4. Expand the Menu drop-down, then select General > Monitoring Dashboard.
5. To launch the Monitoring Dashboard, click the link. A new tab opens for HPE Helion OpenStack. Log in with the admin user.

All services and servers displayed on the screen should be in a green status. If any are not green, check the alarms to see what problems need to be resolved or acknowledged. For reference, see the HPE Helion CloudSystem 9.0 Troubleshooting Guide on the Hewlett Packard Enterprise website. The Monitoring Dashboard may be disabled until the JDBC driver has been downloaded and installed after all of the CloudSystem 9.0 appliance VMs have been brought up.

Create router

Procedure 1. Launch the Horizon console. 2. Select Project > Network > Routers and then + Create Router. 3. Provide a name for the router and click Create Router. 4. Select the router name hyperlink and then the + Add Interface button. Select the appropriate tenant subnet (repeat for additional tenant and provider subnets). Do not select the External network subnet in this step. Click Add Interface when finished. 5. Select Project > Network > Routers, and for the router you just created, select Set Gateway and then select the External network. Finally, select Set Gateway to confirm the selection. 6. Select Project > Network > Network Topology to view the resulting network topology.

Expanding the system

Prerequisites to expanding the HC 380 system

When adding an expansion system, you can use the OneView InstantOn utility to add one or more nodes at a time, up to a 16-node configuration. Before you begin the expansion, ensure that the following prerequisites are complete:
• You have completed the preinstallation worksheet.
• The 1Gb and 10Gb switches are configured for IPv6.
• The switches and additional appliance nodes have been racked and cabled correctly.
• If you use the VDI configuration and you purchased an expansion node with NVIDIA Tesla M60 GPUs, ensure that you have licensed and installed the NVIDIA Tesla M60 GPUs. For more information, see "Licensing and installing NVIDIA Tesla M60 GPUs."
• If you purchased expansion nodes with newer generation processors, you must enable VMware Enhanced vMotion Compatibility (EVC) support before performing the expansion. For more information, see "Enabling VMware Enhanced vMotion Compatibility support."
For more information, see "Configuring the system."

Enabling VMware Enhanced vMotion Compatibility support

If you purchased expansion nodes with newer generation processors, you must enable VMware Enhanced vMotion Compatibility (EVC) support before performing the expansion. For example, if your current cluster configuration uses Intel® "Haswell" E5-2600 v3 processors, you must enable EVC before adding nodes that contain Intel® Xeon® E5-2600 v4 processors.

NOTE: The HC 380 system does not support expanding the cluster by adding nodes based on processors older than what is in the current configuration. For example, if the cluster is based on Intel® "Broadwell" processor nodes, the cluster may not be expanded by adding Intel® "Haswell" processor nodes.

Procedure 1. To enable EVC mode, launch VMware vSphere Web Client > Home > Hosts and Clusters. 2. Right-click on the cluster and select Settings > Manage > VMware EVC. 3. In the Change EVC Mode dialog, select Enable EVC for Intel Hosts and select Intel® "Haswell" Generation from the VMware EVC Mode drop-down list. 4. Click OK. 5. To verify that EVC mode is enabled, click the cluster and select the Manage tab. 6. Under the Configuration heading, select VMware EVC. If properly enabled, the message VMware EVC is Enabled appears with some additional details.
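EVC can also be enabled from PowerCLI. The sketch below is an alternative to the web client steps, not a replacement for them; the vCenter address and the cluster name HC-CLUS are placeholders, and intel-haswell is assumed to be the baseline your cluster needs.

# Enable Intel "Haswell" EVC mode on the cluster (placeholder names).
Connect-VIServer -Server 192.168.42.50
Set-Cluster -Cluster (Get-Cluster -Name 'HC-CLUS') -EVCMode 'intel-haswell' -Confirm:$false
(Get-Cluster -Name 'HC-CLUS').EVCMode   # verify the mode is now reported
Disconnect-VIServer -Confirm:$false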

Expanding the HC 380 using OneView InstantOn

IMPORTANT: If you installed CloudSystem, do not use these steps to expand your system. Use the steps in "Expanding CloudSystem."

• Adding an expansion unit can impact the performance of the HC 380 nodes. Hewlett Packard Enterprise suggests that you perform the expansion during a time when the system is not under heavy processing load.
• Verify that your management VM contains the folder C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment. Contact Hewlett Packard Enterprise Support if it does not.
• While OneView InstantOn is running, do not run anything else on the system, including Windows Update and proxy configuration.
• Do not run OneView InstantOn while performing an HPE LeftHand OS upgrade.
• Tool tips and error information are available when you hover in a text box on a screen in OneView InstantOn. The information might take a few seconds to display.
• The OneView InstantOn version shown in the figures in this document may be different from what is installed on your system. The content of the screens is the same.
• To navigate through OneView InstantOn, you can click Next, or click a location in the left navigation pane. The information that you enter on a screen is automatically saved, and you can go back to that screen and change or add information.
• If the system seems unresponsive, do not attempt to restart it. If processing is occurring, allow it to complete.

Procedure 1. Access the HC 380 management VM on your existing cluster and click the OneView InstantOn icon. 2. On the Introduction screen, accept the End User License Agreements, and then click Next. 3. On the vCenter screen, verify that the availability is green, and click Next.

If vCenter is not available, a possible issue may be that the license is not applied. Address any issue before continuing.
4. On the Health screen, verify that the expansion nodes that you are expecting appear in the list of available appliances. If an expansion node does not appear in the list, ensure that it is powered up and has completed the boot process. Ensure that the health is green for all system nodes. If the health is red, possible causes are incorrect cabling or nodes not being fully seated in the rack. Address any issues before continuing. Multiple appliances can be added to a single cluster simultaneously, up to a total of 16 appliances in the cluster. For instance, if you already have three systems in your existing cluster, you can expand by adding a further 13 systems.
5. Select the systems you want to add to your existing cluster and then click Next.

6. On the IP Assignments screen, enter the appropriate information that matches your existing cluster. For more information, see "Preinstallation worksheets." If you are expanding your system by adding one node or multiple nodes, you need only enter the starting IP address. OneView InstantOn automatically assigns the remaining, contiguous IP addresses.

NOTE: • If you manually configured a VLAN ID for the ESXi management network, you must remove it while completing the system expansion. You can add it again when the expansion is complete. • IP addresses for the expansion system or expansion nodes must be on the same subnet as the existing system. • OneView InstantOn automatically populates the Subnet and Gateway fields using data from the initial system configuration.

7. Click Next. On the Credentials screen, enter your existing StoreVirtual credentials, and then click Next.
8. On the Review Configuration screen, ensure the information that you entered is correct. To make changes, use links in the left navigation pane to visit the previous screens. After you check the settings, click Deploy. If you are deploying multiple systems, the time to complete deployment increases.

IMPORTANT: Do not close OneView InstantOn during the deployment process. If the deployment does not complete, see "Troubleshooting".

After the deployment is complete, the Next Steps screen is displayed.
9. Complete the Next Steps screen. The user interface configuration begins.
10. Respond to the prompt for the IP address and the user name and password for the HC 380 Management UI.
11. Provide the iLO IP address for each new node by using the HC 380 Management User Interface. In the interface, select Settings in the navigation pane. Under Nodes, click the hyperlinked word Setup in the message This HC380 has N node that need to be setup. Provide the iLO IP address for each node and click Submit.
12. Before you use the storage of the expansion node or system, refresh the HPE Storage data cache. The refresh ensures that OneView for VMware vCenter has information about the storage before you use it.
   a) Open the vCenter web client.
   b) Select the cluster that was created during system deployment.
   c) Select Manage > HP Management.
   d) Next to the Actions menu, double-click the Refresh icon. Click Yes when prompted to refresh data now.

   e) Verify that the refresh completed by hovering over the Refresh icon to display the last time the HPE Storage data was updated. If prompted to refresh data, select No.
   f) Refresh the vCenter server if needed so that the new node appears.

NOTE: If you need the licensing links again, restart OneView InstantOn, and navigate to the Next Steps screen using the navigation pane.

Expanding CloudSystem

Compute node expansion

When CloudSystem is installed on an HC 380, the default networking is adjusted to suit the CloudSystem requirements. For compute nodes, the management network, or Data Center Management (DCM), which uses portgroup ESXcomp on the vSwitch1 standard switch in vCenter, is assigned a VLAN ID and migrated to a vCenter distributed switch CS9_Compute_DVS and portgroup dvESXcomp (management hosts are migrated to CS9_Mgmt_DVS and portgroup dvESXmgmt). For management nodes, the DCM network, which uses portgroup ESXmgmt on the vSwitch1 standard switch in vCenter, is assigned a VLAN ID and migrated to a vCenter distributed switch CS9_Mgmt_DVS and portgroup dvESXmgmt.

The OneView InstantOn expansion process currently does not provide an automated way to add a compute node. Manual steps are required to configure a new node into the compute host cluster. There are four key steps for a management node, and five for a compute node.
1. Pre-expansion preparation of the network configuration for the new node.
2. Configure the new node in the CMC.
3. Configure the volume in vCenter.
4. Configure the new node on the virtual distributed switch of vCenter.
5. (Compute only) Activate the compute host cluster.

Pre-expansion preparation

Procedure

1. Prepare the IP address, netmask, gateway, and VLANs of the new node for the ESXmgmt, vMotion, HostStorage2, and HostStorage3 networks. The values are in the same range as the existing nodes.
2. Open the port for vmnic0 of the new node in the Top of Rack switch to access the DHCP address through the vSphere client on the laptop or desktop connected to the node.
3. Launch the vSphere client and connect to the DHCP IP address of the new node as the root user.
   a) Power off the HPE-HC-management VM.
   b) Delete the VM from the disk. The VSA VM is the only VM left and is powered off.

   c) Edit the networks of the ESXi host:
      I. Add the Storage VLAN to VSAeth0.
      II. Add the DCM network VLAN and configure the IP, Subnet, and Gateway of ESXmgmt.
      III. Add the vMotion VLAN and configure the IP, Subnet, and Gateway of vMotion.
      IV. Add the Storage VLAN and configure the IP, Subnet, and Gateway of HostStorage2.
      V. Add the Storage VLAN and configure the IP, Subnet, and Gateway of HostStorage3.
   d) Remove the dash "-" at the end of the VSA VM name.
   e) Verify that you can ping the ESXmgmt IP.
   f) Exit the vSphere client.
4. Launch the vSphere client and connect to the management VM IP, for example, 192.168.42.100.
   a) Add the new ESXi node to the CloudSystem compute cluster using the ESXmgmt IP.
   b) Power on the new VSA VM.
   c) Once it is fully booted, launch the console of the VM.
      I. Type start at the login prompt.
      II. Go to Network TCP/IP Settings > eth0 and provide the hostname, subnet mask (SN), and gateway (GW). Use the Tab key to navigate.
      III. Log out of the session and close the console.
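Parts of step 3c can also be scripted with PowerCLI once the new host is reachable. This is a hedged sketch only: the host address, credentials, VLAN IDs, vmk number, and IP values are placeholders, and the portgroup names are assumed to match the defaults described above.

# Connect directly to the new ESXi host (address and credentials are placeholders).
Connect-VIServer -Server 192.168.42.150 -User root -Password 'password'
# Tag the DCM, vMotion, and storage VLANs on the standard portgroups (placeholder IDs).
Get-VirtualPortGroup -Name 'ESXmgmt'      | Set-VirtualPortGroup -VLanId 100
Get-VirtualPortGroup -Name 'vMotion'      | Set-VirtualPortGroup -VLanId 101
Get-VirtualPortGroup -Name 'HostStorage2' | Set-VirtualPortGroup -VLanId 102
Get-VirtualPortGroup -Name 'HostStorage3' | Set-VirtualPortGroup -VLanId 102
# Set the ESXmgmt VMkernel address (the vmk number is environment specific).
Get-VMHostNetworkAdapter -VMKernel -Name vmk1 |
    Set-VMHostNetworkAdapter -IP 192.168.42.151 -SubnetMask 255.255.255.0 -Confirm:$false
Disconnect-VIServer -Confirm:$false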

Configuring the new node in the CMC

Procedure

1. Launch the CMC and log in as the admin user.
   a) On the toolbar, select Find > Find Systems… > Add.
   b) Enter the VSA IP address you configured in step 4c of the pre-expansion preparation. The new node shows up under Available Systems.
2. Click Log in to view.
3. Right-click the new node and select Add to Existing Management Group….
4. Confirm the Group Name and click Add. It may take a minute to finish.
5. After the new node shows up in the management group, right-click it and select Add to Existing Cluster….
6. Confirm the Cluster name and click OK. Click OK again on the WARNING pop-up message. The new node shows up in the Storage Systems group.
7. Go back to vCenter and get the iSCSI name of the new node using either of the following steps:
   • Select the server, go to Configuration > Hardware > Storage Adapters, and select the iSCSI Software Adapter. Locate the iSCSI name under Details.
   • Click the Properties… hyperlink under Details, then click Configure… to copy the iSCSI Name.
8. Go back to the CMC and add the server.
9. Right-click Servers and select New Server….
10. In the New Server dialog, perform the following steps:
   a) Use the ESXmgmt IP of the new node for the Name and CHAP Name.
   b) Keep the two iSCSI security checkboxes enabled (default).
   c) For the CHAP passwords, ensure that the Target and Initiator passwords are 12 characters long and are unique.
11. Present the volumes to the newly added server by right-clicking the newly added server and selecting Assign and Unassign Volumes and Snapshots….
12. Assign the volumes by performing the following steps:

   a) For the new compute host, select only the VSA management volume.
   b) For the new management host, select both the CS9 and VSA management volumes.
13. Get the iSCSI name of the volume to be configured on vCenter.
   a) Select the volume.
   b) On the Details tab under Target Information, copy the value of the iSCSI Name field. This value is the iSCSI server IQN you will use later in vCenter.

Configuring the volume in vCenter

Procedure

1. Configure the presented volume on vCenter.
   a) Select the new node and go to Configuration > Hardware > Storage Adapters. Select the iSCSI Software Adapter, then Details > Properties….
   b) On the General tab, select CHAP….
      I. Select Use CHAP for both options.
      II. Use the same name configured in the CMC (step 10a of the CMC setup section).
      III. Use the same Target and Initiator passwords.
2. On the Dynamic Discovery tab:
   a) Select Add….
   b) Provide the iSCSI Server IP. You can obtain this IP in two ways:
      • From the other nodes configured on vCenter.
      • From the CMC: navigate to the VSA cluster, then go to the iSCSI tab. The iSCSI Server IP is the value of the Virtual IP.
   c) Click CHAP… and verify that both Inherit from Parent checkboxes are selected.
3. The Static Discovery tab updates. If it does not update, perform these steps:
   a) Select Add….
   b) Provide the iSCSI Server IP as in the Dynamic Discovery step.
   c) For the iSCSI Target Name, provide the IQN you obtained in step 13 of the CMC setup section.
   d) Click CHAP… and verify that both Inherit from Parent checkboxes are selected.
   e) After closing the Properties dialog, perform a Rescan. The iSCSI device is displayed in the Details dialog.
   f) Go to the storage and verify that the datastore also shows up on the newly added node.
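The CHAP and discovery settings above can also be applied with PowerCLI. A hedged sketch follows; the host address, CHAP name and secrets, and the VSA virtual IP are placeholders, and the software adapter lookup assumes the standard software iSCSI adapter model string.

# Configure CHAP and dynamic discovery on the new host's software iSCSI adapter.
$vmhost = Get-VMHost -Name 192.168.42.151                 # placeholder ESXmgmt address
$hba = Get-VMHostHba -VMHost $vmhost -Type IScsi |
    Where-Object { $_.Model -match 'Software' }
Set-VMHostHba -IScsiHba $hba -ChapType Required `
    -ChapName '192.168.42.151' -ChapPassword 'TargetSecret12' `
    -MutualChapEnabled $true -MutualChapName '192.168.42.151' -MutualChapPassword 'InitiatorSec12'
New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.42.60 -Type Send   # VSA virtual IP
Get-VMHostStorage -VMHost $vmhost -RescanAllHba -RescanVmfs | Out-Null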

Configure the virtual distributed switches of vCenter

Procedure 1. Use the following steps to configure the new Compute node to the virtual distributed switches: a) Using the vSphere Client, go to Inventory > Networking. b) Right click CS9_Compute_DVS and click Add Host…. I. Select the new node. II. Select physical adapters vmnic6 and vmnic7. III. Set the destination port group of vmk3 to dvESXcomp. IV. Click Next on the Virtual Machine Networking. V. Click Finish. c) Right click Comp-CloudData-Trunk and click Add Host….

      I. Select the new node.
      II. Select the physical adapters vmnic8 and vmnic9.
      III. Click Next on Network Connectivity.
      IV. Click Next on Virtual Machine Networking.
      V. Click Finish.
2. Use the following steps to configure the new management node on the virtual distributed switches:
   a) Using the vSphere Client, go to Inventory > Networking.
   b) Right-click CS9_Mgmt_DVS and click Add Host….
      I. Select the new node.
      II. Select physical adapters vmnic6 and vmnic7.
      III. Set the destination port group of vmk3 to dvESXmgmt.
      IV. Click Next on Virtual Machine Networking.
      V. Click Finish.
   c) Right-click the hpcs-dataXXXXXX distributed vSwitch and click Add Host….
      I. Select the new node.
      II. Select the physical adapters vmnic8 and vmnic9.
      III. Click Next on Network Connectivity.
      IV. Click Next on Virtual Machine Networking.
      V. Click Finish.
   d) Right-click the hpcs-storageXXXXXX distributed vSwitch and click Add Host….
      I. Select the new node.
      II. Click Yes on the Warning pop-up (since no physical adapter was selected).
      III. Click Next on Network Connectivity.
      IV. Click Next on Virtual Machine Networking.
      V. Click Finish.
   e) Enable HA and DRS on the management cluster.
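The same distributed switch changes can be scripted with the PowerCLI VDS cmdlets. A hedged compute-node sketch follows; the host address is a placeholder, and it assumes the switch, portgroup, and NIC names match the defaults used in this configuration.

# Add the new compute host, its uplinks, and vmk3 to the CloudSystem distributed switches.
$vmhost = Get-VMHost -Name 192.168.42.151                 # placeholder ESXmgmt address
Add-VDSwitchVMHost -VDSwitch (Get-VDSwitch 'CS9_Compute_DVS') -VMHost $vmhost
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch (Get-VDSwitch 'CS9_Compute_DVS') `
    -VMHostPhysicalNic (Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic6,vmnic7) `
    -VMHostVirtualNic (Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name vmk3) `
    -VirtualNicPortgroup (Get-VDPortgroup -Name 'dvESXcomp') -Confirm:$false
Add-VDSwitchVMHost -VDSwitch (Get-VDSwitch 'Comp-CloudData-Trunk') -VMHost $vmhost
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch (Get-VDSwitch 'Comp-CloudData-Trunk') `
    -VMHostPhysicalNic (Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic8,vmnic9) `
    -Confirm:$false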

Activate the compute host cluster (compute node only)

When activating the compute cluster in CS9, there are two scenarios to consider.

Scenario 1: The new host is on the same CS9 compute cluster as the existing hosts. In this scenario, the new host needs to join the existing cluster.

Procedure

1. Terminate or migrate all deployed instances first.
2. From the CloudSystem Operations Console, deactivate the compute cluster.
3. Activate the same compute cluster with the new host included.
4. Redeploy the terminated instances or migrate them back to the newly activated compute cluster.

Scenario 2: The new host is on a different CS9 compute cluster. The steps performed earlier place the new compute host in the same cluster as the existing hosts, so do the following:
1. Create a new compute cluster.
2. Configure the CS9 virtual distributed switches on the new cluster.
3. Migrate the new compute host into the new cluster.
4. From the CloudSystem Operations Console, verify that the new cluster is discovered.
5. Activate the new compute cluster with the new host.

Troubleshooting

Troubleshooting OneView InstantOn

Certificate error when launching vCenter web client Symptom When launching the vCenter web client on the Next Steps screen of the OneView InstantOn wizard, a certificate error is displayed when you attempt to log in. Action You can still log in and use the vCenter web client. The certificate error will not affect use of the system. To resolve the certificate error:

Procedure 1. After logging in, go to the Home tab of the vCenter web client and select HP Management Administration. 2. On the OneView for vCenter window, select Install a signed certificate. 3. Follow the steps on the Certificate Management window that displays. You can choose to install a self-signed certificate or a certificate signed by a trusted authority.

HC 380 nodes are not discovered Symptom OneView InstantOn is not able to discover the HC 380 nodes. Cause IPv6 may not be enabled for the 1Gb or 10Gb switches. Action Enable IPv6 for the 1Gb or 10Gb switches. For more information, see "IPv6 enablement at switch level."

OneView InstantOn hangs during deployment Symptom While configuring the system using OneView InstantOn, the deployment process hangs or does not complete. Action Perform one or more of the following actions. • Verify that your firewall is configured correctly. • Ensure that the VLAN tags are configured correctly. • Ping the affected IPv4 addresses to investigate cause. • Verify that you are not attempting to deploy the HC 380 cluster in a vCenter datacenter that already contains an HC 380 cluster. • Ensure that the switches are configured correctly. • If the connection to a node does not open, troubleshoot that specific node.

• If there are IP address conflicts, ensure that the IP addresses that are validated on the IP Assignment screen are not in use.
• If you are unable to resolve the issue or you discover another issue, contact Hewlett Packard Enterprise Support.

vCenter license status on Health screen is red

Symptom OneView InstantOn does not proceed with deployment because the vCenter license status on the Health screen is red. Action Perform the following steps:

Procedure 1. Apply a valid vCenter and vSphere license in vCenter. 2. Close and restart vCenter. OneView InstantOn detects the new license and allows deployment.

OneView InstantOn progress indicator appears to hang Symptom During installation, the progress indicator (countdown timer) in OneView InstantOn might appear to hang at various times. Action No action is required. Do not cancel or stop the deployment. The system configuration is continuing although the indicator might not update. The countdown timer will resume shortly.

"Invalid username and password" error appears when you specify a local vCenter Symptom When you select a local vCenter, the default credentials for VMware vCenter are automatically populated in the Username and Password fields. At times, the following error message appears: Invalid Username and Password Action This issue occurs intermittently. Perform one or both of the following actions: • Reenter your username and password. • If the issue persists, reboot the Management VM.

Application performance on management VM might decrease Symptom Application performance on the Management VM might decrease when OneView InstantOn is performing a system health check. Action Wait for the OneView InstantOn health check to complete.

"The page cannot be displayed" error message appears Symptom When attempting to access the Online Help or the Software Depot link under Upgrades, the following error message appears: The page cannot be displayed. Action Configure the proxy server for Internet Explorer on the Management VM.

OneView InstantOn hangs with error message "0:02 Adding SAN to vCenter" Symptom OneView InstantOn hangs during the last two seconds of the deployment process with the following message: 0:02 Adding SAN to vCenter. Cause This issue usually occurs when you specify a remote vCenter installation. One possible cause is that the default vCenter user, administrator@vsphere.local, does not have full access to OneView for vCenter and to the Storage Administration Portal. If it does not have access to both applications, the error message appears. Action Ensure that the user administrator@vsphere.local has full access to OneView for vCenter and the Storage Administration Portal.

Troubleshooting CloudSystem

Connection error to vCenter server Symptom The following error message appears during installation: Could not connect to vCenter server at …Initialization failed…cannot proceed. Action Perform one or both of the following actions: • Verify that the server and username values are correct. • Verify that the vCenter server service is running.

Issue tagging DCM VLAN portgroups Symptom Unable to tag the DCM network VLAN to the ESXmgmt portgroups. Cause The installation utility is trying to assign the DCM network VLAN to the ESXmgmt portgroup on the standard switch on each of the ESXi hosts. If the DCM VLAN is not configured through the TORs, then loss of connectivity may keep this task from completing properly. Action

Procedure 1. Obtain the DCM VLAN.

"The page cannot be displayed" error message appears 89 This information was obtained for the CloudSystem preinstallation worksheet. 2. Make sure that the DCM VLAN is enabled in the TOR switches. 3. Execute the following steps on the OneView InstantOn management VM. a) If not already done, configure a laptop or workstation to access the management VM. b) Migrate the management VM to node 1 if it is not already there. This migration should have been performed automatically by OneView InstantOn after the Next Steps page appears. c) Migrate the management VM disk away from the VSA SAN storage to a local datastore (for example, datastore1). If this migration is not done, connectivity problems might occur and the management VM will stop functioning properly. d) Connect to each host (except the management VM) directly with vSphere client and set ESXmgmt portgroup to have VLAN x: esxcli network vswitch standard portgroup set -p ESXmgmt --vlan-id e) On the management VM (include both commands on the same line, separated by ';') esxcli network vswitch standard portgroup set -p ESXmgmt --vlan-id ; esxcli network vswitch standard portgroup set -p mgmtVMNetwork --vlan-id If you lose connection to the networks, you may have to use iLO to restore connectivity.

Unable to migrate host to management cluster Symptom The installation utility created the CS9Mgmt cluster in vCenter but is unable to migrate one of the ESXi nodes from the default cluster to the new management cluster. Cause The installation utility is trying to create the CS9Mgmt cluster in vCenter and then migrate one of the ESXi nodes from the default OneView InstantOn cluster to this new CloudSystem management cluster. Because the installation utility runs on the management VM, that VM cannot be powered off, so its host cannot be put into maintenance mode to move it to the management cluster. Action If the utility encounters problems while attempting to move the host to the management cluster, you must perform the following steps:

Procedure

1. Create a cluster (CS9Compute).
2. Disable HA on the original cluster.
3. For all hosts (except the ESXi host that runs the management VM), perform the following steps:
   a) Shut down the guest OS for the VSA VM and wait about a minute. You may have to use the StoreVirtual CLI commands (for example, cliq getVolumeInfo) to assess the status of volumes before this step can complete.
   b) Move the host to maintenance mode.
   c) Migrate the host to the new CS9Compute cluster.
   d) Move the host out of maintenance mode.
   e) Power on the VSA VM.
4. Enable HA on CS9Compute if there is more than one node.
5. Enable DRS on the CS9Compute cluster.

Both the management and compute clusters must have DRS enabled. If the compute cluster no longer satisfies the minimum requirements for HA, HA is disabled.
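Steps 3a through 3e of the procedure above can also be performed per host with PowerCLI. The sketch below assumes the VSA VM names match the "SVVSA-*" pattern mentioned later in this chapter; the host address and cluster name are placeholders.

# Move one host (placeholder address) into the CS9Compute cluster.
$vmhost = Get-VMHost -Name 192.168.42.151
Get-VM -Name 'SVVSA-*' -Location $vmhost | Shutdown-VMGuest -Confirm:$false   # step 3a
Start-Sleep -Seconds 60                                       # wait for the guest to power off
Set-VMHost -VMHost $vmhost -State Maintenance | Out-Null      # step 3b
Move-VMHost -VMHost $vmhost -Destination (Get-Cluster 'CS9Compute')   # step 3c
Set-VMHost -VMHost $vmhost -State Connected | Out-Null        # step 3d
Get-VM -Name 'SVVSA-*' -Location $vmhost | Start-VM           # step 3e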

Trouble setting up storage Symptom Connectivity issues may occur when the installation utility attempts to allocate disk space from the VSA cluster and make it shared between all ESXi hosts. Explanation The Installer is trying to allocate as close to 8TB of disk space as possible from the VSA cluster and make it shared between all ESXi hosts. If the credentials are no longer viable, there may be connectivity issues. After creating a volume in VSA, then an attempt is made to register the volume with the ESXi hosts. The volume is assumed to be created with RAID-10. Action Use OneView for vCenter to allocate a datastore, or use the following steps to create it manually:

Procedure 1. Launch the StoreVirtual Centralized Management Console. 2. Log in to the VSA Cluster (as specified in OneView InstantOn, default is HP-HyperConv-XXX). Credentials were supplied during the OneView InstantOn setup. 3. Create a CS9MgmtDatastore volume with as close to 8TB of space (RAID-10) as possible. 4. Share the volume with all servers. 5. Launch vSphere and connect to the vCenter server. 6. Pick the CS9 management host and click Rescan All, then Add Storage (CS9MgmtDatastore). 7. On the remaining hosts in compute clusters, either click Rescan All, or wait for the datastore to be visible. If the factory datastores [remote-install-location] still exist, remove them from each ESXi host. If the factory datastores exist when CloudSystem tries to activate compute nodes, the activation fails.
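Once the CS9MgmtDatastore volume has been shared from the CMC, the rescan and datastore creation in steps 5 through 7 can also be done from PowerCLI. A hedged sketch; the host name, cluster name, and the device canonical name (naa.xxxx) are placeholders you read from your own environment.

# Rescan the iSCSI adapters and create the shared VMFS datastore.
$mgmtHost = Get-VMHost -Name 192.168.42.101                  # placeholder CS9 management host
Get-VMHostStorage -VMHost $mgmtHost -RescanAllHba -RescanVmfs | Out-Null
# Replace naa.xxxx with the canonical name of the new CS9MgmtDatastore LUN.
New-Datastore -VMHost $mgmtHost -Name 'CS9MgmtDatastore' -Path 'naa.xxxx' -Vmfs
# Rescan the remaining hosts so that the datastore becomes visible to them.
Get-Cluster 'CS9Compute' | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null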

Distributed switches were not created as expected Symptom Distributed switches were not created. Explanation On the CS9 management hosts, the CS9_Mgmt_DVS distributed switch is created with two uplink ports and attached to vmnic6 and vmnic7. Two portgroups, dvESXmgmt and dvmgmtVMNetwork, are created on the switch with the DCM VLAN. The ESXmgmt portgroup on the standard switch vSwitch1 is migrated to the dvESXmgmt portgroup. On the compute hosts, the CS9_Compute_DVS switch is created with the dvESXcomp portgroup with two uplinks and attached to vmnic6 or vmnic7. The ESXmgmt portgroup on standard vSwitch1 is migrated to this dvESXcomp portgroup. The Comp-CloudData-Trunk distributed switch is created with two uplinks on vmnic8 or vmnic9. No portgroups are created on the Comp-CloudData-Trunk at this time. Action Migrate ESXmgmt on vSwitch0 to CS9_Mgmt_DVS using the following steps.

Procedure

1. Create CS9_Mgmt_DVS with the dvESXmgmt portgroup (VLAN <>).
2. Navigate to Inventory > Networking, right-click CS9_Mgmt_DVS, and select Manage Hosts.

3. Pick all hosts in the management cluster (for a two-node solution, there is a single management node).
4. Select vmnic6 or vmnic7 for these hosts.
5. Assign DV destinations for the ESXmgmt standard portgroup to the corresponding dvESXmgmt distributed port group on each respective host. Do not change the other port groups.

Distributed Compute Switches

Migrate ESXmgmt on vSwitch0 to CS9_Compute_DVS using the following steps.
1. Create CS9_Compute_DVS with the dvESXcomp portgroup (VLAN <>).
2. Right-click CS9_Compute_DVS and select Manage Hosts.
3. Pick all hosts in the compute cluster and verify that vmnic6 or vmnic7 is selected for these hosts.
4. Assign DV destinations for the ESXmgmt standard portgroup to the corresponding dvESXcomp distributed portgroup on each host.
5. Create Comp-CloudData-Trunk on vmnic8 or vmnic9 for all compute hosts (no portgroups).
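The distributed switches and portgroups themselves can be created with PowerCLI as well. A hedged sketch of the management-side objects; the datacenter name and VLAN ID are placeholders, and the host, uplink, and vmkernel migration is then done as described above.

# Create the management distributed switch and its DCM portgroups (placeholder names).
$dc = Get-Datacenter -Name 'HC380-DC'
New-VDSwitch -Name 'CS9_Mgmt_DVS' -Location $dc -NumUplinkPorts 2
New-VDPortgroup -VDSwitch (Get-VDSwitch 'CS9_Mgmt_DVS') -Name 'dvESXmgmt' -VlanId 100
New-VDPortgroup -VDSwitch (Get-VDSwitch 'CS9_Mgmt_DVS') -Name 'dvmgmtVMNetwork' -VlanId 100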

Foundation or Enterprise zip files on the datastore supplied by the factory image are not found Symptom When you attempt to upload OVA images, the Foundation or Enterprise zip files on the datastore supplied by the factory image are not found. Action

Procedure 1. On the ESXi server that hosts the management VM, browse to the datastore1 datastore. 2. Locate the cloud folder and download the Enterprise and Foundation zip files to a temporary location on the management appliance. 3. Unzip the Enterprise and Foundation zip files. 4. Use vSphere to upload the cs*.ova files (cs-mgmt, cs-cloud, cs-sdn, cs-monitoring, cs-update, cs-enterprise, cs-ovsvapp).

Foundation or Enterprise zip files cannot be unzipped Symptom When you attempt to upload OVA images, the Foundation or Enterprise zip files cannot be unzipped. Action

Procedure 1. On the ESXi server that hosts the management VM, browse to the datastore1 datastore. 2. Locate the cloud folder and download the Enterprise and Foundation zip files to a temporary location on the management appliance. 3. Unzip the Enterprise and Foundation zip files. 4. Use vSphere to upload the cs*.ova files (cs-mgmt, cs-cloud, cs-sdn, cs-monitoring, cs-update, cs-enterprise, cs-ovsvapp).

Storage is not available for OVA images Symptom When you attempt to upload OVA images, storage is not available.

Action

Procedure 1. On the ESXi server that hosts the management VM, browse to the datastore1 datastore. 2. Locate the cloud folder and download the Enterprise and Foundation zip files to a temporary location on the management appliance. 3. Unzip the Enterprise and Foundation zip files. 4. Use vSphere to upload the cs*.ova files (cs-mgmt, cs-cloud, cs-sdn, cs-monitoring, cs-update, cs-enterprise, cs-ovsvapp).

Issues creating first management appliance Symptom The installation utility is unsuccessful at creating the first management appliance. Explanation This issue can occur if the installation utility is not able to create the deployer.conf file that is used by csstart.exe. Action

Procedure 1. Navigate to the %CS9INSTALL%\Foundation folder. 2. Locate the csstart.exe file. 3. Edit the deployer.conf file to match the configuration. 4. Launch csstart.exe start --eula-accepted. After about 10 minutes, the first management appliance is available. 5. Reset the EULA so that on first login you are prompted to accept the license.

CloudSystem was not deployed successfully Symptom CloudSystem was not deployed successfully. Action See the HPE Helion CloudSystem 9.0 Troubleshooting Guide on the Hewlett Packard Enterprise website.

Could not update the hpcs-data* distributed switch Symptom The hpcs-data* distributed switch could not be updated during the installation. Cause After CloudSystem is deployed, the installer tries to increase the number of uplinks to two on the hpcs-data* distributed switch for each ESXi compute host. Action

Procedure 1. Make sure that hpcs-data* has two uplinks assigned for each management host. 2. Assign vmnic8 or vmnic9 for each management host. 3. Set the portgroup to have a route based on IP hash and both uplinks in active mode.

Could not register vCenter with CloudSystem Symptom The installation process could not register vCenter with CloudSystem. Cause The installer is trying to register vCenter with CloudSystem to allow compute nodes to be activated. Action Perform one or both of the following actions:
• Verify that vCenter is accessible.
• Use the following steps to register vCenter with CloudSystem.
1. Launch the CloudSystem 9 Operations Console.
2. Select Menu > System > Integrated Tools.
3. Select Register vCenter.
4. Provide the smgmt01 values. The default values are listed, but your values may vary:
   ◦ vCenter Name: smgmt01
   ◦ vCenter Details: 172.28.10.10
   ◦ vCenter Administrator: administrator@vsphere.local
   ◦ vCenter Password: Password!234
5. Click Register. The vCenter count increases, the CS9 busy graphic ends, and the notification count increases.
6. Expand the notifications (the bell icon near the top right of the browser window) to view a notification that indicates that the vCenter was registered successfully.

Could not activate compute nodes Symptom Could not activate compute nodes. Explanation The installer is trying to add any hosts in the compute cluster as activated nodes in CloudSystem. Potential problems could occur if vCenter is not registered or it fails to find an OVA image. Action Perform one or both of the following actions: • Follow the steps below to activate the compute nodes. • Refer to HPE Helion CloudSystem 9.0 Administrator Guide located on the Hewlett Packard Enterprise website.

Procedure 1. Launch vSphere, and verify the following:

• that each compute host is alive and not in maintenance mode.
• that the Comp-CloudData-Trunk distributed switch has been created and uplinks assigned for each compute host.
2. In the CS9 Operations Console, select Menu > Compute > Compute Nodes.
3. For each compute cluster, perform the following:
   a) Select a cluster (for example, sHosts01) by clicking the circled checkmark next to the cluster name.
   b) Click the Activate button.
   c) Select the Distributed vSwitch Name option and supply the value Comp-CloudData-Trunk.
   d) Select the Queue for Activation button.
   e) Select Complete Activations.
   f) Click Confirm Activations. An ovsvapp VM instance is created for each compute host. This step can take several minutes and depends on the number of compute nodes.
If the activation fails, use the following troubleshooting options:
• Sometimes the activation has completed but may show a RED status. Try refreshing the browser and observe whether it changes to GREEN.
• Verify that enough DCM IP addresses are allocated.
• If any of the compute hosts are in maintenance mode or the management appliance (ma1) cannot maintain a network connection to a compute host, the ovsvapp deployment will fail.
• Use Menu > General > Logging Dashboard to help diagnose any problems.
• View additional information in the white paper entitled "HPE Helion CloudSystem 9.0 networking configuration and troubleshooting" found on the Hewlett Packard Enterprise website.

Could not create Tenant VLAN Segment Ranges Symptom Could not create tenant VLAN segment ranges. Explanation The installer is attempting to create segment ranges in CloudSystem for VLANs listed in the Tenant VLAN section of the user interface. Potential problems could occur if the OpenStack services are not all running. Action Running deploy a second time from the UI will usually fix most problems. For more information, see "Tenant and provider networking."

Could not create Tenant and Provider VLAN Networks Symptom Could not create Tenant and Provider VLAN Networks. Explanation The installer is creating the Tenant and Provider VLAN networks specified in the UI. Potential problems could be that Tenant segment ranges were not created or all OpenStack services are not running. Other setup problems could occur if the VLAN is not valid. Action Re-run the CloudSystem installation utility or refer to "Tenant and provider networking."

Could not add Subnets to Networks Symptom

Could not add subnets to networks. Explanation The installer is attempting to create a subnet for a given VLAN from the CIDR specified in the user interface. Potential causes could be that the VLAN does not exist or the CIDR overlaps with another network. Action For more information, see "Tenant and provider networking."

Could not create router Symptom Could not create router. Explanation The installer is attempting to create a router in CloudSystem based on any Tenant, Provider, and External networks specified in the user interface. It will only run once, so any new networks or changes will have to be manually updated. Action For more information, see "Create router."

Could not update to 9.02 Symptom Could not update to CloudSystem 9.02. Action Perform one or both of the following actions.
• Run the upgrade script again (see "Upgrading CloudSystem").
• Perform a manual update by referring to the HPE Helion CloudSystem 9.0 Installation Guide located on the Hewlett Packard Enterprise website.

Could not update passwords Symptom Could not update the administrator password for CloudSystem. Action Hewlett Packard Enterprise recommends that you back up your system and take a snapshot of CloudSystem before changing the password. If the process fails, see the HPE Helion CloudSystem 9.0 Administrator Guide for the manual update process or to restore to a previous snapshot. The guide is located on the Hewlett Packard Enterprise website.

VSA volumes did not stabilize in 10 minutes…cannot continue Symptom VSA VM cannot be powered off. Explanation Before creating the CS9 Management cluster and moving the management VM to this cluster, the utility powers off the VSA VM on the management node. If the VSA volumes are pending activity, the VM cannot be powered off until the VSA storage reaches a stable state.

Action For more information, see "Unable to migrate host to management cluster."

VM did not power off Symptom The VSA VM could not be powered off on the management node. Explanation Before creating the CS9 Management cluster and moving the management VM to this cluster, the VSA VM must be powered off on the management node. If the VSA VM could not be powered off, then the VSA volume has some pending activity. Action The VM will have to be powered off manually. For more information, see "Unable to migrate host to management cluster."

Problem finding original vCenter cluster Symptom Installation utility is not able to locate original cluster name. Explanation When trying to create the CS9Compute cluster and migrate all future compute nodes to this cluster, if the installation utility cannot find the original cluster name as specified during the OneView InstantOn deployment, it cannot proceed. Action Ensure that the original cluster name (e.g. HC-CLUS) that was specified during the OneView InstantOn deployment is in place, including the ESXi hosts in that cluster.

Could not find the VSA VM on host Symptom The installation utility cannot locate the VSA VM on the specified host. Explanation When trying to migrate ESXi hosts to the new compute cluster, the utility must be able to shut down the VSA VM on each respective host. If the utility is not able to locate it (if the name does not match the “SVVSA-*” pattern, for instance), the system cannot shut it down and allow the respective host to enter maintenance mode prior to migrating the host to the new cluster. Action Ensure that the utility can locate the VSA VM on each respective host.

Deploy CloudSystem

Procedure 1. Ensure that the management appliance is active. 2. Ensure that you have downloaded the MySQL JDBC library file (libmysql-java_5.1.32-1_all.deb). 3. Deploy the .deb file on the management appliance. 4. Log in to the management appliance and launch csdeploy.

Appendix A: Network switch configuration

Hewlett Packard Enterprise switches

This section describes how to configure a 3-node HC 380 appliance with a pair of HPE 5900AF-48XG-4QSFP+ switches.

Network cabling

The following table shows an example of 10GbE networking connectivity to two HPE 5900AF-48XG-4QSFP+ switches configured with HPE Intelligent Resilient Framework (IRF).

Switch                        Port   Device                        Port           Comment
HPE 5900AF-48XG-4QSFP+ - 1    1      HC 380 Node 1                 FlexibleLOM1
HPE 5900AF-48XG-4QSFP+ - 2    1      HC 380 Node 1                 FlexibleLOM2
HPE 5900AF-48XG-4QSFP+ - 1    2      HC 380 Node 2                 FlexibleLOM1
HPE 5900AF-48XG-4QSFP+ - 2    2      HC 380 Node 2                 FlexibleLOM2
HPE 5900AF-48XG-4QSFP+ - 1    3      HC 380 Node 3                 FlexibleLOM1
HPE 5900AF-48XG-4QSFP+ - 2    3      HC 380 Node 3                 FlexibleLOM2
HPE 5900AF-48XG-4QSFP+ - 1    51     HPE 5900AF-48XG-4QSFP+ - 2    51             IRF Link
HPE 5900AF-48XG-4QSFP+ - 1    52     HPE 5900AF-48XG-4QSFP+ - 2    52             IRF Link

In addition to the 10GbE connectivity needs, the HPE Integrated Lights Out connection on each HC 380 node also needs to be connected and configured on the same management network. There are several ways to do this, for example, connecting the 1GbE connections to an existing 1GbE switch that is on the same Layer 2 management network (VLAN) as the 10GbE components. You could also insert an X120 1G SFP RJ45 Transceiver (JD089B) into the HPE 5900AF-48XG-4QSFP+ switch and use ports on the switches for the iLOs as well. The following table shows the 1GbE RJ45 connections required for a three-node 10GbE based HC 380 solution.

Device               Port
HC 380 Node 1 iLO
HC 380 Node 2 iLO
HC 380 Node 3 iLO

Configuring the switches

The following procedures describe an example of how to configure the two HPE 5900AF-48XG-4QSFP+ switches for use in an HC 380 general virtualization configuration. Although two 10GbE based switches are used in this example, the same steps can be leveraged for a 1GbE based HC 380 solution using an HPE 5900AF-48G-4XG-2QSFP+ switch, taking into account the additional 1GbE connections that are required. For more information about the connectivity requirements, see "Configuring the network switches." The example below assumes you already have a 1GbE infrastructure where the iLO from each HC 380 node is connected. This 1GbE infrastructure must be able to communicate with the HPE 5900AF-48XG-4QSFP+ switches referenced in this section. Hewlett Packard Enterprise requires that all connections from the HC 380 (iLO, 10GbE, or 1GbE) be connected to the same layer 2 and layer 3 networks.

Connecting to the serial console port

For each switch in the configuration, initial setup is done over the serial port. This section shows how to connect to the serial port. This task requires an RJ45-to-DB9 console cable (part number 5185-8627) to connect to the serial port, and a serial terminal or serial terminal emulator (for example, a laptop) with an available serial (DB9) port.

Procedure

1. Connect a serial terminal, or a laptop or server running a terminal emulator, to the serial port of the HPE 5900AF-48XG-4QSFP+ switch. The serial port is located on the back (power side) of the switch.
2. On the laptop or server, open a terminal emulation program (for example, Tera Term) and select the serial connection option with the following serial port configuration:
   Bits per second: 9600
   Data bits: 8
   Parity: None
   Stop bits: 1
   Flow control: None
   Emulation: VT100
3. Press OK to open the connection.

IRF configuration

Set up IRF on the two switches as described in the following sections.

HPE 5900AF-48XG-4QSFP+ Switch 1

To set up the initial configuration for the HPE 5900AF-48XG-4QSFP+ Switch 1 via the serial port, complete the following steps: 1. Configure the switch. On initial boot and connection to the serial or console port of the switch, the Comware setup should automatically start and attempt to enter Automatic configuration. Automatic configuration attempt: 1. Interface used: M-GigabitEthernet0/0/0. Enable DHCP client on M-GigabitEthernet0/0/0. Automatic configuration is running, press CTRL_C or CTRL_D to break. 2. Press Ctrl+C or Ctrl+D to stop the automatic configuration. Whenever the instructions call for network configuration in "system view" context, if at the prompt, issue the system-view command to get to the [HPE] prompt. Automatic configuration is aborted. Line aux0 is available. Press ENTER to get started. system-view

System View: return to User View with Ctrl+Z. 3. Configure the IRF ports. [HPE] interface range FortyGigE 1/0/51 to FortyGigE 1/0/52 [HPE-if-range] shutdown [HPE-if-range] quit

[HPE] irf-port 1/1 [HPE-irf-port1/1] port group interface FortyGigE 1/0/51 [HPE-irf-port1/1] port group interface FortyGigE 1/0/52 [HPE-irf-port1/1] quit

[HPE]save The current configuration will be written to the device. Are you sure? [Y/ N]:y Please input the file name(*.cfg)[flash:/startup.cfg] (To leave the existing filename unchanged, press the enter key):

HPE 5900AF-48XG-4QSFP+ Switch 2 To set up the initial configuration for the HPE 5900AF-48XG-4QSFP+ Switch 2 via the serial port, complete the following steps:

Procedure 1. Configure the switch. On initial boot and connection to the serial or console port of the switch, the Comware setup should automatically start and attempt to enter Automatic configuration. Automatic configuration attempt: 1. Interface used: M-GigabitEthernet0/0/0. Enable DHCP client on M-GigabitEthernet0/0/0. Automatic configuration is running, press CTRL_C or CTRL_D to break. 2. Press ctrl and c or ctrl and d to stop the automatic configurations. Whenever the instructions call for network configuration in "system view" context, if at the prompt, issue the system-view command to get to the [HPE] prompt. Automatic configuration is aborted. Line aux0 is available.Press ENTER to get started. system-view System View: return to User View with Ctrl+Z. 3. Change the IRF member ID and reboot the switch. [HPE] irf member 1 renumber 2 Renumbering the member ID may result in configuration change or loss. Continue?[Y/N] Y [HPE] save

The current configuration will be written to the device. Are you sure? [Y/N]: y Please input the file name(*.cfg)[flash:/startup.cfg] (To leave the existing filename unchanged, press the enter key): Validating file. Please wait... Saved the current configuration to mainboard device successfully. [HPE] quit reboot Start to check configuration with next startup configuration file, please wait...... DONE! This command will reboot the device. Continue? [Y/N]: y Now rebooting, please wait... 4. Still connected to switch 2 via the serial port, once the switch reboot is complete, configure the IRF ports. system-view [HPE] interface range FortyGigE 2/0/51 FortyGigE 2/0/52 [HPE-if-range] shutdown [HPE-if-range] quit [HPE] irf-port 2/2 [HPE-irf-port2/2] port group interface FortyGigE 2/0/51 [HPE-irf-port2/2] port group interface FortyGigE 2/0/52 [HPE-irf-port2/2] quit [HPE] irf-port-configuration active [HPE] interface range FortyGigE 2/0/51 FortyGigE 2/0/52 [HPE-if-range] undo shutdown [HPE-if-range] quit [HPE] save The current configuration will be written to the device. Are you sure? [Y/N]: y Please input the file name(*.cfg)[flash:/startup.cfg] (To leave the existing filename unchanged, press the enter key): Validating file. Please wait... Saved the current configuration to mainboard device successfully.

Configure IRF priority Configure the domain and IRF parameters. The <> value is an arbitrary number, but must be unique from other IRF domains. Switch prompts will not be displayed in the remaining switch configuration information in this section of the documentation. From system-view, run the following commands. system-view irf domain <> irf member 1 priority 32

irf member 2 priority 30 irf mac-address persistent always

Configure Multi-Active Detection (MAD) and Remote access to the switch Hewlett Packard Enterprise recommends that you implement a multi-active detection (MAD) mechanism to detect the presence of multiple identical IRF fabrics, handle collisions, and recover from faults in the unlikely event of an IRF split or failure. For more information, see HPE 5920 and 5900 Switch Series IRF Configuration Guide. Configuration of MAD is not covered in this document.

Configure the VLANs To create the necessary VLANs, complete the following step on both switches. From system-view, run the following commands. vlan <> name MGMT-VLAN quit vlan <> name vMotion-VLAN quit vlan <> name iSCSI-VLAN quit vlan <> name VM-Production-VLAN1 quit save

Configure IP addresses There are several ways, both inband and out of band, to assign an IP address to the IRF switch pair to manage it. The following example will demonstrate how to configure inband management using the <> vlan interface. From system-view, run the following commands. interface Vlan-interface <> ip address <> <> quit save

Create the public keys on the switch

From system-view, run the following commands. public-key local create rsa Input the modulus length [default = 1024]: Enter public-key local create dsa Input the modulus length [default = 1024]: Enter public-key local create ecdsa secp256r1 SSH to the switch using <>, username admin, and the password <>.

Configure time and NTP

Configure time and NTP on the switch. From system-view, run the following commands.
   system-view

   clock protocol none
   return
   clock datetime 05:58:00 05/21/2015
   system-view
   ntp-service unicast-server <> priority
   clock protocol ntp

save

Add individual port descriptions for troubleshooting

To add individual port descriptions for troubleshooting activity and verification, from system-view, run the following commands.
   interface Ten-GigabitEthernet 1/0/1
   description HC380-node1-FLEXLom-1
   quit
   interface Ten-GigabitEthernet 1/0/2
   description HC380-node2-FLEXLom-1
   quit
   interface Ten-GigabitEthernet 1/0/3
   description HC380-node3-FLEXLom-1
   quit
   interface Ten-GigabitEthernet 2/0/1
   description HC380-node1-FLEXLom-2
   quit
   interface Ten-GigabitEthernet 2/0/2
   description HC380-node2-FLEXLom-2
   quit
   interface Ten-GigabitEthernet 2/0/3
   description HC380-node3-FLEXLom-2
   quit
   interface FortyGigE 1/0/51
   description Switch1-IRF-Switch2-IRF-2/0/51
   quit
   interface FortyGigE 1/0/52
   description Switch1-IRF-Switch2-IRF-2/0/52

   quit
   interface FortyGigE 2/0/51
   description Switch2-IRF-Switch1-IRF-1/0/51
   quit
   interface FortyGigE 2/0/52
   description Switch2-IRF-Switch1-IRF-1/0/52
   quit
   save

Configure VLANs on nodes

To configure the VLANs for each node in the configuration, run the following commands. interface range Ten-GigabitEthernet 1/0/1 to Ten-GigabitEthernet 1/0/3 port link-type trunk undo port trunk permit vlan 1 port trunk pvid vlan <> port trunk permit vlan <> <> <> <> quit interface range Ten-GigabitEthernet 2/0/1 to Ten-GigabitEthernet 2/0/3 port link-type trunk undo port trunk permit vlan 1 port trunk pvid vlan <> port trunk permit vlan <> <> <> <> quit

Cisco Nexus networking

This section describes how to configure a 3-node HC 380 appliance with a pair of Cisco Nexus 5672UP switches.

Network cabling

Use the following table as a guide to configure the network cabling. The table provides an example of the 10GbE networking connectivity to Cisco Nexus 5672UP switches. Connectivity between the switches is provided by an upstream device to avoid the need for any spanning-tree protocol support. This configuration on the Cisco Nexus 5672UP switches is covered in this section of the document. A Cisco virtual Port Channel configuration could also be used to connect the switches to an upstream device, but that configuration is not shown in this example. Note that the connections from the switches to the HC 380 nodes are not supported in a port channel or virtual port channel configuration.

Switch                    Port                 Device                     Port
Cisco Nexus 5672UP - 1    1/1                  HC 380 Node 1              LOM1
Cisco Nexus 5672UP - 2    1/1                  HC 380 Node 1              LOM2
Cisco Nexus 5672UP - 1    1/2                  HC 380 Node 2              LOM1
Cisco Nexus 5672UP - 2    1/2                  HC 380 Node 2              LOM2
Cisco Nexus 5672UP - 1    1/3                  HC 380 Node 3              LOM1
Cisco Nexus 5672UP - 2    1/3                  HC 380 Node 3              LOM2
Cisco Nexus 5672UP - 1    2/5 (17,18,19,20)    Customer upstream switch
Cisco Nexus 5672UP - 1    2/6 (21,22,23,24)    Customer upstream switch
Cisco Nexus 5672UP - 2    2/5 (17,18,19,20)    Customer upstream switch
Cisco Nexus 5672UP - 2    2/6 (21,22,23,24)    Customer upstream switch

In addition to the 10GbE connectivity, the HPE Integrated Lights-Out (iLO) connection on each node must also be connected and configured on the same management network. There are several ways to do this; for example, connect the 1GbE iLO connections to an existing 1GbE switch that is on the same Layer 2 management network (VLAN) as the 10GbE components. You could also insert an X120 1G SFP RJ45 Transceiver (JD089B) into the HPE 5900AF-48XG-4QSFP+ switch and use ports on those switches for the iLOs. The following table shows the 1GbE RJ45 connections required for a three-node 10GbE-based HC 380 solution.

Device              Port
HC 380 Node 1 iLO
HC 380 Node 2 iLO
HC 380 Node 3 iLO

Configuring the switches
The following procedures provide an example of how to configure the two Cisco Nexus 5672UP switches for use with an HC 380 in the general virtualization configuration. Although two 10GbE-based switches are used in this example, the same steps can be applied to a 1GbE-based HC 380 solution using any 1GbE Cisco device running Cisco NX-OS 7, taking into account the additional 1GbE connections that are required. For more information about the connectivity requirements, see "Configuring the network switches." The example below assumes that you already have a 1GbE infrastructure to which the iLO of each HC 380 node is connected. This 1GbE infrastructure must be able to communicate with the Cisco Nexus 5672UP switches configured below. Hewlett Packard Enterprise requires that all connections from the HC 380 (iLO, 10GbE, and/or 1GbE) be connected to the same Layer 2 and Layer 3 networks.

Connecting to the serial console port
For each switch in the configuration, initial setup is performed over the serial console port. This section describes how to connect to that port.

Procedure
1. This task requires a Cisco RJ45-to-DB9 console cable and a serial terminal or a laptop/server running a terminal emulator with an available serial (DB9) port.
2. Connect the serial terminal or laptop/server running the terminal emulator to the serial port of the Cisco Nexus 5672UP switch. The serial port is located on the front (power side) of the switch.
3. On the laptop/server, open a terminal emulation program (for example, Tera Term) and select the serial connection option with the following serial port configuration:
   Bits per second: 9600
   Data bits: 8
   Parity: None
   Stop bits: 1
   Flow control: None
   Emulation: VT100
4. Click OK to open the connection.

Cisco Nexus 5672UP switch 1
To set up the initial configuration for Cisco Nexus 5672UP switch 1, complete the following steps. On initial boot and connection to the serial or console port of the switch, the NX-OS setup starts automatically and attempts to enter Power On Auto Provisioning (POAP). Exit POAP by typing yes, and then complete the basic configuration dialog.
Abort Power on Auto Provisioning and continue with normal setup? (yes/no) [n]: yes
Enter the password for "admin": <>
Confirm the password for "admin": <>
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]: Enter
Configure read-only SNMP community string (yes/no) [n]: Enter
Configure read-write SNMP community string (yes/no) [n]: Enter
Enter the switch name: <>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter
Mgmt0 IPv4 address: <>
Mgmt0 IPv4 netmask: <>
Configure the default gateway? (yes/no) [y]: Enter
IPv4 address of the default gateway: <>
Enable the telnet service? (yes/no) [n]: Enter
Enable the ssh service? (yes/no) [y]: Enter
Type of ssh key you would like to generate (dsa/rsa): rsa
Number of rsa key bits <768-2048>: 2048
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address: <>
Configure default interface layer (L3/L2) [L2]: Enter
Configure default switchport interface state (shut/noshut) [noshut]: shut
Enter basic FC configurations (yes/no) [n]: Enter
Would you like to edit the configuration? (yes/no) [n]: Enter
Review the configuration summary before enabling the configuration.
Use this configuration and save it? (yes/no) [y]: Enter

Cisco Nexus 5672UP switch 2
To set up the initial configuration for Cisco Nexus 5672UP switch 2, complete the following steps. On initial boot and connection to the serial or console port of the switch, the NX-OS setup starts automatically and attempts to enter Power On Auto Provisioning (POAP). Exit POAP by typing yes, and then complete the basic configuration dialog.
Abort Power on Auto Provisioning and continue with normal setup? (yes/no) [n]: yes
Enter the password for "admin": <>
Confirm the password for "admin": <>
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]: Enter
Configure read-only SNMP community string (yes/no) [n]: Enter
Configure read-write SNMP community string (yes/no) [n]: Enter
Enter the switch name: <>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter
Mgmt0 IPv4 address: <>
Mgmt0 IPv4 netmask: <>
Configure the default gateway? (yes/no) [y]: Enter
IPv4 address of the default gateway: <>
Enable the telnet service? (yes/no) [n]: Enter
Enable the ssh service? (yes/no) [y]: Enter
Type of ssh key you would like to generate (dsa/rsa): rsa
Number of rsa key bits <768-2048>: 2048
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address: <>
Configure default interface layer (L3/L2) [L2]: Enter
Configure default switchport interface state (shut/noshut) [noshut]: shut
Enter basic FC configurations (yes/no) [n]: Enter
Would you like to edit the configuration? (yes/no) [n]: Enter
Review the configuration summary before enabling the configuration.
Use this configuration and save it? (yes/no) [y]: Enter

Configure global values
Complete this section on both Cisco Nexus 5672UP switch 1 and Cisco Nexus 5672UP switch 2. Over the serial port on the switch, run the following commands in configure terminal mode to set up the default values for the solution.
switch# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)# policy-map type network-qos jumbo
switch(config-pmap-nq)# class type network-qos class-default
switch(config-pmap-nq-c)# mtu 9216
switch(config-pmap-nq-c)# exit
switch(config-pmap-nq)# exit
switch(config)# system qos
switch(config-sys-qos)# service-policy type network-qos jumbo
switch(config-sys-qos)# exit
switch(config)# copy running-config startup-config
Switch prompts are not shown in the rest of the switch configuration in this section. A verification example follows this block.
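As an optional check (not part of the original procedure), the jumbo-frame settings can be reviewed with commands such as the following; exact output and command availability can vary by NX-OS release.
show policy-map type network-qos jumbo
show queuing interface ethernet 1/1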

Create VLANs
Complete this section on both Cisco Nexus 5672UP switch 1 and Cisco Nexus 5672UP switch 2. From configure terminal mode, run the following commands to create the VLANs needed for the solution.
vlan <>
name MGMT-VLAN
exit
vlan <>
name vMotion-VLAN
exit
vlan <>
name iSCSI-VLAN
exit
vlan <>
name VM-Production-VLAN1
exit
copy running-config startup-config
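To confirm that the VLANs were created on each switch (an optional verification step), the following command can be used:
show vlan brief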

Add individual port descriptions for troubleshooting
To add individual port descriptions for troubleshooting and verification, complete the following steps in configure terminal mode on each switch.
Cisco Nexus 5672UP switch 1
From configure terminal, run the following commands.
interface Ethernet 1/1
description HC380-node1-FLEXLom-1
exit
interface Ethernet 1/2
description HC380-node2-FLEXLom-1
exit
interface Ethernet 1/3
description HC380-node3-FLEXLom-1
exit
copy running-config startup-config
Cisco Nexus 5672UP switch 2
From configure terminal, run the following commands.
interface Ethernet 1/1
description HC380-node1-FLEXLom-2
exit
interface Ethernet 1/2
description HC380-node2-FLEXLom-2
exit
interface Ethernet 1/3
description HC380-node3-FLEXLom-2
exit
copy running-config startup-config
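To review the descriptions after they are applied (an optional check), the following command can be used on each switch:
show interface description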

Create port profiles
Complete this section on both Cisco Nexus 5672UP switch 1 and Cisco Nexus 5672UP switch 2. From configure terminal, run the following commands to create a port profile that simplifies ongoing network administration and configuration.
port-profile type ethernet HC380-Nodes
switchport mode trunk
switchport trunk native vlan <>
switchport trunk allowed vlan <>, <>, <>, <>
spanning-tree port type edge trunk
state enabled
exit
copy running-config startup-config
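To review the profile contents before assigning it to interfaces (an optional check), a command such as the following can be used; availability can vary by NX-OS release.
show port-profile name HC380-Nodes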

Add port profiles to the node-facing interfaces
Complete this section on both Cisco Nexus 5672UP switch 1 and Cisco Nexus 5672UP switch 2. From configure terminal, run the following commands to associate the node-facing interfaces with the port profile.
interface Ethernet 1/1-3
inherit port-profile HC380-Nodes
exit
copy running-config startup-config
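To verify that the interfaces inherited the expected trunk settings (optional), commands such as the following can be used:
show running-config interface ethernet 1/1-3
show interface ethernet 1/1 switchport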

Validating the switch configuration Now that the switches are configured, ensure that your systems are wired properly to the switches and powered on with the operating system running. This section assumes that you are on the management VM.

Test IPv6 connectivity to the iLO of each HC 380 node
To test IPv6 connectivity to the iLO of each HC 380 node, perform the following steps from the HC 380 management VM.

Procedure
1. Open a web browser to the iLO of the first HC 380 node and log in. Use either a username and password combination that you previously set up on the iLO, or the Administrator user with the password printed on the toe tag of the HC 380 node.
2. Once logged in, on the iLO Overview screen, copy or record the Link-Local IPv6 Address.
3. From the Management VM, open the Start menu and open Command Prompt.
4. Type the following command, replacing <> with the Link-Local IPv6 address you obtained in step 2. It is normal for the first few ping attempts to result in "Destination host unreachable" while the system is still discovering the Link-Local IPv6 address of the iLO.
C:\Program Files (x86)\VMware\VMware vSphere CLI> ping <>
Pinging fe80::5265:f3ff:fe63:ca62 with 32 bytes of data:
Destination host unreachable.
Destination host unreachable.
Reply from fe80::5265:f3ff:fe63:ca62: time<1ms
Reply from fe80::5265:f3ff:fe63:ca62: time<1ms
Ping statistics for fe80::5265:f3ff:fe63:ca62:
    Packets: Sent = 4, Received = 2, Lost = 2 (50% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
C:\Program Files (x86)\VMware\VMware vSphere CLI>
5. After the ping command is successful, test browser connectivity by opening Internet Explorer and browsing to the iLO. Enclose the Link-Local IPv6 address in opening and closing square brackets, for example https://[fe80::5265:f3ff:fe63:ca62].
6. Repeat this section for the iLO of each HC 380 node in the configuration.
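If the ping in step 4 never succeeds, Windows sometimes requires a zone index (the interface number) when pinging a link-local IPv6 address. As an illustrative sketch only, assuming a hypothetical interface index of 11 on the management VM (find the real index with netsh), the commands would look like this:
netsh interface ipv6 show interfaces
ping fe80::5265:f3ff:fe63:ca62%11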

Troubleshoot IPv6 connectivity to the iLO of each HC 380 node If you are not able to ping the Link-Local IPv6 Address of an iLO, perform the following steps.

Procedure
1. Ensure that IPv6 is enabled on the iLO.
   a) Log in to the iLO and browse to Network > iLO Dedicated Network Port.
   b) Click the IPv6 tab and ensure that the following options are enabled:

      • iLO Client Applications use IPv6 first
      • Stateless Address Auto Configuration (SLAAC)
      • DHCPv6 in Stateful Mode (Address)
      • DHCPv6 in Stateless Mode (Other)
   c) Reboot the iLO if you had to enable or change an option.
2. Ensure that the iLO IP addresses of the HC 380 nodes are accessible from the ESXi management network.
3. Ensure that IPv6 is enabled on your network. This includes not only the ToR switches configured earlier, but also the entire network infrastructure between the ToR switches and the 1GbE switches that carry the iLO connections.
   a) IPv6 is enabled by default on a Comware-based device; however, it can be disabled using ACLs.
   b) If the rest of the network uses another switch vendor, IPv6 could also be disabled there. To help eliminate a Comware-based ToR switch used by the HC 380 as the source of blocking, try the following if you have access to the network switch.
      I. Log in to the Comware-based switch and enter system view. Run the following commands to enable IPv6 on your management VLAN interface.
[HPE] interface Vlan-interface <>
[HPE-Vlan-interface] ipv6 address auto link-local
[HPE-Vlan-interface] quit
      II. Obtain the IPv6 address. Record or copy the link-local value shown in the output below.
[HPE] display ipv6 interface vlan <>
Vlan-interface1 current state: UP
Line protocol current state: UP
IPv6 is enabled, link-local address is FE80::D27E:28FF:FECF:5B5B
No global unicast address configured
Joined group address(es):
   c) Try to ping the network switch from the management VM. <> is the IPv6 Link-Local address of the switch.

It is normal for the first few ping attempts to result in "Destination host unreachable" while the system is still discovering the Link-Local IPv6 address of the switch.
C:\Program Files (x86)\VMware\VMware vSphere CLI>ping <>
Pinging fe80::d27e:28ff:fecf:5b5b with 32 bytes of data:
Destination host unreachable.
Destination host unreachable.
Reply from fe80::d27e:28ff:fecf:5b5b: time=2ms
Reply from fe80::d27e:28ff:fecf:5b5b: time=1ms
Ping statistics for fe80::d27e:28ff:fecf:5b5b:
    Packets: Sent = 4, Received = 2, Lost = 2 (50% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 2ms, Average = 1ms
C:\Program Files (x86)\VMware\VMware vSphere CLI>

   d) If the ping is successful, that network switch is most likely configured for IPv6. Retry this same test on the next switch in the path, if possible, all the way to the 1GbE switch, until you find the point in the network where IPv6 is disabled or blocked.
   e) Remove the IPv6 link-local address from the switch if the network administrators do not want it enabled.
[HPE] interface Vlan-interface <>
[HPE-Vlan-interface] undo ipv6 address auto link-local
[HPE-Vlan-interface] quit

Test IPv6 connectivity to VMware ESXi hosts
To test IPv6 connectivity to each VMware ESXi node, perform the following steps from the management VM.

Procedure
1. Open a web browser to the iLO of the first HC 380 node and log in. Use either a username and password combination that you previously set up on the iLO, or the Administrator user with the password printed on the toe tag of the HC 380 node.
2. Once logged in, open a remote console to the system. Accept any security warnings that appear.
3. Press Alt+F1 and log in to the ESXi command interface with the username root and password HyperConv!234.
4. Run the following command and record the entry for vmk3:
[root@node1:~] esxcli network ip interface ipv6 address list
Interface  Address                    Netmask  Type    Status
vmk0       fe80::5265:f3ff:fe63:c3bc  64       STATIC  PREFERRED
vmk2       fe80::250:56ff:fe6e:b544   64       STATIC  PREFERRED
vmk4       fe80::250:56ff:fe6c:323f   64       STATIC  PREFERRED
vmk2       fe80::250:56ff:fe64:59ad   64       STATIC  PREFERRED
vmk3       fe80::250:56ff:fe6f:4049   64       STATIC  PREFERRED
5. From the Management VM, open the Start menu and open Command Prompt.
6. Type the following command, replacing <> with the Link-Local IPv6 address you obtained in step 4. It is normal for the first few ping attempts to result in "Destination host unreachable" while the system is still discovering the Link-Local IPv6 address of the host.
C:\Program Files (x86)\VMware\VMware vSphere CLI> ping <>
Pinging fe80::250:56ff:fe6f:4049 with 32 bytes of data:
Reply from fe80::250:56ff:fe6f:4049: time<1ms
Reply from fe80::250:56ff:fe6f:4049: time<1ms
Reply from fe80::250:56ff:fe6f:4049: time<1ms
Reply from fe80::250:56ff:fe6f:4049: time<1ms
Ping statistics for fe80::250:56ff:fe6f:4049:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
C:\Program Files (x86)\VMware\VMware vSphere CLI>
7. Repeat this section for each HC 380 node in the configuration.
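If the ping from the management VM fails, connectivity can also be checked from the ESXi host side. As a hedged sketch only (option support can vary by ESXi release), from the ESXi console opened in step 3 you could run the following, where <> is the link-local IPv6 address of the device you are testing against (for example, the switch address recorded earlier):
vmkping -6 -I vmk3 <>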

Troubleshoot IPv6 connectivity to VMware ESXi on each HC 380 node
If you are not able to ping the Link-Local IPv6 Address of a VMware ESXi node, perform the following steps.

Procedure
1. IPv6 should be enabled from the factory on the ESXi node. If you do not see an IPv6 address on the ESXi login screen, the node is not in a factory state and a recovery procedure should be run on the node.
2. Ensure that your HC 380 nodes and their iLOs are on the same Layer 2 and Layer 3 management network/VLAN. Placing them on separate Layer 2 or Layer 3 configurations is not supported.
3. Ensure that IPv6 is enabled on your network.
   a) IPv6 is enabled by default on a Comware-based device; however, it can be disabled via ACLs. To help eliminate a Comware-based switch used by the HC 380 as the source of blocking, try the following if you have access to the network switch.
      I. Log in to the Comware-based switch and enter system view. Run the following commands to enable IPv6 on your management VLAN interface.
[HPE] interface Vlan-interface <>
[HPE-Vlan-interface] ipv6 address auto link-local
[HPE-Vlan-interface] quit
      II. Ping the link-local IPv6 address of the ESXi node (<>).
[HPE] ping ipv6 -i Vlan-interface <> <>
Ping6(56 data bytes) FE80::D27E:28FF:FECF:5B5B --> FE80::250:56FF:FE6F:4049, press CTRL_C to break
56 bytes from FE80::250:56FF:FE6F:4049, icmp_seq=0 hlim=64 time=2.029 ms
56 bytes from FE80::250:56FF:FE6F:4049, icmp_seq=1 hlim=64 time=1.741 ms
56 bytes from FE80::250:56FF:FE6F:4049, icmp_seq=2 hlim=64 time=1.545 ms
56 bytes from FE80::250:56FF:FE6F:4049, icmp_seq=3 hlim=64 time=1.703 ms
56 bytes from FE80::250:56FF:FE6F:4049, icmp_seq=4 hlim=64 time=1.636 ms
--- Ping6 statistics for fe80::250:56ff:fe6f:4049 ---
5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss
round-trip min/avg/max/std-dev = 1.545/1.731/2.029/0.163 ms
%Jan 30 15:42:35:466 2011 mars10net PING/6/PING_STATIS_INFO: Ping6 statistics for fe80::250:56ff:fe6f:4049: 5 packet(s) transmitted, 5 packet(s) received, 0.0% packet loss, round-trip min/avg/max/std-dev = 1.545/1.731/2.029/0.163 ms.
   b) If the ping is successful, that network switch is IPv6-enabled. Retry this same test on the next switch in the path, if possible, all the way to the 1GbE switch, until you find the point in the network where IPv6 is disabled or blocked.
   c) Remove the IPv6 link-local address from the switch if the network administrators do not want it enabled.

[HPE] interface Vlan-interface <>
[HPE-Vlan-interface] undo ipv6 address auto link-local
[HPE-Vlan-interface] quit

Uplink into existing network infrastructure
Depending on the available network infrastructure, several methods and features can be used to uplink the HPE 5900AF-48XG-4QSFP+ or Cisco Nexus 5672UP switches configured above to an existing network infrastructure. Hewlett Packard Enterprise recommends using bridge aggregations (HPE) or virtual port channels (Cisco) to uplink the switches into the infrastructure; a sketch of the HPE approach follows this paragraph. For more information, see the HP 5920 and 5900 Layer 2 - LAN Switching Configuration Guide or the Cisco Nexus 5600 Series NX-OS Interfaces Configuration Guide, Release 7.x.
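The following Comware commands are for illustration only and are not part of the validated procedure. They assume a hypothetical dynamic bridge aggregation numbered 10 that uplinks ports Ten-GigabitEthernet 1/0/48 and 2/0/48 to the upstream infrastructure and trunks the solution VLANs; substitute your own ports and VLAN IDs and confirm the design with your network team.
interface Bridge-Aggregation 10
description Uplink-to-core
link-aggregation mode dynamic
quit
interface Ten-GigabitEthernet 1/0/48
port link-aggregation group 10
quit
interface Ten-GigabitEthernet 2/0/48
port link-aggregation group 10
quit
interface Bridge-Aggregation 10
port link-type trunk
port trunk permit vlan <> <> <> <>
quit
save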

Configuration worksheet
The configuration worksheet contains a list of values that are required to complete this deployment guide. Before beginning deployment of an HC 380, ensure that the configuration worksheet is completed with correct values validated by the customer. You can expand the example configuration worksheet as needed to suit your needs.

Global Networking

Description                       Variable    Value
Management Network VLAN           <>
Management Network Netmask        <>
Management Network Gateway        <>
Management Network NTP Server 1   <>
vMotion Network VLAN              <>
iSCSI Network VLAN                <>
VM Production Network VLAN        <>

Description                          Variable    Value
Network Switch 1 Management IP       <>
Network Switch 2 Management IP       <>
Network Switch 1 Hostname            <>
Network Switch 2 Hostname            <>
Network Switch Admin Password        <>
Network Switch IRF or VPC Domain ID  <>

Appendix B: CloudSystem 9.0 Management Host Networking

Appendix C: CloudSystem 9.0 Compute Host Networking

Appendix D: CloudSystem 9.0 Consoles

Although this document assumes that you are familiar with CloudSystem 9.0, determining which console to access can be confusing. Refer to the following table for clarification and default credentials.

Console name (default credentials)                    IP address
CS9 Operations Console (admin/unset)                  Management Appliance DCM VIP
HPE Helion OpenStack Horizon Console (admin/unset)    Cloud Controller Appliance DCM VIP
Management Appliance (cloudadmin/cloudadmin)          CloudSystem Appliance DCM VIP
MarketPlace Portal (consumer/cloud)                   Enterprise Appliance CAN VIP
CSA Management Console (admin/cloud)                  Enterprise Appliance CAN VIP
Operations Orchestration (administrator/unset)        Enterprise Appliance CAN VIP

Appendix E: CloudSystem Network Diagram

CloudSystem introduces several new networks that require VLAN configuration on your top of rack (ToR) switches. This section gives a brief overview, but review the HPE Helion CloudSystem 9.0 Network Planning Guide, available on the Hewlett Packard Enterprise website, before deploying CloudSystem. The VLAN IDs must be unique and between 1 and 4095. Note that the CloudSystem installation utility does not make any modifications to ToR switches and assumes that the configuration has already been completed. The following figure shows the CloudSystem networking overview as it relates to the HC 380 platform.

Appendix F: Remote vCenter setup

NOTE: A remote vCenter setup is not supported with CloudSystem.

The HC 380 node includes a built-in Management VM on which VMware vCenter Server is pre-installed. In OneView InstantOn, this is considered a local vCenter setup. With OneView InstantOn, you can instead deploy the system storage to an external instance of VMware vCenter Server that you provide (that is, the software is not installed on the HC 380 Management VM). In OneView InstantOn, this is considered a remote vCenter setup. A remote setup allows you to centrally manage multiple remote sites or deployments while reducing vCenter licensing costs. There are two deployment options for the remote vCenter setup. One option is a Windows server running VMware vCenter Server and OneView for VMware vCenter on the same 1GbE or 10GbE network as the HC 380 system, as shown below.

Item   Description
1      Windows server running VMware vCenter Server and OneView for VMware vCenter
2      1GbE or 10GbE network
3      HC 380 cluster (with built-in Management VM)

If you have a vCenter Server Appliance (vCSA), the vCSA should be on a local network with the Windows server running OneView for VMware vCenter. The Windows server is on the 1 GbE or 10 GbE network with the system, as shown below. Currently, OneView for VMware vCenter is not supported on the vCSA.

Item   Description
1      Windows server running OneView for VMware vCenter
2      vCenter Server Appliance (vCSA)
3      1GbE or 10GbE network
4      HC 380 cluster (with built-in Management VM)

Appendix G: Management group quorum consideration

If you are deploying a two-node system, OneView InstantOn displays the Quorum Setting field on the Settings screen. You must enter an NFS file share as the Quorum Witness for the StoreVirtual management group. Within a management group, managers are storage systems that govern the activity of all the storage systems in the group. Managers use a voting algorithm to coordinate storage system behavior. In this voting algorithm, a strict majority of managers (a quorum) must be running and communicating with each other for the management group to function. An odd number of managers is recommended to ensure that a majority is easily maintained. An even number of managers can result in a state where no majority exists and can make the management group unavailable. The Quorum Witness is the method used to maintain quorum in a two-node system. For more information, see "Working with managers and quorum" in the HPE StoreVirtual Storage User Guide found on the Hewlett Packard Enterprise website. When you expand a two-node cluster with a Quorum Witness to three nodes, the Quorum Witness is no longer required. The following table shows how quorum is configured by OneView InstantOn during deployment or expansion of a management group, depending on the number of hosts.

Resultant number of nodes in cluster   Quorum Witness     Virtual Manager   Number of regular managers
2                                      Yes                Not applicable    2
2                                      Not applicable 1   Yes 2             2
3                                      No                 No                3
4                                      No                 No                3
5                                      No                 No                5
>5                                     No                 No                5

1 When deploying a two-node system, if OneView InstantOn is unsuccessful in connecting to the NFS file share, a Virtual Manager is installed on the management group. In a two-node system, the Quorum Witness is considered the best method for maintaining high availability in the management group.
2 After deployment, open the StoreVirtual CMC to verify the type of manager configured. If needed, use the StoreVirtual CMC to configure the management group with the Quorum Witness.

Appendix H: IP addresses for sample cluster

ESXi management network IP addresses worksheet
The following table shows the required IP addresses and sample values for the ESXi management network.

Addresses                    Purpose                               Count   Example IP                     Example CIDR    Example VLAN ID 1
W.X.Y.n                      HC 380 Management UI VM               1       172.28.0.1                     255.255.240.0
W.X.Y.n+1                    HC 380 OneView VM                     1       172.28.0.2                     255.255.240.0
W.X.Y.n+2                    HC 380 Management VM 2                1       172.28.0.3                     255.255.240.0
W.X.Y.n+3 - W.X.Y.n+18       ESXi Node 1 to ESXi Node 16 2         16      172.28.0.4 - 172.28.0.19       255.255.240.0
W.X.Y.n+19 - W.X.Y.n+43      CloudSystem Mgmt and Compute VMs 1    25      172.28.0.20 - 172.28.0.44      255.255.240.0   410
W.X.Y.n+44 - W.X.Y.n+46      DCM Management Appliance VIP 1        3       172.28.0.45                    255.255.240.0   410
                             DCM Cloud Controller VIP 1                    172.28.0.46
                             DCM Enterprise Appliance VIP 1                172.28.0.47
W.X.Y.n+110 - W.X.Y.n+125    Node 1 iLO through Node 16 iLO        16      172.28.0.111 - 172.28.0.126    255.255.240.0
                             (one address per node, in order)

1 Required for CloudSystem only
2 Must be contiguous

Customer worksheet
The following table can be used by the customer to identify the IP addresses needed for the ESXi management network.

Addresses                    Purpose                               Count   Customer IP   Customer CIDR   Customer VLAN ID 1
W.X.Y.n                      HC 380 Management UI VM               1
W.X.Y.n+1                    HC 380 OneView VM                     1
W.X.Y.n+2                    HC 380 Management VM 2                1
W.X.Y.n+3 - W.X.Y.n+18       ESXi Node 1 to ESXi Node 16 2         16
W.X.Y.n+19 - W.X.Y.n+43      CloudSystem Mgmt and Compute VMs 1    25
W.X.Y.n+44 - W.X.Y.n+46      DCM Management Appliance VIP 1        3
                             DCM Cloud Controller VIP 1
                             DCM Enterprise Appliance VIP 1
W.X.Y.n+110 - W.X.Y.n+125    Node 1 iLO (SN:          )            16
                             Node 2 iLO (SN:          )
                             Node 3 iLO (SN:          )
                             Node 4 iLO (SN:          )
                             Node 5 iLO (SN:          )
                             Node 6 iLO (SN:          )
                             Node 7 iLO (SN:          )
                             Node 8 iLO (SN:          )
                             Node 9 iLO (SN:          )
                             Node 10 iLO (SN:          )
                             Node 11 iLO (SN:          )
                             Node 12 iLO (SN:          )
                             Node 13 iLO (SN:          )
                             Node 14 iLO (SN:          )
                             Node 15 iLO (SN:          )
                             Node 16 iLO (SN:          )

1 Required for CloudSystem only
2 Must be contiguous

vSphere vMotion network IP addresses worksheet
The following table shows the required IP addresses and sample values for the vSphere vMotion network.

Addresses         Purpose                           Count   Example IP                    Example CIDR    Example VLAN ID
vSphere vMotion   Node 1 - Node 16 IP addresses 1   16      172.28.20.1 - 172.28.20.16    255.255.255.0   111

1 Must be contiguous

Customer worksheet
The following table can be used by the customer to identify the IP addresses needed for the vSphere vMotion network.

Addresses         Purpose                           Count   Customer IP   Customer CIDR   Customer VLAN ID
vSphere vMotion   Node 1 - Node 16 IP addresses 1   16

1 Must be contiguous

Storage network IP addresses worksheet
The following table shows the required IP addresses and sample values for the storage network.

Addresses   Purpose                           Count   Example IP                    Example CIDR    Example VLAN ID
Storage     Node 1 - Node 16 IP addresses 1   50      172.28.30.1 - 172.28.30.50    255.255.255.0   112

1 Must be contiguous

Customer worksheet
The following table can be used by the customer to identify the IP addresses needed for the storage network.

Addresses   Purpose                           Count   Customer IP   Customer CIDR   Customer VLAN ID
Storage     Node 1 - Node 16 IP addresses 1   50

1 Must be contiguous

CloudSystem network IP addresses worksheet
The following table shows the required IP addresses and sample values for the CloudSystem management network. If you are not planning to install CloudSystem, you do not need this worksheet.

Purpose                        Count   Example IP                     Example CIDR    Example VLAN ID
Cloud Management Network       1                                                      610
CAN IP Range                   3       192.168.10.1 - 192.168.10.3    255.255.255.0   710
CAN Cloud Controller VIP       1       192.168.10.4                   255.255.255.0   710
CAN Enterprise Appliance VIP   1       192.168.10.5                   255.255.255.0   710
Provider Network 1             1       192.168.11.0                   255.255.255.0   750
Provider Network 2             1       192.168.11.1                   255.255.255.0   751
Provider Network 3             1       192.168.11.2                   255.255.255.0   752
Provider Network 4             1       192.168.11.3                   255.255.255.0   753
Tenant Network 1               1       192.168.14.0                   255.255.255.0   801
Tenant Network 2               1       192.168.14.1                   255.255.255.0   802
Tenant Network 3               1       192.168.14.2                   255.255.255.0   803
Tenant Network 4               1       192.168.14.3                   255.255.255.0   804
External Network               1       192.168.12.0                   255.255.255.0   850

Customer worksheet
The following table can be used by the customer to identify the IP addresses needed for the CloudSystem network.

Purpose                        Count   Customer IP   Customer CIDR   Customer VLAN ID
Cloud Management Network       1
CAN IP Range                   3
CAN Cloud Controller VIP       1
CAN Enterprise Appliance VIP   1
Provider Network 1             1
Provider Network 2             1
Provider Network 3             1
Provider Network 4             1
Tenant Network 1               1
Tenant Network 2               1
Tenant Network 3               1
Tenant Network 4               1
External Network               1

Specifications

HC 380 specifications
For environmental, mechanical, and power supply specifications for the server nodes, see the HPE ProLiant DL380 Gen9 Server User Guide, available at the Hewlett Packard Enterprise website. For environmental, mechanical, and power supply specifications for the server rack, see the server rack documentation.

Support and other resources

Accessing Hewlett Packard Enterprise Support
• For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
  http://www.hpe.com/assistance
• To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website:
  http://www.hpe.com/support/hpesc

Information to collect
• Technical support registration number (if applicable)
• Product name, model or version, and serial number
• Operating system name and version
• Firmware version
• Error messages
• Product-specific reports and logs
• Add-on products or components
• Third-party products or components

Accessing updates
• Some software products provide a mechanism for accessing software updates through the product interface. Review your product documentation to identify the recommended software update method.
• To download product updates:
  Hewlett Packard Enterprise Support Center: www.hpe.com/support/hpesc
  Hewlett Packard Enterprise Support Center Software downloads: www.hpe.com/support/downloads
  Software Depot: www.hpe.com/support/softwaredepot
• To subscribe to eNewsletters and alerts:
  www.hpe.com/support/e-updates
• To view and update your entitlements, and to link your contracts and warranties with your profile, go to the Hewlett Packard Enterprise Support Center More Information on Access to Support Materials page:
  www.hpe.com/support/AccessToSupportMaterials

IMPORTANT: Access to some updates might require product entitlement when accessed through the Hewlett Packard Enterprise Support Center. You must have an HP Passport set up with relevant entitlements.

Customer self repair
Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a CSR part needs to be replaced, it will be shipped directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider or go to the CSR website:
http://www.hpe.com/support/selfrepair

Remote support
Remote support is available with supported devices as part of your warranty or contractual support agreement. It provides intelligent event diagnosis and automatic, secure submission of hardware event notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for remote support. If your product includes additional remote support details, use search to locate that information.

Remote support and Proactive Care information
HPE Get Connected: www.hpe.com/services/getconnected
HPE Proactive Care services: www.hpe.com/services/proactivecare
HPE Proactive Care service, Supported products list: www.hpe.com/services/proactivecaresupportedproducts
HPE Proactive Care advanced service, Supported products list: www.hpe.com/services/proactivecareadvancedsupportedproducts

Proactive Care customer information
Proactive Care central: www.hpe.com/services/proactivecarecentral
Proactive Care service activation: www.hpe.com/services/proactivecarecentralgetstarted

Warranty information
To view the warranty for your product, see the Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products document, available at the Hewlett Packard Enterprise Support Center:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts

Additional warranty information
HPE ProLiant and x86 Servers and Options: www.hpe.com/support/ProLiantServers-Warranties
HPE Enterprise Servers: www.hpe.com/support/EnterpriseServers-Warranties
HPE Storage Products: www.hpe.com/support/Storage-Warranties
HPE Networking Products: www.hpe.com/support/Networking-Warranties

Regulatory information
To view the regulatory information for your product, see the Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products document, available at the Hewlett Packard Enterprise Support Center:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts

Additional regulatory information
Hewlett Packard Enterprise is committed to providing our customers with information about the chemical substances in our products as needed to comply with legal requirements such as REACH (Regulation EC No 1907/2006 of the European Parliament and the Council). A chemical information report for this product can be found at:
www.hpe.com/info/reach
For Hewlett Packard Enterprise product environmental and safety information and compliance data, including RoHS and REACH, see:
www.hpe.com/info/ecodata
For Hewlett Packard Enterprise environmental information, including company programs, product recycling, and energy efficiency, see:
www.hpe.com/info/environment

Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback ([email protected]). When submitting your feedback, include the document title, part number, edition, and publication date located on the front cover of the document. For online help content, include the product name, product version, help edition, and publication date located on the legal notices page.

Acronyms and abbreviations

CAN    consumer access network
CIDR   classless inter-domain routing
CLM    cloud management network
DCM    data center management
DNS    domain name system
DRS    Distributed Resource Scheduler
EVC    Enhanced vMotion Compatibility
HA     high availability
HDD    hard disk drive or hard drive
iLO    Integrated Lights-Out
IRF    Intelligent Resilient Framework
LDAP   Lightweight Directory Access Protocol
LOM    LAN on Motherboard
NFS    network file system
NTP    network time protocol
SSD    solid-state device
SSO    single sign-on
TMRA   recommended ambient operating temperature
ToR    top of rack
VDI    virtual desktop infrastructure
VLAN   virtual local-area network
VM     Virtual Machine
VSA    VMware vSphere Storage Appliance

