Using Oracle OpenStack with Oracle VM Server for SPARC

Leverage the Benefits of Oracle VM Server for SPARC in Oracle OpenStack Environment

WHITE PAPER / NOVEMBER 13, 2018

DISCLAIMER The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


Table of Contents

Introduction
Using This Document
Release Notes
Before You Begin
Known Issues
FAQs
Security Precautions and Recommendations
Securing Oracle VM Server for SPARC General OpenStack Components
OpenStack Nova: Oracle VM Server for SPARC Security-Related Configuration
Using Your Own SSL/TLS Certificates
AUTHENTICATION (RSA)
Virtual Machine Management Service Security-Related Configuration Options
Security Warnings
Oracle OpenStack Administration for Oracle VM Server for SPARC
Adding a Compute Node By Using the add-compute-node Utility
Adding Logical Domain Flavors
Creating Basic Networks
Creating and Uploading Glance Images
How to Create a WAN Boot Configuration Image for Glance
Running OpenStack Commands
Configuring a Demonstration Environment
Verifying That a Remote Agent Is Functional
Log Files
Configuration Files
ml2_conf.ini (Neutron ML2) Configuration Options
Troubleshooting Oracle VM Server for SPARC for Oracle OpenStack
Using Log Files
Validating Your Nova Compute Environment
Troubleshooting and Debugging the VMMS
Troubleshooting VM Deployment Issues
Instance Failed To Spawn: ConnectionError: ('Connection aborted', BadStatusLine("''",))
Creating Basic Neutron Networks in Oracle OpenStack On Oracle Linux 5.0 (O3L)
Serial Console Instance Access Failing
Demonstration Environment Build Quick Start
Controller Node Prerequisites
Oracle VM Server for SPARC OpenStack Compute Node(s) Prerequisites
Oracle VM Server for SPARC OpenStack Guest Domain(s) Prerequisites
Obtaining Required Software
Creating a Single-Node Demonstration Environment
Build and Configure the OpenStack Controller
Build and Configure the Oracle SPARC OpenStack Compute Node
Deploy the OpenStack Controller Services on the Virtual Machine
Access and Configure the OpenStack Environment
Adding a Second SPARC Nova Compute Node
Appendix
Example Configuration Files


INTRODUCTION

OpenStack is a popular choice for managing private cloud solutions for sensitive workloads with strict security and privacy requirements that also demand low-latency connectivity. Oracle OpenStack meets these requirements and increases ROI by readily supporting mission-critical business applications and driving down the costs of integration, operations, and support.

Oracle OpenStack now provides the broadest offering across both x86 and SPARC environments, and offers greater flexibility by supporting the KVM hypervisor, Docker containers, and Oracle VM Server for SPARC. Customers can leverage the Oracle SPARC RAS features, utilization capabilities, security, and flexibility that Oracle VM Server for SPARC offers, while benefiting from the intuitive, uniform management interface of OpenStack.

This document will assist with the setup of a demonstration environment to test the OpenStack Nova driver for Oracle VM Server for SPARC by using provided templates and container images that can be downloaded from Oracle.

The procedures described are for a demonstration environment, although portions can be reused in a production deployment. For production environments, take proper security precautions and follow recommended practices.


USING THIS DOCUMENT

- Overview – Provides cloud administrators with detailed information and procedures that describe the installation and configuration of an OpenStack Nova compute node with the Oracle VM Server for SPARC software.
- Audience – Cloud administrators who manage cloud services on SPARC servers.
- Required knowledge – Cloud administrators on these servers must have a working knowledge of SPARC systems, the Oracle Solaris operating system (Oracle Solaris OS), and OpenStack.

RELEASE NOTES

Before You Begin

ORACLE VM SERVER FOR SPARC OPENSTACK NOVA DRIVER AND UTILITIES 3.0 FEATURES

The following features are part of the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 software:

- Guest domain operating systems: Oracle Solaris 10 and Oracle Solaris 11
- Live attach and detach operations: volumes and networks. Note that the network is not automatically configured in the guest domain on attach.
- Logical domain migration: live migration and cross-CPU live migration
- Network: virtual LANs (vlan), "flat" networking, and WANBoot
- OS installation: instance creation and installation from both ISO and RAW images
- Storage: Fibre Channel, iSCSI, NFS, and local storage on a compute node
- VM lifecycle functions: create, update, restart, and destroy logical domains

UNSUPPORTED ORACLE VM SERVER FOR SPARC OPENSTACK NOVA DRIVER AND UTILITIES 3.0 FEATURES

The following lists show the Oracle VM Server for SPARC OpenStack 1.0, Oracle Solaris OS, and Oracle VM Server for SPARC features that are not available with the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 software:

- Oracle VM Server for SPARC OpenStack 1.0:
  - Linux for SPARC guest domains
  - Nova evacuate capability
  - Nova rebuild capability
- Oracle Solaris OS and Oracle VM Server for SPARC:
  - Image types such as unified archives (.uar) and qcow2
  - Pause VM capability
  - VXLAN
  - CHAP authentication when using iSCSI

WHAT'S NEW IN THIS RELEASE

The Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 software is a significant change from the Oracle VM Server for SPARC OpenStack 1.0 software that supported Oracle VM Server for SPARC. This release is compatible with the OpenStack Queens release and requires Oracle Linux 7.x with Oracle OpenStack on Oracle Linux 5.0 to operate. It may be used with other OpenStack Queens distributions; however, additional work may be required that is not covered by this documentation.


Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 implements a new architecture to accommodate Oracle Solaris OS changes. As such, there is no upgrade path from Oracle VM Server for SPARC OpenStack 1.0 to Oracle VM Server for SPARC OpenStack 3.0.

Most features that are available in the 1.0 release are also available in this release. Some capabilities of the 1.0 driver are not present in this release. See the section Unsupported Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 Features in this document.

ORACLE VM SERVER FOR SPARC OPENSTACK NOVA DRIVER AND UTILITIES 3.0 SYSTEM REQUIREMENTS

You can find information about the recommended and minimum software component versions to use with the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 release in the Oracle VM Server for SPARC For Oracle OpenStack Non-Production Quick Start Guide.

DEPRECATED AND REMOVED ORACLE VM SERVER FOR SPARC OPENSTACK FEATURES

The following features have been deprecated in the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 software:

- Support for Linux for SPARC
- Distributed Lock Management and Evacuation Support

Known Issues

This section contains general issues and specific bugs concerning the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 software.

- When using storage on a compute node (rather than Cinder storage), a ZFS share occasionally fails to be removed. If this occurs, an administrator may need to clean up the ZFS shares after confirming that the ZVOL is no longer in use.
- Some capabilities of the 1.0 driver are not present in this driver, as described in the Deprecated and Removed Oracle VM Server for SPARC OpenStack Features section above.
- When using this driver with a third-party OpenStack solution (that is, not Oracle OpenStack on Oracle Linux 5.0 with Queens support), minor changes to vm_mode.py and hv_type.py are needed. In short, "ldoms" must be added to the list of valid hypervisors: add LDM = "ldoms" and then add LDM to the ALL list in both files to enable Oracle VM Server for SPARC support within OpenStack/Nova.
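The change described above follows the pattern below. This is a minimal, hypothetical sketch of the constant-plus-ALL-list idiom used in Nova's vm_mode.py and hv_type.py; the real files contain many more entries, and the surrounding names here are illustrative only:

```python
# Hypothetical, trimmed-down sketch of the idiom in nova's
# hv_type.py / vm_mode.py: each hypervisor type is a module-level
# constant, and ALL enumerates the valid values.
KVM = "kvm"
XEN = "xen"
LDM = "ldoms"  # new: Oracle VM Server for SPARC logical domains

# Adding LDM here is what makes "ldoms" a recognized hypervisor type.
ALL = (KVM, XEN, LDM)

def is_valid(name):
    """Return True if name matches a known hypervisor type."""
    return name.lower() in ALL
```

Apply the analogous two-line change (the constant and the ALL membership) to both files in the third-party distribution before restarting the Nova services.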

BUGS AFFECTING THE ORACLE VM SERVER FOR SPARC OPENSTACK NOVA DRIVER AND UTILITIES 3.0 SOFTWARE

This section summarizes the bugs that you might encounter when using this version of the software. The most recent bugs are described first. Workarounds and recovery procedures are specified, if available.

GENERAL ISSUES

General issues that you might encounter when using Oracle VM Server for SPARC with OpenStack.

- Cannot Type in the Console Window for a VM
There is an OpenStack console-focus issue, not specific to the Oracle VM Server for SPARC OpenStack Nova driver. To address this issue, click the blue bar at the top of the console window.

- Cannot Deploy EFI Images to Older Hardware
Some older servers (such as UltraSPARC T2 servers) do not support EFI labels. As such, you must create VTOC-labeled VM images to support both old and new hardware. This issue also imposes disk size limitations.

- Cannot Set cpu-arch Property Value After Deployment
If the cpu-arch property is set on a VM, the Nova driver cannot change the cpu-arch property value later. This issue occurs because flavor migration is not yet supported by the Oracle VM Server for SPARC OpenStack Nova driver.

- Oracle Solaris 10 Guest Domains: Automated Disk Expansion Is Supported Only with the ZFS Root
You must use a ZFS root to use the automated disk expansion capability. File systems and volume managers such as UFS, SVM, and VxFS are not supported by this capability.

- Console Logs Are Not Available After a Live Migration
The vntsd console log is not migrated with the guest domain. As a result, the pre-migration console log entries are no longer available and only recent log entries appear.

- Mismatched MTUs on the Management Network Can Be Problematic
You might experience problems with the message queue or other OpenStack services if your controller and compute nodes have mismatched MTUs on their management interfaces, which are used for OpenStack management communications. An example of a mismatched MTU configuration is a compute node management network MTU of 9000 bytes and a controller node MTU of 1500 bytes. Ensure that all hosts use the same MTU on the management network.
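One quick way to compare MTUs on a Linux node is to read the value from sysfs. This is a sketch; lo is a placeholder for your management interface name:

```shell
# Read the interface MTU from sysfs; substitute the management
# interface name (for example, eth0 or bond0) for lo, and compare
# the values reported on every controller and compute node.
cat /sys/class/net/lo/mtu
```

On Oracle Solaris compute nodes, dladm show-linkprop -p mtu net0 reports the equivalent value.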

- Avoid Inline Comments in OpenStack Configuration Files
You might experience problems if you add a comment (#) at the end of a configuration line in an OpenStack configuration file, because OpenStack interprets inline comments as part of the value. For example, the admin_password=welcome1 #my password configuration line is interpreted as specifying the password as welcome1 #my password. Ensure that you place comments on lines by themselves and that each comment line begins with a comment symbol (#). Use the following command to check a configuration file for inline comments:

# cat /etc/service/service.conf |egrep -v '^#' | grep '#'

- Instance Power State Reported as "No State" After Live Migration of Instance
The power state of a running instance may occasionally change to "No State" when you view the instance in Horizon immediately after the successful completion of a live migration. The power state self-corrects a short time later. This does not affect the actual state of the instance, which remains accessible while its power state refreshes.

- "Rebuild" Does Not Actually Rebuild the VM
The rebuild operation is not yet supported by the Oracle VM Server for SPARC OpenStack Nova driver; only the Nova evacuate operation is supported. If a user attempts a rebuild operation, the VM's existing disk might be recycled rather than re-imaged.

- The nova-compute Service Times Out Waiting for Cinder to Create a LUN When You Create a New Volume
If a Cinder volume is being created with an OS image on it, the OS image copy might take a long time, and Nova can time out waiting for Cinder to complete its task. The nova-compute service (outside of the Oracle VM Server for SPARC Nova driver) simply polls for a period of time, waiting to see whether Cinder created the volume. Consider increasing the following value if you experience these "hangs" in your environment:

block_device_allocate_retries=360

Then, restart the nova-compute service.
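As a placement sketch (the section name follows the upstream Nova configuration layout; verify against your deployed nova.conf), the option belongs in the [DEFAULT] section:

```ini
[DEFAULT]
# Number of retries while waiting for a Cinder volume to be created.
block_device_allocate_retries = 360
```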


- vfs_mountroot Panic Occurs on the First Boot of a Guest Domain
If a vfs_mountroot panic occurs on the first boot of a guest domain, see the section Golden Image Limitations in the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 Administration Guide.

- Serial Console Immediately Closes Its Connection
The serial console validates that the URL in the browser matches the configuration. If the serial console closes the connection immediately, you might need to change the base_url property value to match the name of the controller node that you use to access the console. This name is likely to be the domain name of the system or of another front end such as a load balancer or reverse proxy. The base_url property is specified in the /etc/nova/nova.conf file on compute nodes. The following example nova.conf excerpt shows the base_url property changed from an IP address to a host name that matches the controller name, cloud-controller-hostname:

[serial_console]
enabled = true
#base_url=ws://10.0.68.51:6083/
base_url=ws://cloud-controller-hostname:6083/
listen=$my_ip
proxyclient_address=$my_ip
serialproxy_host=10.0.68.51

After you make changes to the nova.conf configuration file, the nova-compute-ldoms service needs to be restarted:

# docker restart nova_compute_ldoms_1

FAQs

- Are the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 features identical to those in the Oracle VM Server for SPARC OpenStack 1.0 release?
No, but the feature lists are similar. See the section Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 Features in this document.

- Why is the Oracle VM Server for SPARC OpenStack version number 3.0 instead of 2.0?
The 2.0 version of the driver was partially developed but not publicly released because of changes to the Oracle Solaris 11 OS that necessitated a new architecture.

- Can I upgrade from the 1.0 to the 3.0 release?
No. For a number of technical reasons, there is currently no supported upgrade path from the initial Oracle VM Server for SPARC OpenStack 1.0 release. You must first uninstall the Oracle VM Server for SPARC OpenStack 1.0 software and then install the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 software.


SECURITY PRECAUTIONS AND RECOMMENDATIONS

Securing Oracle VM Server for SPARC General OpenStack Components

You can obtain guidance about security features to harden the following components:

- OpenStack General Security: https://docs.openstack.org/security-guide/
- Oracle VM Server for SPARC Compute Node Security:
  - Oracle SPARC Server ILOM Security: https://docs.oracle.com/cd/E37838_01/html/E61019/index.html
  - Oracle Solaris 11.4 Security: https://docs.oracle.com/cd/E37838_01/html/E61019/index.html
  - Oracle VM Server for SPARC Security: https://docs.oracle.com/cd/E93612_01/html/E93619/index.html

OpenStack Nova: Oracle VM Server for SPARC Security-Related Configuration

In addition to performing any general hardening of the OpenStack configuration and services, perform the following hardening tasks:

- Ensure that OpenStack communications are on a private network, not on public or cloud end-user-facing networks.
- Ensure that your configuration separates storage networks from other networks, such as administrative or management networks, public-facing networks, and so on. Implementing storage network separation improves performance and availability in your OpenStack environment.
- Use TLS, which is enabled by default.
- Use a separate security mechanism to secure the link between a compute node and a controller node when using the serial console capability. The serial console is disabled by default. See the section Nova Serial Console Is Not Secured End to End in this document.

The Oracle VM Server for SPARC OpenStack installation scripts configure a root CA on each controller node. The root CA provides certificate signing. You can use your own certificates.

Using Your Own SSL/TLS Certificates

Authenticated and encrypted communications are enabled by default between the Oracle VM for SPARC OpenStack Nova Driver and the Virtual Machine Management Service agent (running on the SPARC compute node).

AUTHENTICATION (RSA)

Authentication is achieved by using 4096-bit RSA private and public keys. When you use the add-compute-node utility, a set of private/public keys is generated and used for all future communications.

You may configure the Oracle VM for SPARC OpenStack Nova Driver and Virtual Machine Management Service agent to consume your own custom set of public and private RSA keys as follows:

1. Oracle VM for SPARC OpenStack Nova Driver configuration
The private RSA signing key is specified in the nova.conf file of each nova_compute_ldoms instance. Each nova_compute_ldoms instance refers to an individual nova.conf file called /etc/kolla/config/nova/nova-compute-ldoms-N.conf. The default value of signing_key_file is /etc/vmms/client/signing_key.pem and may be altered as necessary. For example:

[ldoms]
signing_key_file = /path/to/custom/private.key

Note that after you make any changes to the nova-compute-ldoms-N.conf file, you should run the kollacli reconfigure command on the OpenStack controller to apply the changes:

# kollacli reconfigure

2. Virtual Machine Management Service SPARC compute node configuration
The public RSA verification key is specified in the [service] section of the /etc/vmms/config.ini file. The default value of verification_key_file is /etc/vmms/verification_key.pem and may be altered as necessary. For example:

[service]

verification_key_file = /path/to/custom/public.key

After making any modifications to the config.ini file, restart the vmms-agent service via SMF:

# svcadm restart vmms-agent

For already deployed containers, update the container configuration file (/etc/kolla/nova-compute-ldoms-N/nova.conf) to reflect these changes. Once updated, use docker to restart the container:

# docker restart nova_compute_ldoms_1
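If you generate your own custom RSA key pair, openssl can produce both halves. This is a minimal sketch, assuming PEM-format RSA keys as the default file names suggest; run it in a working directory and then copy the files into the locations configured above:

```shell
# Generate a custom 4096-bit RSA signing (private) key and extract
# its public half for use as the verification key on the compute node.
set -e
openssl genrsa -out signing_key.pem 4096
openssl rsa -in signing_key.pem -pubout -out verification_key.pem
```

The private key becomes the signing_key_file value in nova.conf, and the public key becomes the verification_key_file value in /etc/vmms/config.ini.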

ENCRYPTION (TLS)

Encrypted communications are enabled by default and are achieved by using X.509 TLSv1.2-compliant certificates signed by a root certificate authority (root CA) that is generated at install time. Because this CA is created at install time, the certificates are commonly referred to as self-signed.

Encrypted communications can be disabled by setting the Nova configuration file option as follows:

[ldoms]

use_tls = False

When you use the add-compute-node utility, a number of TLS configuration files are generated automatically.

In addition to modifying the Nova configuration file, ensure that the tls_key_file and tls_cert_file entries in the /etc/vmms/config.ini file on the Oracle SPARC Nova compute node are commented out:


[service]

#tls_key_file = /path/to/custom/private.key

#tls_cert_file = /path/to/custom/signed_certificate.crt

After commenting out these two entries, restart the vmms-agent service via SMF:

# svcadm restart vmms-agent

Each step and file is described below so that the files can alternatively be generated manually:

- A 4096-bit RSA private key, /etc/vmms/client/signing_key.pem, is generated by the add-compute-node utility. This key is used for authentication and is reused as the TLS private key on the control node.

The file can alternatively be manually generated by running the following command on the OpenStack control node:

# /usr/bin/openssl genrsa -out /etc/vmms/client/signing_key.pem 4096

- A default root certificate authority, /etc/vmms/client/rootCA.pem, is generated on the control node. It is generated by using the openssl req command with the Common Name (CN) set to the IP address of the control node.

The file can also be manually generated by running the following command on the OpenStack control node:

# /usr/bin/openssl req -x509 -new -nodes -sha256 -days 3650 -key /etc/vmms/client/signing_key.pem -out /etc/vmms/client/rootCA.pem -subj "/CN=192.168.1.1"

NOTE – Replace 192.168.1.1 with the IP address of your OpenStack Controller Node.

- A 4096-bit private TLS key (/etc/vmms/tls.key) is generated on the SPARC compute node.

The file can be manually generated by running the following command on the SPARC compute node:

# /usr/bin/openssl genrsa -out /etc/vmms/tls.key 4096

- A Certificate Signing Request (/etc/vmms/tls.csr) is generated on the SPARC compute node by using the private key (/etc/vmms/tls.key). The Common Name (CN) value is set to the IP address of the compute node.

The file can be manually generated by running the following command on the SPARC compute node:

# /usr/bin/openssl req -new -key /etc/vmms/tls.key -out /etc/vmms/tls.csr -subj "/CN=192.168.1.2"

NOTE – Replace 192.168.1.2 with the IP address of your OpenStack compute node


- The Certificate Signing Request (/etc/vmms/tls.csr) is copied to the control node, where a signed TLS certificate will be generated.

If using a corporate CA, you should submit the CSR to your security team for signing.

- Because the Common Name (CN) field is deprecated, a temporary configuration file (/etc/vmms/tls.ext) is generated. This file maps the CN to a Subject Alternative Name (SAN).

The file can be manually created with the following content on the OpenStack control node:

# cat /etc/vmms/client/tls.ext

[req]

x509_extensions = x509_req

[x509_req]

basicConstraints = CA:FALSE

keyUsage = nonRepudiation, digitalSignature, keyEncipherment, dataEncipherment

subjectAltName = @alt_names

subjectKeyIdentifier = hash

authorityKeyIdentifier = keyid,issuer

[alt_names]

IP = 192.168.1.1

NOTE – Replace 192.168.1.1 with the IP address of your OpenStack Controller Node

- The certificate is signed by the CA.

If using a corporate CA, this stage would be performed by your security team.

The temporary openssl configuration file (/etc/vmms/tls.ext) is used with the root CA private key (/etc/vmms/client/signing_key.pem) and root certificate file (/etc/vmms/client/rootCA.pem) to sign the request from the compute node (/etc/vmms/tls.csr). The signed TLS certificate file is stored as /etc/vmms/tls.crt.

The root certificate serial file (/etc/vmms/client/rootCA.srl) is used for counting the number of certificates signed by the root certificate file (/etc/vmms/client/rootCA.pem). The root certificate serial file is generated by using the -CAcreateserial option of the openssl x509 command; subsequent signing calls use the -CAserial option.

From the OpenStack control node, run the following:


# /usr/bin/openssl x509 -req -days 1095 -sha256 -CAserial /etc/vmms/client/rootCA.srl \
    -in /etc/vmms/client/tls.csr -CA /etc/vmms/client/rootCA.pem \
    -CAkey /etc/vmms/client/signing_key.pem -out /etc/vmms/tls.crt \
    -extfile /etc/vmms/tls.ext -extensions x509_req

- The signed TLS certificate (/etc/vmms/client/tls.crt) is transferred to the compute node and stored as /etc/vmms/tls.crt, ready for use.
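The manual steps above can be rehearsed end to end in a scratch directory before any files are placed under /etc/vmms. This is a sketch using the placeholder IP addresses from this section; substitute the real paths and addresses for production use:

```shell
# Self-contained dry run of the manual key/CA/CSR/signing flow in a
# scratch directory. 192.168.1.1 stands in for the controller node IP
# and 192.168.1.2 for the compute node IP, as in the notes above.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Control node: RSA signing key and self-signed root CA
openssl genrsa -out signing_key.pem 4096
openssl req -x509 -new -nodes -sha256 -days 3650 \
    -key signing_key.pem -out rootCA.pem -subj "/CN=192.168.1.1"

# Compute node: private TLS key and certificate signing request
openssl genrsa -out tls.key 4096
openssl req -new -key tls.key -out tls.csr -subj "/CN=192.168.1.2"

# Temporary extensions file mapping the CN to a SAN
cat > tls.ext <<'EOF'
[req]
x509_extensions = x509_req
[x509_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
[alt_names]
IP = 192.168.1.1
EOF

# Sign the CSR; -CAcreateserial creates rootCA.srl on the first call
openssl x509 -req -days 1095 -sha256 -CAcreateserial \
    -in tls.csr -CA rootCA.pem -CAkey signing_key.pem \
    -out tls.crt -extfile tls.ext -extensions x509_req

# Confirm the certificate chains back to the root CA
openssl verify -CAfile rootCA.pem tls.crt
```

Note that this first signing uses -CAcreateserial; once rootCA.srl exists, subsequent signings use -CAserial as shown in the command above.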

You may configure the Oracle VM for SPARC OpenStack Nova Driver and Virtual Machine Management Service agent to consume your own custom (manually generated) TLS files as follows:

1. Oracle VM for SPARC OpenStack Nova Driver configuration
The root certificate authority (rootCA) is specified in the nova.conf configuration file. Each nova_compute_ldoms instance refers to an individual nova.conf file named /etc/kolla/config/nova/nova-compute-ldoms-N.conf.

[ldoms]

ca_bundle_path = /path/to/custom/rootCA.pem

2. Virtual Machine Management Service SPARC compute node configuration
The private TLS key and signed certificate are specified in the Virtual Machine Management Service configuration file /etc/vmms/config.ini as follows:

[service]

tls_key_file = /path/to/custom/private.key

tls_cert_file = /path/to/custom/signed_certificate.crt

After making any modifications to the config.ini file, restart the vmms-agent service via SMF:

# svcadm restart vmms-agent

For already deployed containers, update the container configuration file (/etc/kolla/nova-compute-ldoms-N/nova.conf) to reflect these changes. Once updated, use docker to restart the container:

# docker restart nova_compute_ldoms_1


Virtual Machine Management Service Security-Related Configuration Options

The Virtual Machine Management Service (VMMS) is used for communication between a controller node and a compute node.

Configure VMMS as follows:

- Use TLS v1.2 for all communications between the controller nodes and the compute nodes.
- Ensure that any VMMS communications with the compute node are on a private network.
- Use the provided configuration scripts to set up a root CA and use it for signing certificates. Alternatively, you can use your own CA and replace the certificates and CA bundle.

Security Warnings

NOVA SERIAL CONSOLE IS NOT SECURED END TO END

The OpenStack Queens release does not provide a mechanism to fully secure serial consoles. Therefore, the link between a compute node and each controller node that runs the nova-serialproxy service is not encrypted.

If you want to use the serial console capability, use a separate mechanism, such as IPsec, across the link to encrypt this traffic. Ensure that the link is not a public network link so that all serial console traffic remains on a private, controlled network.

ORACLE OPENSTACK ADMINISTRATION FOR ORACLE VM SERVER FOR SPARC


Adding a Compute Node By Using the add-compute-node Utility

The add-compute-node utility simplifies the tasks associated with adding SPARC nodes to your OpenStack cloud. This utility includes options to configure a TLS-secured channel between the controller and the compute node, install and configure the VMMS agent on the remote node, and start a local container that runs the nova-compute service.

Note - The add-compute-node utility requires the paramiko Python module. Ensure that you install paramiko, which is available in the ol7_openstack50 repository:

# yum install -y yum-utils

# yum-config-manager --enable ol7_openstack50

# yum install -y python-paramiko

The add-compute-node utility reads configuration data from a configuration file or from the command line. The value that you specify on the command line overrides the value in the configuration file.

add-compute-node [-h] [--agent-gid AGENT_GID] [--agent-uid AGENT_UID]
                 [--cdom-cores CDOM_CORES] [--cdom-ram CDOM_RAM]
                 [--config-file CONFIG_FILE] [--create-nova-instance]
                 [--disable-default-tls] [--disable-tls]
                 [--ldoms-vsw-net LDOMS_VSW_NET] [--ssh-key-file SSH_KEY_FILE]
                 [--ssh-password-file SSH_PASSWORD_FILE] [--ssh-port SSH_PORT]
                 [--ssh-timeout SSH_TIMEOUT] [--ssh-username SSH_USERNAME]
                 [--verify-default-rsa] [--verify-default-tls]
                 [--vmms-agent-ip VMMS_AGENT_IP]

-h, --help
    Show this help message and exit.

--config-file CONFIG_FILE
    Configuration file.

--vmms-agent-ip VMMS_AGENT_IP
    VMMS agent compute node IP. Maps to VMMS_AGENT_IP in add-compute.conf.

--ssh-username SSH_USERNAME
    SSH user name. Maps to SSH_USERNAME in add-compute.conf.

--ssh-password-file SSH_PASSWORD_FILE
    File containing the SSH password. Maps to SSH_PASSWORD_FILE in add-compute.conf.

--ssh-key-file SSH_KEY_FILE
    SSH key file. Maps to SSH_KEY_FILE in add-compute.conf.

--ssh-port SSH_PORT
    SSH port to use; the default is 22. Maps to SSH_PORT in add-compute.conf.

--ssh-timeout SSH_TIMEOUT
    SSH timeout in seconds; the default is 600. Maps to SSH_TIMEOUT in add-compute.conf.

--agent-gid AGENT_GID
    GID for group vmms; the default is 120. Maps to VMMS_GID in add-compute.conf.

--agent-uid AGENT_UID
    UID for user vmms; the default is 120. Maps to VMMS_UID in add-compute.conf.

--disable-tls
    TLS is enabled by default; specify this option to disable it. Maps to DISABLE_TLS in add-compute.conf.

--create-nova-instance
    Specifically configure a new nova-compute instance for this compute node. The default is to not configure one. Maps to CREATE_NOVA_INSTANCE in add-compute.conf.

--ldoms-vsw-net LDOMS_VSW_NET
    Default VSW NIC to use for primary-vsw on the control domain. Consumed only if the control domain is in factory-default mode; the default is net0. Maps to LDOMS_VSW_NET in add-compute.conf.

--cdom-cores CDOM_CORES
    Default number of cores allocated to the control domain. Consumed only if the control domain is in factory-default mode; the default is 1. Maps to CDOM_CORES in add-compute.conf.

--cdom-ram CDOM_RAM
    Default RAM in GB allocated to the control domain. Consumed only if the control domain is in factory-default mode; the default is 16. Maps to CDOM_RAM in add-compute.conf.

--verify-default-rsa
    Verify the default RSA configuration.

--verify-default-tls
    Verify that the default self-signed TLS configuration is enabled.

--disable-default-tls
    Verify that the TLS configuration is disabled.

For information about the configuration files, see the /opt/openstack-ldoms/etc/add-compute.conf.example example configuration file.

You can specify configuration options by using the add-compute-node command, by using a configuration file, or both. Configuration option precedence is as follows:

1. Command line
2. Specified configuration file (--config-file)
3. Default configuration file, /opt/openstack-ldoms/etc/add-compute.conf

You can enable or disable the default security options on a configured compute node by using the --verify-default-rsa, --verify-default-tls, and --disable-default-tls options of the add-compute-node utility. If you specify any of these options, only the security-related task or tasks are performed.

Note that the security-related options require that the agent IP address and SSH credentials be provided to the add-compute-node utility either as command-line options or in a configuration file.
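For example, a minimal configuration-file sketch supplying those values. The variable names follow the option-to-variable mapping listed above; the IP address, user name, and password-file path are placeholders:

```ini
VMMS_AGENT_IP=192.168.1.2
SSH_USERNAME=root
SSH_PASSWORD_FILE=/root/.compute-ssh-pass
```

With such a file in place, running add-compute-node --config-file with --verify-default-tls would perform only the TLS verification task.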

Adding Logical Domain Flavors

The add-O3L-ldom-flavors utility adds a set of basic flavors to help you get started with logical domains.

# /opt/openstack-ldoms/bin/add-O3L-ldom-flavors


Creating Basic Networks

The create-O3L-network utility enables you to create a basic Neutron network and matching subnetwork.

You must specify the location of the configuration file to use as an operand to the utility. The configuration file specifies the properties required to configure the networks.

Base your configuration file on the /opt/openstack-ldoms/etc/create-network.conf.example example configuration file.

You can create a flat or vlan network type.

1. (Optional) If you are creating a vlan network type, you must configure ML2 networking. Ensure that you set the network_vlan_ranges property in the /etc/kolla/neutron-server/ml2_conf.ini file, similar to the following example:

[ml2_type_vlan]

network_vlan_ranges = physnet:2:4000

If you modify the ml2_conf.ini file, restart the neutron-server container.

# docker restart neutron_server

2. Run the create-O3L-network utility.

This utility processes the configuration file by ensuring that the specified network and subnetwork names do not exist already and then creating the networks.

# /opt/openstack-ldoms/bin/create-O3L-network create-network.conf

Creating and Uploading Glance Images

This section describes the process to prepare a special-purpose logical domain for use as the source of a “golden OS image”.

As a word of caution, using a live, active, or production system for this process is strongly discouraged. Only use a system that is built to become a golden image and then discarded. The creation process is potentially destructive to the configuration of the source logical domain. If the host-ID of the source logical domain changes, the ldom-init utility might attempt to unconfigure or reconfigure some system configuration information such as network addresses, routes, host names, DNS entries, and so on. Such configuration changes may lead to the system not booting correctly.

After you install the ldom-init utility, shut down the domain to ensure that it is in a consistent state. When the domain is cleanly shutdown, you can capture a golden image of the source domain and redeploy it as an OpenStack guest domain.

The ldom-init utility starts automatically on the initial boot. The utility determines whether the host-ID of the current guest domain differs from the host-ID of the source domain where it was captured. If the host-ID values differ, ldom- init unconfigures the guest domain and then uses OpenStack metadata to reconfigure the logical domain's host name, IP addresses, subnet masks, default gateways, DNS name servers and so on.
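The first-boot decision described above can be sketched as follows. This is a simplified illustration, not the actual ldom-init implementation; the helper name is hypothetical.

```python
# Sketch: ldom-init reconfigures a guest only when the current host-ID
# differs from the one saved when the golden image was captured.
def needs_reconfigure(saved_hostid, current_hostid):
    # A differing host-ID means the image is now running as a newly
    # deployed guest, so host name, IP addresses, routes, DNS, and so
    # on must be rebuilt from the OpenStack metadata.
    return saved_hostid != current_hostid

print(needs_reconfigure("8649a2b1", "86c0ffee"))  # redeployed guest
print(needs_reconfigure("8649a2b1", "8649a2b1"))  # same source domain
```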


GOLDEN IMAGE LIMITATIONS

There are some known limitations and recommendations when creating and deploying golden images for Oracle OpenStack on Oracle VM Server for SPARC. The following list outlines these limitations:

- It is best when the root disk is at least eight Gbytes in size. Smaller disks can produce small cylinder sizes, which might limit the expansion potential of an image to 18.75 Gbytes.

- Ensure that you use a smaller root disk than the root disk size specified by any flavor you will use to deliver the captured golden image. Namely, if you plan to deliver VM root disks that are a minimum of 16 Gbytes in size, use an 8 Gbyte to 15 Gbyte root disk for your golden image.

- The ability to expand the root zpool depends on the disk layout. Slice expansion is attempted only for the largest slice that is followed immediately (contiguously) by free space. The contents of the slice are not considered when selecting which slice to expand.

- Linux for SPARC is not supported.

- You might receive a vfs_mountroot: cannot mount root error message or a similar panic on re-deployment if a LUN that has previously been used for ZFS has been overwritten with an image that you plan to capture. To avoid this issue, use empty LUNs, ZFS volumes, or files. However, if you cannot create an image in this way, ensure that the first and last ten MBytes of the disk have been cleared before installing the Oracle Solaris OS onto a LUN that you plan to capture for use as a golden image.

- You can use only one disk when constructing a golden image.

- By default, for server hardware that supports EFI GPT labels (T4 or newer), Oracle Solaris 11.1 or later creates an EFI GPT labeled OS root disk. For instructions for creating an SMI VTOC labeled OS root disk on these servers, to maintain backward compatibility with older servers, see the Oracle Solaris Boot Disk Compatibility Guide at https://docs.oracle.com/cd/E93612_01/html/E93617/bootdiskcompatibility.html.
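The slice-expansion rule above (expand only the largest slice that is immediately followed by free space) can be sketched as follows. This is an illustration of the stated rule, not the actual expansion code.

```python
# Sketch: pick the slice to expand. Each slice and each free-space
# region is (start_block, size_in_blocks). Only a slice whose end is
# contiguous with the start of a free region is a candidate, and the
# largest such slice wins; slice contents are never considered.
def pick_slice_to_expand(slices, free_regions):
    free_starts = {start for start, _ in free_regions}
    candidates = [(size, i) for i, (start, size) in enumerate(slices)
                  if start + size in free_starts]
    if not candidates:
        return None  # no slice is contiguous with free space
    return max(candidates)[1]  # index of the largest contiguous slice

# Slice 0 spans blocks 0..100; slice 1 spans 100..400 and is followed
# contiguously by free space starting at block 400.
slices = [(0, 100), (100, 300)]
free = [(400, 600)]
print(pick_slice_to_expand(slices, free))  # slice 1 is expanded
```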

CREATING A GOLDEN OS IMAGE FOR GLANCE

1. Create a guest domain where any network interfaces have a static IP address configuration. Refer to the “Setting Up Guest Domains” section in this document.

2. Obtain the ldom-init ISO image. The ldom-init ISO image is delivered as part of the ldoms-openstack-nova tarball. Refer to the “Obtaining Required Software” section in the Oracle VM Server for SPARC OpenStack Quick Start Guide. Once you have the tarball, extract its contents as follows:

# cd /; tar -xvf /var/tmp/ldoms-openstack-nova-remote-3.0.0.1.tar.gz

3. Attach the ldom-init ISO image to your guest domain. From the control domain, run the following commands:

# ldm add-vdsdev options=ro,slice /opt/openstack-ldoms/ldom-init/ldom-init-2.0.0.5.iso ldom-init@primary-vds0

# ldm add-vdisk ldom-init ldom-init@primary-vds0 your-new-ldom


4. Mount the image in the guest domain. For Oracle Solaris 11 Operating Systems, run the following:

# mount -F hsfs /dev/dsk/c1d1s0 /mnt

For Oracle Solaris 10 Operating Systems, run the following:

# mount -F hsfs /dev/dsk/c0d1s0 /mnt

5. Install the ldom-init utility package. The ldom-init utility mounts the OpenStack configuration drive when a new guest boots, and executes the re-initialization instructions based on the guest's metadata on the configuration drive.

# /mnt/setup

6. Perform a clean shutdown of the Oracle Solaris Operating System running on the guest domain.

# shutdown -i5 -g0 -y

7. Determine the disk back-end volume of the guest domain.

# ldm list -o disk primary | grep myldom-vol0

myldom-vol0 /dev/zvol/dsk/ldompool/myldom-vol0

8. Capture the disk image to a file. Even if the guest domain back-end volume is a block device (/dev/dsk), the gdd command requires the corresponding character device (/dev/rdsk) for the input file. Also, use the appropriate whole-disk link that ends in dNs2 (slice 2) for devices that have a VTOC label, and dN (the disk number) for devices that have an EFI label. In the following example, the myldom-vol0 disk volume is the input file and the output file is the sol11_3s12_ldom-init.img image.

# gdd if=/dev/zvol/rdsk/ldompool/myldom-vol0 of=sol11_3s12_ldom-init.img bs=1048576 oflag=nocache conv=sparse
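The device-path rule in step 8 (character device; s2 for VTOC-labeled disks, the bare disk number for EFI-labeled disks) can be sketched as follows. The helper name is hypothetical; it simply encodes the rule stated above.

```python
# Sketch: build the character (raw) whole-disk path that gdd expects,
# based on the disk's label type.
def raw_disk_path(disk, label):
    # /dev/dsk (block) becomes /dev/rdsk (character); whole-disk
    # access is slice 2 for VTOC labels and the bare dN link for EFI.
    base = "/dev/rdsk/" + disk
    if label == "VTOC":
        return base + "s2"
    if label == "EFI":
        return base
    raise ValueError("unknown label type: " + label)

print(raw_disk_path("c0t5000CCA0ABCD1234d0", "VTOC"))
# -> /dev/rdsk/c0t5000CCA0ABCD1234d0s2
```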


UNINSTALLING THE LDOM-INIT UTILITY

To uninstall the ldom-init utility, perform the following:

1. Remove the ldom-init package.

For Oracle Solaris 11:

# pkg uninstall ldom-init

For Oracle Solaris 10:

# pkgrm ldom-init

2. Remove the saved hostid file and directory. Run the following commands:

# rm /etc/ldom-init/saved_hostid

# rmdir /etc/ldom-init

How to Create a WAN Boot Configuration Image for Glance

The Oracle VM Server for SPARC driver supports only WAN boot for network booting.

To use WAN boot to install a machine, you must create a WAN boot configuration image by using the mkwanbootcfg utility. This utility is included in the Nova driver package.

/opt/openstack-ldoms/bin/mkwanbootcfg [-h] --output-file filename --url WANboot-file

[--client-id WANboot-client-ID] [--hostname WANboot-hostname]

[--http-proxy WANboot-proxy] [--tftp-retries WANboot-TFTP-retries]

[--overwrite] [--version]

Only the --output-file and --url options are required.

CREATE A WAN BOOT CONFIGURATION IMAGE FOR GLANCE.

On the Nova node, run the following command to create a WAN boot image for Glance:

# /opt/openstack-ldoms/bin/mkwanbootcfg --output-file /var/tmp/s11_wanboot.img --url http://10.0.241.223:5555/cgi-bin/wanboot-cgi

The /var/tmp/s11_wanboot.img WAN boot configuration image is now available to import into Glance.

To use the ldom-init tool to configure a WANBoot guest, the appropriate ldom-init package (found in the 'pkgs' directory on the ldom-init ISO) needs to be made available via the WANBoot infrastructure, and added to the appropriate JumpStart profile or Automated Installer (AI) manifest. (The general setup and configuration of a WANBoot infrastructure for Oracle Solaris 11 or Oracle Solaris 10 is not covered by this document.)

1. Obtain the ldom-init ISO image.

The ldom-init ISO image is delivered as part of the ldoms-openstack-nova tarball. See the “Obtaining Required Software” section.

2. Mount the image.

a. Oracle Solaris 11 OS:

# mount -F hsfs /dev/dsk/c1d1s0 /mnt

b. Oracle Solaris 10 OS:

# mount -F hsfs /dev/dsk/c0d1s0 /mnt

3. Select the correct ldom-init package from the 'pkgs' directory on the ldom-init ISO image.

a. Oracle Solaris 11 OS: /mnt/ldom-init.p5p

b. Oracle Solaris 10 OS: /mnt/ldom-init.pkg

4. Make the package available via the WANBoot infrastructure.

a. Oracle Solaris 11 OS: Upload the ldom-init package to an appropriate IPS repository.

b. Oracle Solaris 10 OS: Make the package available via the WANBoot web server, in an appropriate location, along with the flash archive(s) and so on.

5. Add the package to the appropriate profile in the WANBoot infrastructure.

a. Oracle Solaris 11 OS: Add the repository origin URL where the package was uploaded above, using a <publisher> element with the name="ovm" attribute, to the <source> element of the appropriate Automated Installer (AI) manifest, and add a <name> element line similar to the following to the appropriate <software_data> element with the action="install" attribute in the same AI manifest:

pkg://ovm/openstack/ldoms/ldom-init

b. Oracle Solaris 10 OS: Add a line similar to the following to the appropriate profile:


package ldom-init add http://wanbootwebserver/path/ldom-init.pkg timeout 5

IMPORTING AN IMAGE INTO GLANCE

Use the import-O3L-glance-image utility to import images into Glance on an Oracle OpenStack deployment on Oracle Linux.

# /opt/openstack-ldoms/bin/import-O3L-glance-image

Usage : import-O3L-glance-image image-file image-name image-type

image-file : Full path to the image file to be uploaded

image-name : Name to give the image in Glance

image-type : Image type; valid values are 'raw' or 'iso'

e.g.

import-O3L-glance-image /tmp/s11-u3.iso S11-ISO ISO

The following example command, run on the Nova compute node, imports the /tmp/s11-u3.iso image file to Glance and names it S11-ISO. The image type is ISO:

# /opt/openstack-ldoms/bin/import-O3L-glance-image /tmp/s11-u3.iso S11-ISO ISO

MANUALLY UPLOAD AN IMAGE TO GLANCE ON THE CLOUD CONTROLLER

1. As a superuser on the cloud controller system, source the .profile file.

# . ~/.profile

2. Then, from the cloud controller system, upload the golden image.

# KEYSTONE_PASSWORD=`sudo grep keystone_admin_password /etc/kolla/passwords.yml | cut -f 2 -d' '`

# AUTH_URL=`sudo grep "^auth_url = " /etc/kolla/nova-conductor/nova.conf | head -1 | sed 's/ = /=/' | cut -f2 -d'='`

# openstack \

> --os-interface internal \

> --os-auth-url ${AUTH_URL} \


> --os-identity-api-version 3 \

> --os-project-domain-name default \

> --os-tenant-name admin \

> --os-username admin \

> --os-password ${KEYSTONE_PASSWORD} \

> --os-user-domain-name default \

> image create \

> --container-format bare \

> --disk-format raw \

> --public \

> --property architecture=sparc64 \

> --property cpu_arch=sparc64 \

> --property hw_architecture=sparc64 \

> --property hypervisor=ldoms \

> --property vm_mode=ldoms \

> --file /var/tmp/Oracle-Solaris-OS-version-name.img \

> "LDOM: Solaris 11.4"

Running OpenStack Commands

The OpenStack client CLI is not installed on the container host; it is available only within the various containers. The run-O3L-openstack-cli convenience script enables you to run the OpenStack CLI from the container host.

You can use this utility to perform arbitrary OpenStack CLI commands.

It extracts the keystone_admin_password from the /etc/kolla/passwords.yml file and executes the supplied OpenStack command in the kolla_toolbox container.
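The password extraction the script performs is equivalent to the following sketch. This is a simplified stand-in for the script's actual parsing, operating on sample text rather than the real /etc/kolla/passwords.yml file.

```python
# Sketch: pull keystone_admin_password out of passwords.yml content,
# mirroring `grep keystone_admin_password ... | cut -f 2 -d' '`.
def keystone_admin_password(passwords_yml_text):
    for line in passwords_yml_text.splitlines():
        if line.startswith("keystone_admin_password:"):
            return line.split(":", 1)[1].strip()
    return None  # key not present

sample = "database_password: abc123\nkeystone_admin_password: s3cret\n"
print(keystone_admin_password(sample))  # -> s3cret
```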


The following example command obtains a list of networks:

# /opt/openstack-ldoms/bin/run-O3L-openstack-cli network list

+--------------------------------------+-----------+--------------------------------------+

| ID                                   | Name      | Subnets                              |

+--------------------------------------+-----------+--------------------------------------+

| 6275fd59-6022-4a5a-ab71-fc848e8a00f4 | demo-flat | 0fcb0c59-683f-4ef8-88f1-674a7ee4f11a |

+--------------------------------------+-----------+--------------------------------------+

You can also use the utility to upload Glance images manually. For example:

# /opt/openstack-ldoms/bin/run-O3L-openstack-cli image create \

> --container-format bare \

> --disk-format raw \

> --public \

> --property architecture=sparc64 \

> --property cpu_arch=sparc64 \

> --property hw_architecture=sparc64 \

> --property hypervisor=ldoms \

> --property vm_mode=ldoms \

> --file /var/tmp/Oracle-Solaris-OS-version-name.img \

> "LDOM: Solaris 11.4"

Configuring a Demonstration Environment

For information about using the setup-O3L-openstack-ldoms utility to configure a demonstration environment, refer to the Oracle VM Server for SPARC for Oracle OpenStack – Non-Production Quick Start Guide.

Verifying That a Remote Agent Is Functional

The verify-O3L-compute-node utility enables you to determine whether a VMMS Nova client can connect to a VMMS agent. This utility requires the IP address of the compute node where the VMMS agent runs.


This utility attempts to extract the required connection properties from the nova.conf configuration file of the Nova instance for the compute node IP address provided.

# /opt/openstack-ldoms/bin/verify-O3L-compute-node 10.169.122.145

Connecting with parameters :

Host : 10.169.122.145

Port : 8100

Signing Key : /etc/vmms/client/signing_key.pem

Use TLS : True

CA Bundle : /etc/vmms/client/rootCA.pem

Successfully connected to agent

Log Files

Log files are important for resolving issues that are encountered. This section identifies the location of the log files for each of the Oracle OpenStack components related to Oracle VM Server for SPARC for Oracle OpenStack.

NOVA COMPUTE SERVICE

For Oracle OpenStack for Linux 5.0 (O3L) distributions:

The Oracle OpenStack on Oracle Linux 5.0 Nova compute logs are stored in the /var/lib/docker/volumes/kolla_logs/_data/nova/ directory. Each entry has a number for the compute instance, and the IP address or host name is appended to the log file name. For example:

/var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-10.145.13.127.log

For generic OpenStack distributions:

You can specify the log file in the Nova configuration file for the compute instance, for example:

[DEFAULT]

log_file = /var/log/nova/nova-cpu-ca-ldom100.log

VIRTUAL MACHINE MANAGEMENT SERVICE (VMMS) AGENT LOG

The VMMS agent log defaults to /var/log/vmms/vmms.log on the SPARC compute node(s).

The VMMS agent can be placed in debug mode using the SMF property:


# svccfg -s svc:/application/vmms-agent:default setprop config/debug = true

# svcadm refresh svc:/application/vmms-agent:default

# svcadm restart svc:/application/vmms-agent:default

Configuration Files

Several configuration files can be modified for OpenStack. This section covers their location, function, and details of the modifications that can be made to them.

NOVA.CONF CONFIGURATION FILE OPTIONS

This section covers the nova.conf configuration file for Oracle OpenStack and generic OpenStack, plus Oracle VM Server for SPARC specific configuration details.

ORACLE OPENSTACK 5.0

All nova.conf configuration files are in the /etc/kolla directory. For example, the configuration file for the first configured Oracle VM Server for SPARC compute instance is the /etc/kolla/nova-compute-ldoms-1/nova.conf file.

GENERIC OPENSTACK

The generic OpenStack Nova compute configuration file is /etc/nova/nova-cpu.conf. However, because there can be more than one instance of the Oracle VM Server for SPARC OpenStack Nova driver running on a single node, a nova-cpu.conf file is required for each Nova compute instance. To uniquely identify the nova-cpu.conf file associated with a SPARC compute node, the IP address of the compute node is appended to the configuration file name. For example, for a compute node with the IP address 192.168.1.5, the configuration file would be /etc/nova/nova-cpu-192.168.1.5.conf.

GENERIC OPTIONS FOR NOVA

For information about the generic Nova options, see the OpenStack Queens Release documentation at https://docs.openstack.org/nova/queens/configuration/config.html. Please note: the Nova remote driver relies on specific configuration in keystone_auth being set correctly, to ensure certain functionality:

[keystone_auth]

auth_url =

username =

password =

project_name =

user_domain_name =

project_domain_name =

ORACLE VM SERVER FOR SPARC SPECIFIC CONFIGURATION

The nova.conf file includes the following [ldoms] and [ldomsvsw] sections, with example values below, to help describe the properties of the Oracle VM Server for SPARC OpenStack 3.0 driver:

[ldoms]

agent_host = 172.16.40.53

agent_port = 8100

signing_key_file = /path/to/rsa/private.key

use_tls = True


ca_bundle_path = /path/to/ca_bundle

wanboot_cleanup_interval = 60

configure_all_routes = False

admin_user = root

permit_root_logins = False

glancecache_dirname = /path/to/glance/cache

config_drive_path = $instances_path

[ldomsvsw]

physical_vsw_mapping=physnet1:primary-vsw0,physnet2:primary-vsw1,physnet3:primary-vsw2

default_vsw=primary-vsw0

physical_network_mtus=physnet1:1500,physnet2:9000

default_mtu = 1500

netboot_segments=1,50,200

[ldoms] section

agent_host = # this is the IP address or host name where the VMMS agent will run, i.e., the remote SPARC compute node

agent_port = # this is the port the VMMS agent is running on. Should be 8100 unless configured otherwise in your environment.

signing_key_file = # this defines the private RSA key used to sign messages from this node that are sent to the VMMS agent

use_tls = # this defines if a TLS channel should be used (highly recommended this is set to True)

ca_bundle_path = # this defines where the trusted CA bundle is kept for verification of the TLS certificates

wanboot_cleanup_interval = # how often we should attempt to clean up any old boot configuration from Logical Domains that have completed WANboot (i.e., installation has completed). Value is in seconds.

configure_all_routes = # this defines if the Logical Domain guest should attempt to configure all default routes (not recommended in most cases) when multiple networks are added. Generally, only the first interface's default is used, otherwise network issues may ensue.

admin_user = # what user should be used if an SSH public key is injected

permit_root_logins = # Configure Oracle Solaris 10 or Oracle Solaris 11 to allow direct root logins via SSH, which may be beneficial in some specific cases, but should not be used in general for security reasons


glancecache_dirname = # where to store cached Glance images

config_drive_path = # where to store the configuration drive, used to configure the guest

[ldomsvsw] section

physical_vsw_mapping = # list of 'Physical network' names to Logical Domains virtual switch names. This allows for placement of vlans onto the chosen physical device (as defined by the Oracle VM Server for SPARC VSW's net-dev property on the SPARC compute node).

default_vsw = # If a 'Physical network' to Logical Domains virtual switch mapping is not defined, default to this virtual switch.

physical_network_mtus = # this defines a mapping of the 'Physical network' name to an MTU size. Note the Nova driver will confirm the matching VSW on the compute node supports the MTU sizes specified.

default_mtu = # when no MTU is specified for a 'Physical network', use this one instead

netboot_segments = # list of vlan segments that have WANboot servers on them. This allows an end user to pick several networks when using WANboot, without the first device being required to be on a boot network. The first vnet configured that is on a vlan that is on this list, will be used for the network boot.
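The [ldomsvsw] lookups described above can be sketched together as follows. This is illustrative only, using the example values from the configuration above; the helper names are hypothetical, not the driver's actual functions.

```python
# Sketch: resolve a 'Physical network' name to a virtual switch with
# the default_vsw fallback, and pick the boot vnet: the first vnet
# whose vlan segment appears in netboot_segments.
physical_vsw_mapping = {
    "physnet1": "primary-vsw0",
    "physnet2": "primary-vsw1",
    "physnet3": "primary-vsw2",
}
default_vsw = "primary-vsw0"
netboot_segments = {1, 50, 200}

def vsw_for(physnet):
    # Unmapped physical networks fall back to the default switch.
    return physical_vsw_mapping.get(physnet, default_vsw)

def boot_vnet(vnets):
    # vnets is an ordered list of (vnet_name, vlan_segment); the first
    # one on a netboot segment is used for the network boot.
    for name, segment in vnets:
        if segment in netboot_segments:
            return name
    return None

print(vsw_for("physnet2"))                         # primary-vsw1
print(vsw_for("physnet9"))                         # falls back to primary-vsw0
print(boot_vnet([("vnet0", 300), ("vnet1", 50)]))  # vnet1 boots via WANboot
```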

ml2_conf.ini (Neutron ML2) Configuration Options

This driver expects Neutron to be configured to use 'ML2' (https://wiki.openstack.org/wiki/Neutron/ML2), and certain configuration settings in the "ml2_conf.ini" file (/etc/neutron/plugins/ml2/ml2_conf.ini) in particular are important to note.

Note that 'physnet1, physnet2, ..., physnetN' are arbitrary names that are used to map a logical name to a physical network. In the context of the Logical Domains Nova driver, these are mapped to virtual switches (which ultimately map to a physical device or link aggregate). These mappings can be interpreted differently from host to host, which may be useful on systems where there is not a common physical device or VSW name used across systems, to connect to certain VLANs.

For example, physnet1 may map to primary-vsw0 on a given Oracle VM Server for SPARC host, and primary-vsw0 may have a 'net-dev' set on it, of net0, or aggr0, or similar. On a second host, physnet1 may map to primary-vsw1, which might have a net-dev of 'net2'. Although consistency is generally a good thing, in some environments, these mappings may be inconsistent from host to host, and the flexibility in ML2 and the Logical Domains Nova driver exists, that allows for that.

It is also worth noting that when defining VLANs, a "Physical Network" must be defined. This refers to 'physnet1' (or whatever names are chosen). If these are not defined correctly within the ml2_conf.ini file, you will likely experience failures when creating vlan-based networks in OpenStack.

The following configuration items and sections are relevant and explained below:

ml2 section

This section is where you need to, at minimum, specify the mechanism_drivers and tenant_network_types.

[ml2]

tenant_network_types = vlan

mechanism_drivers = openvswitch


The Nova driver only supports vlan and flat networks. VxLAN and GRE tunnels are not supported by this driver. We suggest you at least add 'vlan' to the tenant_network_types. You may define others, but attempts to build Logical Domains against unsupported types will fail.

We suggest you use the 'openvswitch' mechanism driver. Logical Domains does not use openvswitch, however the Nova driver will operate with this mechanism_driver in place.

ml2_type_flat section

This section requires a list of devices that support flat networks. Typically this is only one 'Physical network', and defining more than one may not have an effect.

[ml2_type_flat]

flat_networks = physnet1

ml2_type_vlan section

This section defines what VLANs a 'Physical network' supports. You can define simply the name (which implies all), or a range for each device. These can overlap, but when a 'Physical network' is selected to create a network, it is this section of the configuration that will be checked to verify that the selected VLAN number (referred to as the segmentation ID in OpenStack) is in fact valid to use with that 'Physical network'.

The format is 'Physical network name':'Start of supported vlans':'end of supported vlans'. The last two components are optional.

[ml2_type_vlan]

network_vlan_ranges = public:2:500, physnet1:2:3000, physnet2:2:4000, physnet3:900:1000
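The check described above (verifying that a requested segmentation ID is valid for the selected 'Physical network') can be sketched as a small validator. This is illustrative only, not Neutron's actual implementation.

```python
# Sketch: parse network_vlan_ranges entries of the form
# name[:start:end] and check a requested segmentation ID against them.
def parse_vlan_ranges(value):
    ranges = {}
    for entry in value.split(","):
        parts = entry.strip().split(":")
        name = parts[0]
        if len(parts) == 3:
            ranges[name] = (int(parts[1]), int(parts[2]))
        else:
            ranges[name] = None  # bare name: all VLANs implied
    return ranges

def vlan_allowed(ranges, physnet, segmentation_id):
    if physnet not in ranges:
        return False  # unknown 'Physical network'
    bounds = ranges[physnet]
    if bounds is None:
        return True
    low, high = bounds
    return low <= segmentation_id <= high

ranges = parse_vlan_ranges("public:2:500, physnet1:2:3000, physnet3:900:1000")
print(vlan_allowed(ranges, "physnet3", 950))  # within 900:1000
print(vlan_allowed(ranges, "physnet3", 200))  # outside the range
```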

Other configuration options

You can read more about ML2 configuration options here: https://wiki.openstack.org/wiki/Ml2_conf.ini_File

VIRTUAL MACHINE MANAGEMENT SERVICE AGENT CONFIGURATION

The Virtual Machine Management Service (VMMS) agent is configured automatically when you run the add-compute-node utility. The server-side VMMS agent uses the /etc/vmms/config.ini configuration file.

The [service] section defines the following values:

tls_key_file = filename specifies the private key for TLS encrypted communications.

tls_cert_file = filename specifies the certificate for TLS encrypted communications.


listen_address = IP-address specifies the IP address on which the service should listen.

Only run this service on an administrative or secured network that is separate from VM guests, storage, and other network traffic.

[service]

tls_key_file = /etc/vmms/tls.key

tls_cert_file = /etc/vmms/tls.crt

listen_address = 10.169.122.145

port = 8100

log_filename = /var/log/vmms/vmms.log

suriadm_retries = 6

suriadm_retry_delay = 30

verification_key_file = /etc/vmms/verification_key.pem # rsa public key to verify signed messages

log_maxsize = 1048576

log_maxfiles = 3

[storage]

volume_reserved = 500 # in megabyte units

volume_path = 'rpool/vmms/storage/volumes'

volume_fstype = ''

volume_sparse = True

[ldoms]

service_domains = ('primary',)

default_cpu_arch = 'native'

default_vds_service = 'primary-vds0'

default_vsw_service = 'primary-vsw0'


console_log_dir = '/var/log/vntsd'

hostid_derived_from_uuid = True

guest_config_data_cache_ttl = 60

update_spconfig_enable = True

update_spconfig_delay = 60

update_spconfig_name = 'vmms'

vntsd_proxy_listen_address = configobject.String # establish type is str, value triggers inheritance, see init()

vntsd_proxy_idle_timeout = 120

vntsd_proxy_ssl_cert_file = ''

vntsd_proxy_ssl_key_file = ''

vntsd_proxy_log_filename = '/var/log/vmms/vntsd-proxy.log'

vntsd_proxy_log_maxsize = 1048576

vntsd_proxy_log_maxfiles = 3

If any changes are made to the config.ini file, restart the vmms-agent service via SMF to apply them:

# svcadm restart vmms-agent

setup-openstack-ldoms.conf.example File

This section describes how to use the /opt/openstack-ldoms/etc/setup-openstack-ldoms.conf.example file.

Note, if you enable any of the following options, you must use the file specified by ADD_COMPUTE_CONFIG_FILE because it describes compute connectivity information. This compute connectivity information is required by the following entries: CONFIGURE_VMMS_AGENT, VERIFY_DEFAULT_RSA, VERIFY_DEFAULT_TLS, and DISABLE_TLS.

If you enable CONFIGURE_VMMS_AGENT, then the values of the following are ignored and those from ADD_COMPUTE_CONFIG_FILE are used instead: VERIFY_DEFAULT_RSA, VERIFY_DEFAULT_TLS, DISABLE_TLS.

The following are the contents of the /opt/openstack-ldoms/etc/setup-openstack-ldoms.conf.example file:

# What is this node's Fully Qualified Domain Name e.g. hostname.ie.oracle.com

NODE_FQDN=


# What is the IP for this node, e.g. hostname -i or ip addr show or ifconfig -a

NODE_IP=

# Primary NIC for this node e.g eth0 or enp0s3(VirtualBox)

NODE_NIC=

# Second NIC for Neutron e.g. eth1 or enp0s8(VirtualBox)

NEUTRON_EXTERNAL_NIC=

# What is the default docker registry to use ?

# Default value is "docker.io", most scenarios this will not need changing.

# If O3L containers are on some other registry please specify here.

DOCKER_REGISTRY=

# What is the default docker namespace to use ?

# Default value is "oracle", most scenarios this will not need changing

# If O3L containers are stored under a different namespace please specify here.

DOCKER_NAMESPACE=

# What is the default docker container prefix ?

# Default is officially delivered packages on docker.io, prefix is

# ${DOCKER_NAMESPACE}/ol-openstack e.g. oracle/ol-openstack

# If O3L containers being used have a different prefix please specify here.

DOCKER_CONTAINER_PREFIX=

# What is the default docker container tag ?

# Default is the officially delivered packages tag of "5.0" for OpenStack Queens

# In most cases this will not need to be changed.

DOCKER_CONTAINER_TAG=


# What is the docker nova-compute-ldoms container prefix ?

# Default if left blank presumes you are retrieving from docker.io as follows :

# ${DOCKER_NAMESPACE}/ol-ovm-sparc-openstack

# If nova-compute-ldoms container being used was downloaded from OTN then this

# should be set to : openstack-ldoms/ol-ovm-sparc-openstack

# If you have built nova container images using the delivered kolla build

# configuration files set value to openstack-ldoms/oraclelinux-source

DOCKER_NOVA_CONTAINER_PREFIX=

# What is the docker nova-compute-ldoms container tag ?

# Default if left blank is latest release of "3.0.0.1"

# In most cases this will not require setting unless you have either built

# a custom nova-compute-ldoms container with a different tag.

DOCKER_NOVA_CONTAINER_TAG=

# Is http_proxy required to get to outside world ?

# Default is no proxy, uncomment to set.

# PROXY=

# What domains do not need a proxy ?

# Default is nothing, uncomment to set.

# NO_PROXY=localhost,127.0.0.1

# Do you want to enable debug output to OpenStack service log files ?

# Default is False

DEBUG_LOG=False

# Remove docker images at start of run ? True or False

# This will include default docker.io images that were downloaded either


# by a previous run of setup-openstack-ldoms.sh or by some other means.

# It will also attempt to remove images identified by configuration

# variables LDOMS_NOVA_CONTAINER_PREFIX and LDOMS_NOVA_CONTAINER_TAG

# e.g. Default Docker Images :

# REPOSITORY = "oracle/ol-openstack-*

# TAG = "5.0.1"

# Custom configuration identified images :

# REPOSITORY = "${LDOMS_NOVA_CONTAINER_PREFIX}*"

# TAG = "${LDOMS_NOVA_CONTAINER_TAG}"

#

# Default is to not remove images

DOCKER_IMAGES_REMOVE=False

# Do you want to overwrite custom nova.conf file ?

# When configuring OL3 a custom nova.conf file is generated for kolla to

# overwrite certain values in nova-compute-ldoms-1 nova.conf.

# If this file already exists, user may not want to overwrite it.

# Specify True here to overwrite it. Default is to not overwrite.

OVERWRITE_CUSTOM_NOVA_CONF=False

# What is Virtual Machine Management Service(VMMS) Agent IP ?

VMMS_AGENT_IP=

# Do you want the VMMS Agent to be automatically configured on your remote

# compute node ?

# If yes, set CONFIGURE_VMMS_AGENT=True and provide a

# pointer to the add-compute-node.conf configuration file to use.

# Default is False

CONFIGURE_VMMS_AGENT=False


# Configuration file to be used for adding a new compute node for this setup.

# This file is used if CONFIGURE_VMMS_AGENT=True or

# if GENERATE_RSA_KEY_FILES=True and keys were generated

# In GENERATE_RSA_KEY_FILES scenario only used for SSH credentials to push

# verification key to host.

ADD_COMPUTE_CONFIG_FILE=

# Do you want to ensure default RSA security is configured ?

# RSA security is configured by default when adding a compute node

# (add-compute-node).

# This option is provided to ensure default RSA is configured on an already

# configured compute node.

# Default is False, if set to True will perform the following :

# - Check if default signing/verification keys exist, if one or both are missing

# remove stale pem files and re-generate new default ones.

# - Regardless of generation, distribute verification key to default location

# on compute node.

# - If signing_key_file option set in client config.ini, ensure it points to

# default RSA signing file

# - If verification_key_file option set in server config.ini, ensure it points

# to default RSA verification file

# - Restart vmms-agent SMF on compute node

# NOTE: If True, communication to Compute node is required via SSH. SSH

# Credentials should be provided via a add-compute.conf file

# (ADD_COMPUTE_CONFIG_FILE). ONLY SSH credentials and HOST IP are

# consumed from this file.

# If CONFIGURE_VMMS_AGENT=True, this option is ignored, as default RSA will be

# configured via that option.


VERIFY_DEFAULT_RSA=False

# Do you want to ensure default self-signed TLS is configured ?

# TLS is enabled by default when adding a compute node (add-compute-node),

# unless user specifically chose to disable via add-compute.conf option

# DISABLE_TLS or add-compute-node cli --disable-default-tls.

# This option is provided to ensure default TLS is enabled on an already

# configured compute node.

# Default is False, if set to True will perform the following :

# - Check if default self-signed TLS Certificate/Key exist on compute node.

# If one or both are missing remove stale key/cert and re-generate new set

# of self-signed key/cert files.

# - Ensure use_tls option is set to "True" in client config.ini

# - Ensure tls_key_file and tls_cert_file options in server config.ini are set

# to default values.

# - Restart vmms-agent SMF on compute node

# NOTE: If True, communication to Compute node is required via SSH. SSH

# Credentials should be provided via a add-compute.conf file

# (ADD_COMPUTE_CONFIG_FILE). ONLY SSH credentials and HOST IP are

# consumed from this file.

# If CONFIGURE_VMMS_AGENT=True, this option is ignored, as default TLS will be

# configured via that option.

VERIFY_DEFAULT_TLS=False

# Do you want to disable TLS ?

# As noted TLS is enabled by default, however if you want to specifically

# disable TLS for this setup, set this option to True.

# You cannot specify VERIFY_DEFAULT_TLS=True and DISABLE_TLS=True at the

# same time.


# Default is False. If set to True, the following is performed to disable TLS:

# - Ensure use_tls=False in client config.ini

# - Ensure tls_key_file and tls_cert_file options are removed from server

# config.ini file

# - Restart vmms-agent SMF on compute node

# NOTE: If True, communication to Compute node is required via SSH. SSH

# Credentials should be provided via a add-compute.conf file

# (ADD_COMPUTE_CONFIG_FILE). ONLY SSH credentials and HOST IP are

# consumed from this file.

# If CONFIGURE_VMMS_AGENT=True, this option is ignored, as disabling of

# TLS will be handed via add-compute-node's DISABLE_TLS configuration file

# option.

DISABLE_TLS=False

# Do you want to enable serial console ?

# Serial console access is not secure, so by default it is disabled.

# Set to True to enable serial console access whilst being aware of lack of

# security

ENABLE_SERIAL_CONSOLE=False

add-compute.conf File

The following are the contents of the /opt/openstack-ldoms/etc/add-compute.conf file:

# Options used by add-compute-node for configuring vmms-agent on compute node

# IP/FQDN of compute hostname/IP

VMMS_AGENT_IP=

# Username for SSH access to compute node


SSH_USERNAME=

# File containing Password for SSH access to compute node

SSH_PASSWORD_FILE=

# Port for SSH access to compute node

SSH_PORT=22

# SSH Key file to use for authorization

# Only supply if not providing SSH_PASSWORD_FILE

SSH_KEY_FILE=

# Timeout for SSH actions, default 10 minutes(600 Seconds)

SSH_TIMEOUT=600

# GID and UID for vmms user on VMMS_AGENT_HOST (Compute node), defaulting to 120 for both

VMMS_GID=120

VMMS_UID=120

# TLS security settings are enabled by default.

# To specifically disable set DISABLE_TLS to True

DISABLE_TLS=False

# Create a new nova compute instance for this compute node

# Default is to not create a nova instance

CREATE_NOVA_INSTANCE=False

# The following three entries are only consumed if compute host control domain

# is in factory-default mode


# What NIC to use for primary VSW on compute node

LDOMS_VSW_NET=net0

# Cores allocated to the control domain:

CDOM_CORES=1

# RAM (in Gigabytes) allocated to the control domain:

CDOM_RAM=16

create-network.conf.example File

The following are the contents of the /opt/openstack-ldoms/etc/create-network.conf.example file:

# Specify the name of your network

NETWORK_NAME=demo-network

# Choose the network type to create. Two valid types "vlan" or "flat"

NETWORK_TYPE=flat

# Specify the name of your subnet

SUBNET_NAME=demo-subnet

# Specify the subnet network address in cidr format e.g. 10.169.122.128/25

SUBNET_CIDR=

# Specify the gateway IP address e.g. 10.169.122.129

SUBNET_GATEWAY_IP=

# Specify starting IP address of your allocation pool range e.g. 10.169.122.180


SUBNET_START_IP=

# Specify ending IP address of your allocation pool range e.g. 10.169.122.190

SUBNET_END_IP=

# Specify the IP Address of the primary DNS Server e.g. 10.169.123.17

SUBNET_DNS_IP=
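For reference, the following is a hedged sketch of the OpenStack CLI calls that correspond to these settings. The demonstration values and the physnet1 physical network name are assumptions, and the create-o3l-network utility's actual invocation may differ; the commands are echoed here for review rather than executed.

```shell
# Demonstration values assumed; substitute your own network details
NETWORK_NAME=demo-network
NETWORK_TYPE=flat
SUBNET_NAME=demo-subnet
SUBNET_CIDR=10.169.122.128/25
SUBNET_GATEWAY_IP=10.169.122.129
SUBNET_START_IP=10.169.122.180
SUBNET_END_IP=10.169.122.190
SUBNET_DNS_IP=10.169.123.17

# Compose the equivalent OpenStack CLI calls; physnet1 is an assumed
# physical network name. Echoed for review; remove 'echo' to run them.
NET_CMD="openstack network create --provider-network-type $NETWORK_TYPE --provider-physical-network physnet1 $NETWORK_NAME"
SUBNET_CMD="openstack subnet create --network $NETWORK_NAME --subnet-range $SUBNET_CIDR --gateway $SUBNET_GATEWAY_IP --allocation-pool start=$SUBNET_START_IP,end=$SUBNET_END_IP --dns-nameserver $SUBNET_DNS_IP $SUBNET_NAME"
echo "$NET_CMD"
echo "$SUBNET_CMD"
```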


TROUBLESHOOTING ORACLE VM SERVER FOR SPARC FOR ORACLE OPENSTACK

Using Log Files

LOCATING LOG FILES

This section details where the required log files are located and how they are named.

. Nova Compute Service: Oracle OpenStack on Oracle Linux 5.0 (O3L) Nova compute logs

These logs are stored in the /var/lib/docker/volumes/kolla_logs/_data/nova/ directory. The log file name is of the form nova-compute-ldoms-<instance number>-<IP address or hostname>.log: each file name includes the number of the compute instance, with the IP address or hostname of the compute node appended. For example:

/var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-10.145.13.127.log

. Generic OpenStack Distribution

You can specify the location of the log file in the Nova configuration file for the compute instance. The default location is /var/log/nova/nova-cpu.log; however, because you can run more than one Oracle VM Server for SPARC Nova compute service on each node, it is recommended to append the IP address of the compute node to the log file name. Debug and verbose outputs can be set to help with troubleshooting. You can specify all three as follows:

[DEFAULT]

log_file = /var/log/nova/nova-cpu-10.145.13.127.log

debug = True

verbose = True

. Remote (SPARC) Compute Node VMMS Service Agent Log

The VMMS agent log output defaults to /var/log/vmms/vmms.log on all SPARC compute node(s).


UNDERSTANDING LOG OUTPUT FROM THE NOVA DRIVER

The Oracle VM Server for SPARC OpenStack Nova driver provides different levels of detail based on the debug and verbose property values in the nova.conf configuration file.

To improve the troubleshooting of an Oracle VM Server for SPARC OpenStack issue, enable verbose and debug output by setting the verbose and debug properties in the relevant nova.conf file as follows:

[DEFAULT]

debug = True

verbose = True

When in debug mode, the Oracle VM Server for SPARC Nova driver provides trace messages to help identify precisely the method being run at any given time. These messages can help you identify the cause of an issue that you encounter. The nova-compute service also provides trace messages every time an unhandled exception occurs.

Perform the following steps to troubleshoot an issue.

1. Search for a method entry or a method return in the nova-compute instance log file. To identify the log file, see the Locating Log Files section in this document. To find a method entry, search for run_method: in the nova-compute service log file. To find a method return, search for method_return: in the nova-compute service log.

2. View other log messages from the driver by searching for messages that begin with one of the following words: DEBUG, WARNING, EXCEPTION, or INFO.

Note that each Nova driver log entry begins with a line similar to the following:

2018-07-07 15:14:51.404 29186 DEBUG nova.virt.ldoms.driver

3. View trace messages to identify the root cause of a problem by searching for TRACE. The nova-compute service provides trace messages when an unhandled exception occurs.

Ensure that the driver starts as expected. When you start the nova-compute service, verify that lines similar to the following appear in the log:

INFO nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] *****Logical Domains Nova Driver (Remote) 3.0 Initialisation******

INFO nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] * OpenStack/LDoms Nova driver is starting *

INFO nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] ******************************************************************


DEBUG nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] Configuring nova.conf overrides {{(pid=2535) __init__ /opt/stack/nova/nova/virt/ldoms/driver.py:102}}

DEBUG nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] svc_config.client.host: 10.129.68.20 {{(pid=2535) __init__ /opt/stack/nova/nova/virt/ldoms/driver.py:122}}

DEBUG nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] svc_config.client.port: 8100 {{(pid=2535) __init__ /opt/stack/nova/nova/virt/ldoms/driver.py:125}}

DEBUG nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] svc_config.client.signing_key_file: /etc/vmms/keys/signing_key.pem {{(pid=2535) __init__ /opt/stack/nova/nova/virt/ldoms/driver.py:128}}

DEBUG nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] svc_config.client.ca_bundle_file: /etc/vmms/ca_bundle/rootCA.pem {{(pid=2535) __init__ /opt/stack/nova/nova/virt/ldoms/driver.py:132}}

DEBUG nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] Configuring remote agent {{(pid=2535) __init__ /opt/stack/nova/nova/virt/ldoms/driver.py:134}}

DEBUG nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] Getting initial LDoms state information {{(pid=2535) __init__ /opt/stack/nova/nova/virt/ldoms/driver.py:136}}

DEBUG nova.virt.ldoms.host [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] method_run: update_status() {{(pid=2535) update_status /opt/stack/nova/nova/virt/ldoms/host.py:61}}

.

.

.

INFO nova.virt.ldoms.driver [None req-2590a407-68f3-480c-b8f4-c74e6d39d6da None None] Logical Domains Nova Driver (Remote) 3.0 has been initialised.

.

.

.

DEBUG nova.service [None req-ea73264b-0906-438a-9e39-5cc61cb6ac20 None None] Creating RPC server for service compute {{(pid=7227) start /opt/stack/nova/nova/service.py:184}}

DEBUG nova.service [None req-ea73264b-0906-438a-9e39-5cc61cb6ac20 None None] Join ServiceGroup membership for this service compute {{(pid=7227) start /opt/stack/nova/nova/service.py:202}}


DEBUG nova.servicegroup.drivers.db [None req-ea73264b-0906-438a-9e39-5cc61cb6ac20 None None] DB_Driver: join new ServiceGroup member devstack-gmo-ca-ldom100 to the compute group, service = {{(pid=7227) join /opt/stack/nova/nova/servicegroup/drivers/db.py:47}}

a. If you do not see the initialization message, something is fundamentally wrong, such as a Python import error (missing libraries).

b. If you see a hang at "Getting initial LDoms state information" while in debug mode, there is likely a problem with VMMS agent communications, or with the VMMS agent itself. Check the /var/log/vmms/vmms.log file on the SPARC compute node and ensure that the vmms-agent service is running. You may also wish to ensure that port 8100 is listening by using 'netstat -an | grep 8100'.

c. If you do not reach the "Logical Domains Nova Driver (Remote) 3.0 has been initialised" stage, again, there may be VMMS configuration issues to investigate.

d. If you do not see the "Creating RPC server" and "DB_Driver: join new ServiceGroup member" messages, ensure that the driver has not stopped because of a configuration issue. If the problem is a configuration issue, you will see an exception starting with EXCEPTION or ERROR early in the startup log. There may also be issues with the message queue (for example, RabbitMQ). Some misconfiguration issues (such as server clocks out of sync, or mismatched MTUs on the management network) can cause problems that are difficult to diagnose.

4. If the driver does not detect configuration issues, ensure that the MTU value for the management network is consistent across devices, that NTP is configured, and that name resolution is working.

5. View DEBUG traces to precisely locate where the problem occurs and to see all the steps that led up to the problem.

6. When in debug mode (debug=true), DEBUG messages from the Nova driver, including run_method and method_return traces, are written to the Nova compute manager logs. Use the following commands to simplify the debugging process when in debug mode or when viewing a debug log file. Note that ${IP} is the IP address of the relevant SPARC compute node:

# tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-${IP}.log | egrep "(DEBUG.*run_method:|ERROR)"

Or, run a trace that includes method_return with the return value:

# tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-${IP}.log | egrep "(DEBUG.*(run_method:|method_return:)|ERROR)"

The following command performs a trace excluding periodic tasks:

# tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-${IP}.log | egrep "(DEBUG.*run_method:|ERROR)" | egrep -v "PERIODIC"

The following command performs a trace of all messages logged by the LDOMs Nova driver at all logging levels:


# tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-${IP}.log | egrep "nova.virt.ldoms"

The following command performs a trace of all WARNING and ERROR level messages logged by the LDoms Nova driver:

# tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-${IP}.log | egrep "(WARNING|ERROR) nova.virt.ldoms"
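To illustrate how these filters behave, the following sketch applies the DEBUG/ERROR filter, with periodic-task noise excluded, to a few hypothetical log lines instead of a live log file; the log lines are invented for illustration only.

```shell
# Hypothetical log lines for illustration only; a real log is filtered the same way
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2018-07-07 15:14:51.404 29186 DEBUG nova.virt.ldoms.driver run_method: spawn()
2018-07-07 15:14:51.500 29186 INFO nova.virt.ldoms.driver driver starting
2018-07-07 15:14:52.100 29186 ERROR nova.virt.ldoms.driver request failed
2018-07-07 15:14:53.000 29186 DEBUG nova.virt.ldoms.driver run_method: update_status() PERIODIC task
EOF
# Keep run_method DEBUG traces and all ERROR lines, drop periodic-task noise
MATCHES=$(egrep "(DEBUG.*run_method:|ERROR)" "$LOG" | egrep -v "PERIODIC")
echo "$MATCHES"
```

Only the run_method spawn() trace and the ERROR line survive the filter; the INFO line and the periodic-task trace are dropped.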

Validating Your Nova Compute Environment

Validate your Nova compute environment by performing the following checks:

. Ensure all compute nodes have NTP enabled. OpenStack depends on accurate time to function properly.

. Ensure that both forward and reverse name resolution function properly. Name resolution can be handled through a standard /etc/hosts file that contains information about the compute nodes in your OpenStack environment. At scale, use fully functioning DNS zones with accurate records and appropriate search paths on hosts.

. Ensure that your SPARC compute systems are running the Oracle Solaris 11.4 Operating System.

. Ensure that your SPARC compute systems are running at least the Oracle VM Server for SPARC 3.6 software.

. Ensure that no services on the cloud controller or on the SPARC compute nodes have failed. Use the svcs command to view the status of all services on your SPARC compute node:

# svcs -xv

On your Linux nodes, use the systemctl command to view the status of services:

# systemctl list-units --state=failed
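The forward and reverse name resolution checks above can be scripted. The sketch below uses getent, with localhost standing in as a placeholder; replace NODE with each controller and compute node name in your environment.

```shell
# NODE is a placeholder; check every controller and compute node name
NODE=localhost
# Forward lookup: name -> address
FWD=$(getent hosts "$NODE" | awk '{print $1; exit}')
echo "forward: $NODE -> ${FWD:-FAILED}"
# Reverse lookup: address -> name
REV=$(getent hosts "$FWD" | awk '{print $2; exit}')
echo "reverse: $FWD -> ${REV:-FAILED}"
```

Run the same checks from every node: a name that resolves on the controller but not on a compute node (or the reverse) is exactly the kind of asymmetry that causes hard-to-diagnose failures.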

Troubleshooting and Debugging the VMMS

The Virtual Machine Management Service (VMMS) agent uses the configuration file /etc/vmms/config.ini by default.

The configuration file to use can be specified via the SMF property config/config_path for the vmms-agent:default service. For example:

# svccfg -s vmms-agent:default setprop config/config_path = <path-to-config-file>

When debugging issues with the vmms-agent it is recommended to enable debug output to the log file.

The state of debug logging can be modified via the SMF property config/debug for the vmms-agent:default service:

# svccfg -s vmms-agent:default setprop config/debug = true

If you make any changes to the vmms-agent SMF properties you will need to refresh and restart the service:


# svcadm refresh vmms-agent

# svcadm restart vmms-agent

Any error messages not sent to the default log file /var/log/vmms/vmms.log will be emitted to stderr which is captured in the SMF log file for the vmms-agent service.

Use the following command to locate this log file:

# svcs -L vmms-agent:default

The VMMS client (vmms.client) does not use any configuration file; all configuration-related data is passed via API parameters. All exceptions raised in the client will be reported in the nova-compute log file.

Troubleshooting VM Deployment Issues

“Error: No valid host was found” message received.

You may see the "No valid host was found" error message when a VM deployment fails, even though a node should be able to satisfy the request based on its available resources and hypervisor type. The problem may be a partial deployment failure.

To determine the root cause, ensure that the compute node is in debug mode by setting debug=true and verbose=true in the relevant Nova compute nova.conf file. On O3L (Oracle OpenStack on Oracle Linux) this is /etc/kolla/nova-compute-ldoms-1/nova.conf.

If debug mode is disabled, add the following lines to the relevant nova.conf file:

[DEFAULT]

debug=true

verbose=true

Restart the nova-compute service:

# docker restart nova_compute_ldoms_1

For each compute node running nova-compute-ldoms, search for DEBUG: run_method: spawn() in the compute node's log to determine whether the compute node received the request.

# tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-compute-ldoms-1-${IP}.log | grep "DEBUG: run_method: spawn()"

If you see a line like the following, the request is reaching the compute node and you have identified the likely location of the problem:


2016-07-07 13:48:58.319 29186 DEBUG nova.virt.ldoms.driver [req-1440679a-771d-4e21-aca7-7b42f6a35648 d225a5a7434f4685a9f47326a2e5ff9f 3255d9556a354e8589b9a0a8475d7c0e - - -] DEBUG: run_method: spawn() spawn /usr/lib/python2.7/vendor-packages/nova/virt/ldoms/driver.py:954

If a spawn() occurs, the issue may be on the compute node itself; consult the VMMS agent service log file on the relevant SPARC compute node.

If you receive the "No valid host found" error but never see a spawn() line, the problem is probably on the cloud controller; consulting other Nova service log files, such as nova-scheduler and nova-conductor, may provide more information.

Instance Failed To Spawn: ConnectionError: ('Connection aborted', BadStatusLine("''",))

This issue can appear in the Nova compute logs when nova-compute is running in an environment where the https_proxy and/or http_proxy environment variables are set. Ensure that you do NOT set proxy environment variables in an environment where you are running the OpenStack Logical Domains remote Nova driver. If they are set, unset them in the shell, profile, or anywhere else that may affect the shell environment in which the driver operates.
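A minimal way to clear these variables in the current shell is shown below; remember to also check profile files (such as ~/.bashrc or /etc/profile.d), since they may set the variables again at login.

```shell
# Clear both lower- and upper-case proxy variables in the current shell
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
# Confirm that none remain set
[ -z "${http_proxy:-}${https_proxy:-}${HTTP_PROXY:-}${HTTPS_PROXY:-}" ] && \
    echo "no proxy variables set"
```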

Creating Basic Neutron Networks in Oracle OpenStack On Oracle Linux 5.0 (O3L)

The utility script /opt/openstack-ldoms/bin/create-o3l-network can be used to create basic flat or VLAN type Neutron networks and matching subnets. This script will fail if ML2 is not configured as expected. Ensure that the network_vlan_ranges and flat_networks properties are set in the configuration file /etc/kolla/neutron-server/ml2_conf.ini to something similar to the following:

[ml2_type_vlan]

network_vlan_ranges = physnet1:2:4000

[ml2_type_flat]

flat_networks = physnet1

If any changes are made to the configuration file /etc/kolla/neutron-server/ml2_conf.ini, the neutron-server service container must be restarted:

# docker restart neutron_server

Serial Console Instance Access Failing

Serial console access for instances is disabled by default. If you have enabled serial console access for compute instances and it is failing, this may be because the nova-serialproxy service is attempting to prevent cross-site scripting access, especially if you are using a fully qualified domain name in the URL for accessing the Horizon dashboard.

Two possible solutions:

1. Use the IP address instead of the FQDN (fully qualified domain name) in the URL when accessing the OpenStack Horizon Dashboard.


2. Configure the nova-serialproxy service to allow access from your FQDN. To do this, update /etc/kolla/nova-serialproxy/nova.conf and add your FQDN to the [console] allowed_origins property. For example, to use my.host.com when accessing the OpenStack Horizon Dashboard URL (http://my.host.com/auth/login), the file should contain:

[console]

allowed_origins = my.host.com

Note that if changes are made to the nova-serialproxy configuration file, you will need to restart the service container:

# docker restart nova_serialproxy

DEMONSTRATION ENVIRONMENT BUILD QUICK START

This section describes the prerequisites that must be met before you can install and configure the Oracle VM Server for SPARC OpenStack compute node within this quick start environment.

Controller Node Prerequisites

Hardware Requirement(s)

. Any x86/x64 system that supports Oracle Linux or a third-party Linux OS and OpenStack.


. Refer to the “Build and Configure the OpenStack Controller” section in this guide. Ensure that the host controller system meets the hardware requirements to host the OpenStack controller virtual machine.

Software Requirement(s)

. Oracle Linux 7 and Oracle OpenStack for Linux (O3L) version 5.0 (Queens release), or a third-party Linux OS and OpenStack. Please refer to the Oracle OpenStack for Linux documentation: http://www.oracle.com/technetwork/server-storage/openstack/linux/documentation/index.html

Note - The Oracle VM Server for SPARC OpenStack Compute Node 3.0 software provides an OpenStack Nova driver that adds OpenStack support for the Oracle VM Server for SPARC software. See OpenStack Compute (nova) (https://docs.openstack.org/nova/latest/).

The Nova driver is compatible only with the OpenStack Queens release. You must obtain the rest of the OpenStack environment that is required for this driver. You can obtain OpenStack Queens software for an Oracle Linux 7.x x86/x64 system, or another OpenStack Queens software stack from a third party. At least one OpenStack controller node is required for this software to function.

Oracle VM Server for SPARC OpenStack Compute Node(s) Prerequisites

Hardware Requirement(s)

. Platform – Oracle SPARC T4 series or newer, with a minimum of a single processor.

. RAM – 64GB minimum.

. Local storage – 300GB free minimum.

. SP/ILOM firmware version 4.0.3.1 or newer.

Software Requirement(s)

. Operating system – Oracle Solaris 11.4 minimum.

. Oracle VM Server for SPARC version 3.6 or higher.

Oracle VM Server for SPARC OpenStack Guest Domain(s) Prerequisites

Software Requirement(s)

. Oracle Solaris 10 domain – Oracle Solaris 10 1/13 with the appropriate patch set required to run on a minimum Oracle SPARC T4 series server.

. Oracle Solaris 11 domain – Oracle Solaris 11.0 minimum.

Obtaining Required Software

The Oracle VM Server for SPARC OpenStack Compute Node 3.0 software deliverable comprises the following files:

. ldoms-openstack-nova-remote-3.0.0.1.tar.gz

This file contains installation and setup utilities, configuration files, Python source for the Nova compute remote driver, and the Virtual Machine Management Service (VMMS) client. Download this file from: https://www.oracle.com/technetwork/server-storage/vm/downloads/index.html

. ol-ovm-sparc-openstack-nova-compute-ldoms-container-3.0.0.1.tar

This file is the Docker container image for the Oracle OpenStack on Oracle Linux Nova compute remote driver. Download this file from: https://www.oracle.com/technetwork/server-storage/vm/downloads/index.html


Note – This Docker container can also be obtained from the following Docker container repositories:

. Oracle Container Registry: https://container-registry.oracle.com

. Docker Hub: https://hub.docker.com/r/oracle

To deploy guest domains on the SPARC compute node, you need an image or media to install onto the guest domain. Download the SPARC Text Installer for Oracle Solaris 11.4 from:

https://www.oracle.com/technetwork/server-storage/solaris11/downloads/install-2245079.html

and place the sol-11_4-text-sparc.iso file under the /var/tmp directory on the OpenStack control node.

You can do the same for the Oracle Solaris 10 for SPARC ISO image. This can be downloaded from:

https://www.oracle.com/technetwork/server-storage/solaris10/downloads/index.html

Creating a Single-Node Demonstration Environment

This section describes how to build a single-node demonstration environment that uses the Oracle OpenStack for Linux 5.0 software. This environment allows you to test the OpenStack Nova driver for Oracle VM Server for SPARC by using the setup-O3L-openstack-ldoms utility on a freshly installed and updated Oracle Linux 7.5 (x86/x64) machine or virtual machine. This utility creates a single-node, all-in-one Oracle OpenStack on Oracle Linux 5.0 environment.

NOTE - This demonstration environment will not support Oracle VM Server for SPARC live migration because it does not configure shared storage.

The demonstration environment described in this document consists of a single controller node that manages two Oracle VM Server for SPARC Nova compute nodes. Any IP addresses listed in this document are for demonstration purposes. Use IP address assignments and networking details that match your data center networking.

Build and Configure the OpenStack Controller

The first step is to build the OpenStack single-node controller, which for this quick start guide is a Linux KVM virtual machine. This can also be done on a physical host.


To build the virtual machine, install the KVM environment on the Linux host and configure the appropriate virtual networks.

To install KVM on your Linux host, please refer to the following Oracle article: https://blogs.oracle.com/virtualization/configure-kvm-host-on-oracle-linux-with-uek-v2

Within your Oracle Linux R7 KVM environment, create a virtual network that is configured as an “isolated virtual network”. This network will be used by the KVM virtual machine in the next steps.

1. Create a new virtual machine with the following configuration details:

. Oracle Linux R7-U5 operating system

. 2 vCPUs minimum

. 10GB memory minimum

. 120GB local storage

. Network adapter 1:

  . Network source: device attached to the physical network

  . Source mode: bridge

  . Configure to on and connect automatically

  . No IPv4 address assigned; set IP address assignment via DHCP

. Network adapter 2:

  . Network source: device attached to the internal “isolated virtual network”

  . Configure to on and connect automatically

  . No IPv4 address assigned; set IP address assignment via DHCP

2. Once the virtual machine is installed and running, configure the interface for adapter 1 with a static IP address and hostname. In the following example, hostname is set as demo-allinone. Ensure that you assign the appropriate IP addresses, DNS, netmask, and gateway for your environment. Only a single DNS entry is created in this example, enter additional DNS entries for your environment if required. Reboot the virtual machine when completed.

# vi /etc/sysconfig/network-scripts/ifcfg-

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes


IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=

UUID=

DEVICE=

ONBOOT=yes

IPADDR=

NETMASK=

GATEWAY=

DNS1=

# hostnamectl set-hostname demo-allinone

# reboot

After the host has rebooted, ensure that the network settings have been applied as expected.

3. Download the latest public yum configuration file and update your system:

# curl -s http://public-yum.oracle.com/public-yum-ol7.repo -o /etc/yum.repos.d/public-yum-ol7.repo

# yum update -y

4. Install and enable Docker services:

# yum-config-manager --enable ol7_addons

# yum install -y docker-engine

# systemctl enable docker && systemctl start docker

5. Place the two files (ldoms-openstack-nova-remote-3.0.0.1.tar.gz and ol-ovm-sparc-openstack-nova-compute-ldoms-container-3.0.0.1.tar) under the /var/tmp/ directory on the demo-allinone virtual machine.


6. Extract ldoms-openstack-nova-remote-3.0.0.1.tar.gz onto the root partition:

# cd /; tar -xvf /var/tmp/ldoms-openstack-nova-remote-3.0.0.1.tar.gz

7. Load ol-ovm-sparc-openstack-nova-compute-ldoms-container-3.0.0.1.tar into your local Docker registry:

# docker load -i /var/tmp/ol-ovm-sparc-openstack-nova-compute-ldoms-container-3.0.0.1.tar

8. Complete the primary setup configuration file. For full details on configuring this file, please refer to the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 Administration Guide. This section covers the basic setup of the configuration file.

Copy over the provided example file to the file that will be worked on:

# cp /opt/openstack-ldoms/etc/setup-openstack-ldoms.conf.example /opt/openstack-ldoms/etc/setup-openstack-ldoms.conf

In the setup-openstack-ldoms.conf file, you will need to enter the correct values for the following parameters:

. NODE_FQDN – the fully qualified domain name of the OpenStack controller.

. NODE_NIC – the interface for network adapter 1. This can be found using the ifconfig command.

. NEUTRON_EXTERNAL_NIC – the interface for network adapter 2.

. VMMS_AGENT_IP – the IP address of the SPARC T-series server that will interface with the OpenStack controller.

. CONFIGURE_VMMS_AGENT – set to True.

. ADD_COMPUTE_CONFIG_FILE – the location of the add-compute-<>.conf file. We will configure this file next.

. DOCKER_NOVA_CONTAINER_PREFIX – set to openstack-ldoms/ol-ovm-sparc-openstack. This uses the Docker image that was downloaded and loaded earlier.

Optionally, the PROXY setting will need to be configured depending on your network.

# vi /opt/openstack-ldoms/etc/setup-openstack-ldoms.conf

NOTE – Please refer to the appendix for an example setup-openstack-ldoms.conf file.
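As a sketch, the parameter values from this step could look like the following. All values here are demonstration assumptions, and a temporary file stands in for /opt/openstack-ldoms/etc/setup-openstack-ldoms.conf so the example is self-contained.

```shell
# Temporary file for illustration; edit the real setup-openstack-ldoms.conf in place
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
NODE_FQDN=demo-allinone.example.com
NODE_NIC=eth0
NEUTRON_EXTERNAL_NIC=eth1
VMMS_AGENT_IP=10.145.13.127
CONFIGURE_VMMS_AGENT=True
ADD_COMPUTE_CONFIG_FILE=/opt/openstack-ldoms/etc/add-compute-cnode001.conf
DOCKER_NOVA_CONTAINER_PREFIX=openstack-ldoms/ol-ovm-sparc-openstack
EOF
grep -c "=" "$CONF"   # 7 parameters set
```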

9. Create an SSH_PASSWORD_FILE on the controller for the compute node setup. This assumes that the password for the SSH user is changeme. Use the appropriate password value here if the SSH user password is not ‘changeme’.


# echo changeme >> /root/cnode001-pw.txt
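Because this file holds a clear-text password, it is prudent to restrict its permissions to the owner. The sketch below uses a temporary path so the example is self-contained; in practice the file is /root/cnode001-pw.txt as created above.

```shell
# Temporary path for illustration; in practice this is /root/cnode001-pw.txt
PW_FILE=$(mktemp)
umask 077                  # create the file readable/writable by the owner only
echo changeme > "$PW_FILE"
chmod 600 "$PW_FILE"       # tighten permissions if the file already existed
ls -l "$PW_FILE"
```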

10. Configure the compute setup file on the controller. For full details on configuring this compute setup file, please refer to the Oracle VM Server for SPARC OpenStack Nova Driver and Utilities 3.0 Administration Guide.

Copy over the provided example file to the file that will be worked on:

# cp /opt/openstack-ldoms/etc/add-compute.conf /opt/openstack-ldoms/etc/add-compute-cnode001.conf

In the add-compute-cnode001.conf file, you will need to enter the correct values for the following parameters:

. VMMS_AGENT_IP – the IP address of the SPARC T-series server that will interface with the OpenStack controller.

. SSH_USERNAME – the operating system user of the SPARC T-series server.

. SSH_PASSWORD_FILE – the password file created in the previous step for the user specified in SSH_USERNAME.

This file is specific to the SPARC compute node being configured for this demonstration environment.

# vi /opt/openstack-ldoms/etc/add-compute-cnode001.conf

NOTE – Please refer to appendix for example add-compute-cnode001.conf file

11. At this point, it is advisable to shut down the OpenStack controller virtual machine and take a snapshot. If any problems with the configuration files occur during deployment, the snapshot lets you restore the controller to a known-good state, fix the configuration files, and re-run the deployment process. Once the snapshot is taken, power the OpenStack controller virtual machine back on.

Build and Configure the Oracle SPARC OpenStack Compute Node

On an Oracle SPARC T-series server (Oracle SPARC T4 or newer), Oracle Solaris 11.4 should be installed on the control/primary domain. Ensure the Oracle SPARC/Solaris environment meets the following criteria.

. Change root from role to user, and permit root user ssh privileges

# rolemod -K type=normal root

# vi /etc/ssh/sshd_config

PermitRootLogin yes
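The sshd_config change can also be made non-interactively. A hedged sketch, assuming the file contains an existing (possibly commented) PermitRootLogin line, as the stock Solaris file does; set_permit_root is a hypothetical helper:

```shell
# Sketch only: flip PermitRootLogin to yes in an sshd_config-style file.
set_permit_root() {
  f="$1"
  sed -e 's/^#* *PermitRootLogin .*/PermitRootLogin yes/' "$f" > "$f.new" \
    && mv "$f.new" "$f"
}

# On the compute node (then restart the ssh service):
# set_permit_root /etc/ssh/sshd_config
```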

# svcadm restart ssh

NOTE – For a production environment, allowing root to log in over ssh is not recommended. Use a dedicated user with adequate credentials.

. No pre-existing domain configuration of vds (virtual disk service), vcc (virtual console concentrator), or vsw (virtual switch).

. The best way to confirm this is to set the SP config to factory-default and power cycle the system. From the SPARC T-series server operating system:

# ldm set-config factory-default

# shutdown -i1 -g0 -y

. From the SPARC T-series ILOM/service processor, power cycle the system:

-> reset /SYS

Deploy the OpenStack Controller Services on the Virtual Machine

The Build and Configure OpenStack Controller section above has prepared the OpenStack controller demonstration environment for the deployment of the OpenStack controller services.

1. The next step is to run the setup-O3L-openstack-ldoms setup script. Execute the following:

# cd /opt/openstack-ldoms/etc/

# /opt/openstack-ldoms/bin/setup-O3L-openstack-ldoms setup-openstack-ldoms.conf

2. When prompted, agree to continue by typing ‘Y’ and pressing return. Please note that this process can take some time.

3. When everything has completed correctly, you should see the following message:

OpenStack configuration has completed successfully

4. Once completed successfully, reboot the SPARC T-series server. Run the following command from the SPARC compute node:

# init 6

Access and Configure the OpenStack Environment

Now that the OpenStack controller has been set up and the SPARC domain has been configured, access the OpenStack Horizon WebUI.

1. Collect the OpenStack admin password. Run the following command:

# grep keystone_admin /etc/kolla/passwords.yml

keystone_admin_password: j5RFThSw2s2cYxGZdjQhJdKar7wjRBST9Tqd985j

2. Open a web browser to the OpenStack controller's address (http:// followed by the controller's IP address or FQDN).

3. Enter the following credentials to log in to the OpenStack controller:

. User Name: admin

. Password: copy and paste the keystone_admin_password value from the grep command run above on the OpenStack controller

Then click "Connect" to log in.

4. Configure the Guest Network

Create a simple network by copying the create-network.conf.example file to create-network.conf. Then edit the create-network.conf file and fill in the required parameters to match your network requirements.

# cd /opt/openstack-ldoms/etc/

# cp create-network.conf.example create-network.conf

# vi create-network.conf

NOTE – Please refer to appendix for example create-network.conf file

Once the create-network.conf file has been configured correctly, use the create-O3L-network utility script to configure the simple network based on the configuration file.

# /opt/openstack-ldoms/bin/create-O3L-network create-network.conf

5. Import OS Images into Glance

From the OpenStack controller, we can import ISO or raw image files into Glance using the import-O3L-glance-image utility. Once an image is imported, it can be deployed onto the Oracle VM Server for SPARC Nova compute node as a guest domain. The format for this is:

# import-O3L-glance-image <image-file> <image-name> <image-type>

With the following parameters being passed:

. image-file: full path to the image file to be uploaded

. image-name: name to give the image in Glance; spaces are not allowed

. image-type: image type; valid values are 'RAW' or 'ISO'
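The argument rules above can be enforced with a small pre-flight check before invoking the utility. check_import_args is a hypothetical helper, not part of the O3L tooling:

```shell
# Sketch only: validate import-O3L-glance-image arguments before calling it.
check_import_args() {
  file="$1"; name="$2"; type="$3"
  [ -f "$file" ] || { echo "no such image file: $file"; return 1; }
  case "$name" in
    *" "*) echo "spaces not allowed in image name"; return 1 ;;
  esac
  case "$type" in
    RAW|ISO) ;;                       # the only valid image types
    *) echo "image type must be RAW or ISO"; return 1 ;;
  esac
}

# Example guarding the demonstration import:
# check_import_args /var/tmp/sol-11_4-text-sparc.iso S11-ISO ISO &&
#   /opt/openstack-ldoms/bin/import-O3L-glance-image \
#     /var/tmp/sol-11_4-text-sparc.iso S11-ISO ISO
```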

For our demonstration environment, run the following command from the OpenStack controller to import the Oracle Solaris 11.4 ISO file (sol-11_4-text-sparc.iso) that we downloaded earlier:

# /opt/openstack-ldoms/bin/import-O3L-glance-image /var/tmp/sol-11_4-text-sparc.iso S11-ISO ISO

For details on how to create and import a golden OS image for Glance, please refer to the Oracle VM Server for SPARC for Oracle OpenStack Administration Guide.

6. Deploy a Guest Domain

There are two methods to deploy a guest domain on the SPARC OpenStack compute node: using the Horizon WebUI, or using the command-line utility from the OpenStack control node.

Method 1: From the OpenStack Horizon WebUI:

From the WebUI, on the left side-bar, navigate to Project > Compute > Instances and click the "Launch Instance" button.

Using the "Launch Instance" wizard, fill in the required details, including which image to use for deployment and the flavor/size of your guest domain. When completed, click the "Launch Instance" button in the lower right portion of the window.

For details on how to deploy instances from the Oracle OpenStack WebUI, please refer to the standard Oracle OpenStack documentation found at:

https://www.oracle.com/technetwork/server-storage/openstack/linux/documentation/index.html

Method 2: From the command line utility run-O3L-openstack-cli

To deploy a guest logical domain via the controller CLI, use the run-O3L-openstack-cli utility with the following syntax:

# run-O3L-openstack-cli server create --flavor "<sizing_flavor>" --image "<image-name>" <guest_domain_name>

With the following parameters being passed:

. sizing_flavor: the flavor name for the size of the guest logical domain that will be created. A list of available flavors and sizing details can be found by running the following command:

# /opt/openstack-ldoms/bin/run-O3L-openstack-cli flavor list

. image-name: the image name for the ISO or RAW image that will be deployed onto the guest domain. A list of available images can be found by running the following command:

# /opt/openstack-ldoms/bin/run-O3L-openstack-cli image list

Currently, only a single image should be listed, as we have not added any default images and have uploaded only one ISO image in a previous step.

. guest_domain_name: a name for the guest domain that will be deployed. This is how the guest domain will appear and be referenced in OpenStack; it is not necessarily how the guest domain will be named in the output of the Oracle VM Server for SPARC ldm list command.

For our demonstration environment, run the following command from the OpenStack controller to deploy a guest domain named "GuestDomain001", using the "LDom small" flavor and the Oracle Solaris 11.4 ISO image (sol-11_4-text-sparc.iso) that we downloaded earlier:

# /opt/openstack-ldoms/bin/run-O3L-openstack-cli server create --flavor "LDom small" --image "S11-ISO" GuestDomain001

To view the status of the guest domain build, run the following command:

# /opt/openstack-ldoms/bin/run-O3L-openstack-cli server list

This lists the status of the instance deployment: still building, active, or error if a problem was encountered during the instance deployment process.
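Rather than re-running server list by hand, the status check can be looped until the instance settles. A sketch, with the listing command passed in so it can be any wrapper that prints name/status pairs; wait_for_instance and list_servers are hypothetical helpers, and the output flags in the example assume the standard openstackclient options:

```shell
# Sketch only: poll a "name status" listing until the named instance is
# ACTIVE or ERROR, or the retry budget runs out.
wait_for_instance() {
  cli="$1"; name="$2"; tries="${3:-30}"
  while [ "$tries" -gt 0 ]; do
    status=$($cli | awk -v n="$name" '$1 == n { print $2 }')
    case "$status" in
      ACTIVE) echo ACTIVE; return 0 ;;
      ERROR)  echo ERROR;  return 1 ;;
    esac
    tries=$((tries - 1))
    sleep 10
  done
  echo TIMEOUT
  return 1
}

# Example wrapper around the real CLI (flags are assumptions):
# list_servers() {
#   /opt/openstack-ldoms/bin/run-O3L-openstack-cli server list \
#     -f value -c Name -c Status
# }
# wait_for_instance list_servers GuestDomain001
```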

7. Completing the Guest Domain Build

Once the guest domain is deployed, the domain still needs to have the OS configured to complete the installation. To do this, log in to the Oracle SPARC server control domain as root and locate the instance that was just deployed using the following command:

# ldm list

NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME

primary active -n-cv- UART 8 16G 5.9% 5.9% 50m

instance-00000002 active -n---- 5000 8 8G 0.1% 0.1% 4m

Then telnet to the CONS port listed for the guest domain.

# telnet 0 5000

Press the Return key to initiate a response from the domain and start the Oracle Solaris configuration and installation. From this point, follow the standard Oracle Solaris 11 configuration and installation process. NOTE: It may take a couple of minutes for the guest domain to come online after the build process. Continue to press the Return key until you are prompted to select the language you wish to use for the installation.

Adding a Second SPARC Nova Compute Node

To add another Oracle VM for SPARC compute node to the existing demonstration environment, we will need to create a new add-compute.conf file for the second compute node, as well as modify some files and settings on the OpenStack controller.

CREATE CONFIGURATION FILE FOR SECOND SPARC COMPUTE NODE

As previously done for the first compute node, on the OpenStack controller create a password file for this new node, and copy over the provided example configuration file to a file that will be worked on:

# cp /opt/openstack-ldoms/etc/add-compute.conf /opt/openstack-ldoms/etc/add-compute-cnode002.conf

In the add-compute-cnode002.conf file, you will need to enter the correct values for the following parameters:

. VMMS_AGENT_IP: the IP address of the second SPARC T-series server that will interface with the OpenStack controller.

. SSH_USERNAME: the operating system user on the SPARC T-series server.

. SSH_PASSWORD_FILE: the password file created in the previous step for the user specified in SSH_USERNAME.

This file is specific to the SPARC compute node being configured for this demonstration environment.

# vi /opt/openstack-ldoms/etc/add-compute-cnode002.conf

NOTE – Refer to the appendix for the example add-compute-cnode001.conf file from the previous configuration.

Run the following command from the OpenStack controller against the config file we just built. This will prepare the SPARC T-series server to be added to the existing demonstration environment:

# cd /opt/openstack-ldoms/etc/

# /opt/openstack-ldoms/bin/add-compute-node --config-file add-compute-cnode002.conf

Once the SPARC T-series server is back online, on the OpenStack controller, verify that the second Nova compute node is reachable from the controller by running the following command:

# /opt/openstack-ldoms/bin/verify-O3L-compute-node

The command run above should return something similar to the following:

# /opt/openstack-ldoms/bin/verify-O3L-compute-node 192.168.70.51

Connecting with parameters :

Host : 192.168.70.51

Port : 8100

Signing Key : /etc/vmms/client/signing_key.pem

Use TLS : True

CA Bundle : /etc/vmms/client/rootCA.pem

Successfully connected to agent.

CREATE NOVA.CONF FILE FOR SECOND SPARC COMPUTE NODE

On the OpenStack controller, we need to create a new nova.conf file for the second Oracle VM for SPARC Nova compute node. Copy the existing nova-compute-ldoms-1.conf file for the second node and make the needed changes to the new file:

# cd /etc/kolla/config/nova

# cp nova-compute-ldoms-1.conf nova-compute-ldoms-2.conf

Edit the nova-compute-ldoms-2.conf file for the second compute node, and modify the following entries to reflect the second node’s IP addresses and node2 instead of node1:

. Under [DEFAULT]:

log_file: from log_file = nova-compute-ldoms-1-.log to log_file = nova-compute-ldoms-2-.log

host: from host = nova-compute-ldoms-1 to host = nova-compute-ldoms-2

. Under [ldoms]:

agent_host: from the first node's IP address to the second node's IP address

. Under [serial_console]:

proxyclient_address: from the first node's IP address to the second node's IP address
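The node-1 to node-2 renames above are mechanical, so the bulk of the copy can be scripted; only the IP-address entries then need hand editing. clone_nova_conf is a hypothetical helper:

```shell
# Sketch only: derive the second node's file from the first by renaming every
# nova-compute-ldoms-1 reference to nova-compute-ldoms-2. The agent_host and
# proxyclient_address IPs still need to be edited to the second node's values.
clone_nova_conf() {
  src="$1"; dst="$2"
  sed -e 's/nova-compute-ldoms-1/nova-compute-ldoms-2/g' "$src" > "$dst"
}

# Example usage on the controller:
# cd /etc/kolla/config/nova
# clone_nova_conf nova-compute-ldoms-1.conf nova-compute-ldoms-2.conf
```

The same substitution applies when copying the neutron-openvswitch-agent-ldoms-1.conf file for the second node.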

CREATE NEUTRON RELATED FILE FOR SECOND SPARC COMPUTE NODE

On the OpenStack controller, create a new neutron-openvswitch-agent-ldoms.conf file for the second Oracle VM for SPARC Nova compute node. Copy the existing neutron-openvswitch-agent-ldoms-1.conf file for the second node and make the needed changes to the new file:

# cd /etc/kolla/config/neutron

# cp neutron-openvswitch-agent-ldoms-1.conf neutron-openvswitch-agent-ldoms-2.conf

Edit the neutron-openvswitch-agent-ldoms-2.conf file for the second compute node, and modify the following entries to reflect the second node’s IP addresses and node2 instead of node1:

. Change all occurrences of nova-compute-ldoms-1 to nova-compute-ldoms-2

UPDATE KOLLACLI PROPERTY TO REFLECT SECOND SPARC COMPUTE NODE

By default, the kollacli property “num_nova_ldoms_per_node” is set to 1 for a single SPARC Nova compute node. Since a second node will be added, this kollacli property needs to be changed to 2. To make this change, enter the following command:

# kollacli property set num_nova_ldoms_per_node 2

If more than two SPARC Nova compute nodes are being added, set the property to the total number of SPARC Nova compute nodes.

After running the property update command, the demonstration environment needs to be reconfigured to recognize the new SPARC Nova compute node. This is done with kollacli, which allows kolla to deploy the second nova-compute-ldoms container. Run the following kollacli command to reconfigure your demonstration environment:

# kollacli reconfigure

Please note that this process can take some time, similar to when the initial deployment was performed.

Once this is executed, the second SPARC Nova compute node should be added to the OpenStack demonstration environment. This can be confirmed by running the following command:

# /opt/openstack-ldoms/bin/run-O3L-openstack-cli hypervisor list

The output should show three entries: the OpenStack controller "demo-allinone" with hypervisor type QEMU, and the two SPARC Nova compute nodes with hypervisor type "ldoms".

APPENDIX

Example Configuration Files

This section provides example configuration files for reference purposes.

SETUP-OPENSTACK-LDOMS.CONF

# What is this node's Fully Qualified Domain Name e.g. hostname.ie.oracle.com

NODE_FQDN=demo-allinone.oracle.com

# What is the IP for this node, e.g. hostname -i or ip addr show or ifconfig -a

NODE_IP=192.168.15.160

# Primary NIC for this node e.g eth0 or enp0s3(VirtualBox)

NODE_NIC=ens3

# Second NIC for Neutron e.g. eth1 or enp0s8(VirtualBox)

NEUTRON_EXTERNAL_NIC=ens4

# What is the default docker registry to use ?

# Default value is "docker.io", most scenarios this will not need changing.

# If O3L containers are on some other registry please specify here.

DOCKER_REGISTRY=

# What is the default docker namespace to use ?

# Default value is "oracle", most scenarios this will not need changing

# If O3L containers are stored under a different namespace please specify here.

DOCKER_NAMESPACE=

# What is the default docker container prefix ?

# Default is officially delivered packages on docker.io, prefix is

# ${DOCKER_NAMESPACE}/ol-openstack e.g. oracle/ol-openstack

# If O3L containers being used have a different prefix please specify here.

DOCKER_CONTAINER_PREFIX=

# What is the default docker container tag ?

# Default is the officially delivered packages tag of "5.0" for OpenStack Queens

# In most cases this will not need to be changed.

DOCKER_CONTAINER_TAG=

# What is the docker nova-compute-ldoms container prefix ?

# Default if left blank presumes you are retrieving from docker.io as follows :

# ${DOCKER_NAMESPACE}/ol-ovm-sparc-openstack

# If the nova-compute-ldoms container being used was downloaded from OTN then this

# should be set to : openstack-ldoms/ol-ovm-sparc-openstack

# If you have built nova container images using the delivered kolla build

# configuration files set value to openstack-ldoms/oraclelinux-source

DOCKER_NOVA_CONTAINER_PREFIX=openstack-ldoms/ol-ovm-sparc-openstack

# What is the docker nova-compute-ldoms container tag ?

# Default if left blank is latest release of "3.0.0.1"

# In most cases this will not require setting unless you have either built

# a custom nova-compute-ldoms container with a different tag.

DOCKER_NOVA_CONTAINER_TAG=

# Is http_proxy required to get to outside world ?

# Default is no proxy, uncomment to set.

PROXY=http://www-proxy.us.oracle.com:80

# What domains do not need a proxy ?

# Default is nothing, uncomment to set.

# NO_PROXY=localhost,127.0.0.1

# Do you want to enable debug output to OpenStack service log files ?

# Default is False

DEBUG_LOG=False

# Remove docker images at start of run ? True or False

# This will include default docker.io images that were downloaded either

# by a previous run of setup-openstack-ldoms.sh or by some other means.

# It will also attempt to remove images identified by configuration

# variables LDOMS_NOVA_CONTAINER_PREFIX and LDOMS_NOVA_CONTAINER_TAG

# e.g. Default Docker Images :

# REPOSITORY = "oracle/ol-openstack-*

# TAG = "5.0.1"

# Custom configuration identified images :

# REPOSITORY = "${LDOMS_NOVA_CONTAINER_PREFIX}*"

# TAG = "${LDOMS_NOVA_CONTAINER_TAG}"

#

# Default is to not remove images

DOCKER_IMAGES_REMOVE=False

# Do you want to overwrite custom nova.conf file ?

# When configuring OL3 a custom nova.conf file is generated for kolla to

# overwrite certain values in nova-compute-ldoms-1 nova.conf.

# If this file already exists, user may not want to overwrite it.

# Specify True here to overwrite it. Default is to not overwrite.

OVERWRITE_CUSTOM_NOVA_CONF=False

# What is Virtual Machine Management Service(VMMS) Agent IP ?

VMMS_AGENT_IP=192.168.70.88

# Do you want the VMMS Agent to be automatically configured on your remote

# compute node ?

# If yes, set CONFIGURE_VMMS_AGENT=True and provide a

# pointer to the add-compute-node.conf configuration file to use.

# Default is False

CONFIGURE_VMMS_AGENT=True

# Configuration file to be used for adding a new compute node for this setup.

# This file is used if CONFIGURE_VMMS_AGENT=True or

# if GENERATE_RSA_KEY_FILES=True and keys were generated

# In GENERATE_RSA_KEY_FILES scenario only used for SSH credentials to push

# verification key to host.

ADD_COMPUTE_CONFIG_FILE=/opt/openstack-ldoms/etc/add-compute-cnode001.conf

# Do you want to ensure default RSA security is configured ?

# RSA security is configured by default when adding a compute node

# (add-compute-node).

# This option is provided to ensure default RSA is configured on an already

# configured compute node.

# Default is False, if set to True will perform the following :

# - Check if default signing/verification keys exist, if one or both are missing

# remove stale pem files and re-generate new default ones.

# - Regardless of generation, distribute verification key to default location

# on compute node.

# - If signing_key_file option set in client config.ini, ensure it points to

# default RSA signing file

# - If verification_key_file option set in server config.ini, ensure it points

# to default RSA verification file

# - Restart vmms-agent SMF on compute node

# NOTE: If True, communication to Compute node is required via SSH. SSH

# Credentials should be provided via a add-compute.conf file

# (ADD_COMPUTE_CONFIG_FILE). ONLY SSH credentials and HOST IP are

# consumed from this file.

# If CONFIGURE_VMMS_AGENT=True, this option is ignored, as default RSA will be

# configured via that option.

VERIFY_DEFAULT_RSA=False

# Do you want to ensure default self-signed TLS is configured ?

# TLS is enabled by default when adding a compute node (add-compute-node),

# unless user specifically chose to disable via add-compute.conf option

# DISABLE_TLS or add-compute-node cli --disable-default-tls.

# This option is provided to ensure default TLS is enabled on an already

# configured compute node.

# Default is False, if set to True will perform the following :

# - Check if default self-signed TLS Certificate/Key exist on compute node.

# If one or both are missing remove stale key/cert and re-generate new set

# of self-signed key/cert files.

# - Ensure use_tls option is set to "True" in client config.ini

# - Ensure tls_key_file and tls_cert_file options in server config.ini are set

# to default values.

# - Restart vmms-agent SMF on compute node

# NOTE: If True, communication to Compute node is required via SSH. SSH

# Credentials should be provided via a add-compute.conf file

# (ADD_COMPUTE_CONFIG_FILE). ONLY SSH credentials and HOST IP are

# consumed from this file.

# If CONFIGURE_VMMS_AGENT=True, this option is ignored, as default TLS will be

# configured via that option.

VERIFY_DEFAULT_TLS=False

# Do you want to disable TLS ?

# As noted TLS is enabled by default, however if you want to specifically

# disable TLS for this setup, set this option to True.

# You cannot specify VERIFY_DEFAULT_TLS=True and DISABLE_TLS=True at the

# same time.

# Default is False. If set to disable TLS, the following is performed

# - Ensure use_tls=False in client config.ini

# - Ensure tls_key_file and tls_cert_file options are removed from server

# config.ini file

# - Restart vmms-agent SMF on compute node

# NOTE: If True, communication to Compute node is required via SSH. SSH

# Credentials should be provided via a add-compute.conf file

# (ADD_COMPUTE_CONFIG_FILE). ONLY SSH credentials and HOST IP are

# consumed from this file.

# If CONFIGURE_VMMS_AGENT=True, this option is ignored, as disabling of

# TLS will be handed via add-compute-node's DISABLE_TLS configuration file

# option.

DISABLE_TLS=False

# Do you want to enable serial console ?

# Serial console access is not secure, so by default it is disabled.

# Set to True to enable serial console access whilst being aware of lack of

# security

ENABLE_SERIAL_CONSOLE=False

ADD-COMPUTE-CNODE001.CONF

# Options used by add-compute-node for configuring vmms-agent on compute node

# IP/FQDN of compute hostname/IP

VMMS_AGENT_IP=192.168.70.88

# Username for SSH access to compute node

SSH_USERNAME=root

# File containing Password for SSH access to compute node

SSH_PASSWORD_FILE=/root/cnode001-pw.txt

# Port for SSH access to compute node

SSH_PORT=22

# SSH Key file to use for authorization

# Only supply if not providing SSH_PASSWORD_FILE

SSH_KEY_FILE=

# Timeout for SSH actions, default 10 minutes(600 Seconds)

SSH_TIMEOUT=600

# GID and UID for vmms user on VMMS_AGENT_HOST (Compute node), defaulting to 120 for both

VMMS_GID=120

VMMS_UID=120

# Default TLS Security settings is enabled by default.

# To specifically disable set DISABLE_TLS to True

DISABLE_TLS=False

# Create a new nova compute instance for this compute node

# Default is to not create a nova instance

CREATE_NOVA_INSTANCE=False

# The following three entries are only consumed if compute host control domain

# is in factory-default mode

# What NIC to use for primary VSW on compute node

LDOMS_VSW_NET=net0

# Cores allocated to the control domain:

CDOM_CORES=1

# RAM (in Gigabytes) allocated to the control domain:

CDOM_RAM=16

CREATE-NETWORK.CONF

# Specify the name of your network

NETWORK_NAME=demo-network

# Choose the network type to create. Two valid types "vlan" or "flat"

NETWORK_TYPE=flat

# Specify the name of your subnet

SUBNET_NAME=demo-subnet

# Specify the subnet network address in cidr format e.g. 10.169.122.128/25

SUBNET_CIDR=192.168.15.0/24

# Specify the gateway IP address e.g. 10.169.122.129

SUBNET_GATEWAY_IP=192.168.15.1

# Specify starting IP address of your allocation pool range e.g. 10.169.122.180

SUBNET_START_IP=192.168.70.15

# Specify ending IP address of your allocation pool range e.g. 10.169.122.190

SUBNET_END_IP=192.168.70.25

# Specify the IP Address of the primary DNS Server e.g. 10.169.123.17

SUBNET_DNS_IP=192.168.98.197

ORACLE CORPORATION

Worldwide Headquarters 500 Oracle Parkway, Redwood Shores, CA 94065 USA

Worldwide Inquiries TELE + 1.650.506.7000 + 1.800.ORACLE1 FAX + 1.650.506.7200 oracle.com

CONNECT WITH US Call +1.800.ORACLE1 or visit oracle.com. Outside North America, find your local office at oracle.com/contact.

blogs.oracle.com/oracle facebook.com/oracle twitter.com/oracle

Copyright © 2018, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 1118 White Paper Title Using Oracle OpenStack with Oracle VM Server for SPARC November 2018 Authors: Jeffrey Kiely, Matt Keenan