Deploying OpenStack with a Multi-Hypervisor Environment

With OpenStack gaining a lot of momentum, each of its releases addresses some previously unanswered questions. One of the many questions that enterprises and service providers ask is: “Can OpenStack work with multiple hypervisors?”

The answer is YES. It does!

Before jumping right into how to set up a multi-hypervisor environment using OpenStack, one should know what a “multi-hypervisor environment” is.

What is a multi-hypervisor environment?

For many organizations, a single hypervisor simply isn’t the right answer for all of their needs, and those businesses may choose to adopt a second hypervisor product as a complement to the first. When the subject of adding a second hypervisor comes up, it is often because virtualization licensing costs are getting out of hand or because IT wants to avoid vendor lock-in. Or it could be that the existing hypervisor isn’t offering all the features the business needs. As server virtualization has matured, multi-hypervisor environments have become more common. But adding a second virtualization platform to the OpenStack cloud environment requires solid justification and carries challenges that must be considered carefully before incorporating yet another layer of complexity into the environment.

Why do we need a multi-hypervisor environment?

Most OpenStack distributions are centered around the KVM hypervisor. Enterprises, on the other hand, have massive investments in a variety of other virtualization platforms, such as VMware ESXi. Most large enterprises are unwilling to lose their existing capability with their chosen hypervisor. This provides a solid use case for the OpenStack community to ensure OpenStack supports multi-hypervisor environments.

Which hypervisors does OpenStack support?

OpenStack Compute supports many hypervisors, which might make it difficult for you to choose one. Most installations use only one hypervisor.

The following hypervisors are supported:

• KVM – Kernel-based Virtual Machine. The virtual disk formats that it supports are inherited from QEMU, since it uses a modified QEMU program to launch virtual machines. The supported formats include raw images, qcow2, and VMware formats.
• LXC – Linux Containers (through libvirt), used to run Linux-based virtual machines.
• QEMU – Quick EMUlator, generally only used for development purposes.
• UML – User Mode Linux, generally only used for development purposes.
• VMware vSphere 4.1 update 1 and newer – runs VMware-based Linux and Windows images through a connection with a vCenter server or directly with an ESXi host.
• Xen – XenServer, Xen Cloud Platform (XCP), used to run Linux or Windows virtual machines. You must install the nova-compute service in a para-virtualized VM.
• Hyper-V – Server virtualization with Microsoft’s Hyper-V, used to run Windows, Linux, and FreeBSD virtual machines. Runs nova-compute natively on the Windows virtualization platform.
• Ironic – Not a hypervisor in the traditional sense, this driver provisions physical hardware through pluggable sub-drivers (for example, PXE for image deployment, and IPMI for power management).

For a detailed list of features and support across different hypervisors, see http://wiki.openstack.org/HypervisorSupportMatrix.

How do you set up a multi-hypervisor environment in OpenStack?

Here, we are going to discuss how to add the VMware ESXi hypervisor to a default OpenStack environment running the KVM hypervisor. Before getting into the multi-hypervisor integration process, we assume that you have a working OpenStack (Havana or later) environment with the default KVM hypervisor, and a vCenter 5.5 environment with any number of ESXi hosts added to it. OpenStack Compute supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability (HA), and Distributed Resource Scheduler (DRS). The following diagram shows a high-level view of the OpenStack multi-hypervisor (KVM & VMware) architecture:

As the figure shows, VMware vCenter cannot be added directly to the OpenStack environment. We need a separate compute node dedicated to each VMware vCenter environment. You may have any number of ESXi hosts added to a single vCenter. If you want to add two different vCenter environments, you need two compute nodes with the VMware vCenter driver, because a VMware vCenter driver on a compute node can manage only one vCenter environment at a time.

“The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features. The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK images from the Image Service back end store.”

You should note the following:

• Unlike Linux-kernel-based hypervisors such as KVM, vSphere with vCenter on OpenStack requires a separate vCenter Server host, and the VM instances hosted in an ESXi cluster run on ESXi hosts distinct from the Nova compute node. In contrast, VM instances running on KVM are hosted directly on a Nova compute node.
• Although a single OpenStack installation can support multiple hypervisors, each compute node supports only one hypervisor. Hence, a multi-hypervisor OpenStack cloud requires at least one compute node for each hypervisor type (see the sketch after this list).
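To illustrate the second point, here is a minimal sketch of how the two kinds of compute nodes are distinguished in their own nova.conf files. The compute_driver option is a standard Nova setting; the host names are hypothetical.

# nova.conf on a KVM compute node (hypothetical host "novakvm1")
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

# nova.conf on the VMware compute node (hypothetical host "novavmware1")
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver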

Configuration overview

Prerequisites and limitations

Use the following list to prepare a vSphere environment that runs with the VMware vCenter driver:

1. DRS: For any cluster that contains multiple ESX hosts, enable DRS and enable fully automated placement.
2. Shared storage: Only shared storage is supported, and data stores must be shared among all hosts in a cluster. It is recommended to remove data stores not intended for OpenStack from clusters being configured for OpenStack.
3. Clusters and data stores: Do not use OpenStack clusters and data stores for other purposes. If you do, OpenStack displays incorrect usage information.
4. Networking: The VMware driver supports networking with the nova-network service or the OpenStack Networking (Neutron) service, depending on your installation.

Let’s take nova-network. Follow the configuration settings below for the nova-network service with the FlatManager or FlatDHCPManager: Create a port group with the same name as the flat_network_bridge value in the nova.conf file. The default value is br100. If you specify another value, the new value must be a valid Linux bridge identifier that adheres to Linux bridge naming conventions. All VM NICs are attached to this port group. Ensure that the flat interface of the node that runs the nova-network service has a path to this network (a hedged nova.conf sketch follows the prerequisites list below). When configuring the port binding for this port group in vCenter, specify ephemeral for the port binding type. For more information, see Choosing a port binding type in ESX/ESXi in the VMware Knowledge Base.

5. Security groups: If you use the VMware driver with OpenStack Networking and the NSX plug-in, security groups are supported. If you use nova-network, security groups are not supported. So in our case, there is no security group support.
6. VNC: The port range 5900 – 6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control. In addition to the default VNC port numbers (5900 to 6000), the following ports are also used: 6101, 6102, and 6105.
7. SSH keys: Injection of SSH keys into compute instances hosted by vCenter is not currently supported.
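As referenced in item 4 above, a minimal nova-network flat-DHCP sketch for the VMware compute node’s nova.conf might look like the following. The option names are standard nova-network settings; the interface names (eth0, eth1) are assumptions about the host and must be adapted to your environment.

[DEFAULT]
network_manager = nova.network.manager.FlatDHCPManager
flat_network_bridge = br100     # must match the port group name created in vCenter
flat_interface = eth1           # assumed data interface on the compute node
public_interface = eth0         # assumed public interface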

After completing the above prerequisites, follow the steps below:

1. Install the Nova compute service on a separate compute node (say, novavmware1) using the command “apt-get install nova-compute nova-compute-vmware python-suds”. Note: python-suds is required for the VMware API (vmwareapi) driver.

2. Edit nova.conf on the Nova compute server (novavmware1) as specified in the VMwareVCDriver configuration options below, based on your environment.

VMware vCenter driver

Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute with vCenter. This recommended configuration enables access through vCenter to advanced vSphere features like vMotion, High Availability, and Distributed Resource Scheduler (DRS).

VMwareVCDriver configuration options

When you use the VMwareVCDriver (vCenter versions 5.1 and later) with OpenStack Compute, add the following VMware-specific configuration options to the nova.conf file:

[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = <vCenter host IP>
host_username = <vCenter username>
host_password = <vCenter password>
cluster_name = <vCenter cluster name>
datastore_regex = <optional datastore regex>
wsdl_location = file:///vmware-sdk/SDK/vsphere-ws/wsdl/vim25/vimService.wsdl  # (optional)
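For concreteness, a filled-in [vmware] section might look like the sketch below. Every value shown (IP address, credentials, cluster name, datastore regex) is purely illustrative and must be replaced with the details of your own vCenter environment.

[vmware]
host_ip = 10.0.0.50                          # illustrative vCenter server IP
host_username = administrator@vsphere.local  # illustrative account
host_password = Secr3t!                      # illustrative password
cluster_name = OpenStackCluster1             # illustrative vCenter cluster name
datastore_regex = openstack-.*               # match only datastores set aside for OpenStack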

3. (Optional) Mirror the WSDL from vCenter: It is not necessary to add a WSDL location path when using vCenter 5.1 or later, but in some cases it will not work without one. In that case you may need to add the WSDL entry to the VMwareVCDriver configuration in the nova.conf file.

Download the “VMware vSphere Web Services SDK” from VMware and unzip it into /vmware-sdk on novavmware1, then reference it in the nova.conf file as wsdl_location=file:///vmware-sdk/SDK/vsphere-ws/wsdl/vim25/vimService.wsdl
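A minimal shell sketch of that step, assuming the SDK archive has already been downloaded to the compute node (the archive file name shown is an assumption and varies by SDK version):

# unpack the vSphere Web Services SDK into /vmware-sdk (archive name is an example)
mkdir -p /vmware-sdk
unzip VMware-vSphere-SDK-5.5.0.zip -d /vmware-sdk
# the WSDL referenced from nova.conf should now exist at this path
ls /vmware-sdk/SDK/vsphere-ws/wsdl/vim25/vimService.wsdl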

4. Don’t forget to add all the other common entries, such as the database connection, AMQP credentials, VNC details, authtoken details, etc., to the nova.conf file (an illustrative sketch follows below).
5. Install nova-network on the Nova compute server (novavmware1) using the command “apt-get install nova-network nova-api-metadata”.
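As referenced in step 4, the common entries in nova.conf on novavmware1 might look roughly like the following. Every value shown (controller address, passwords, IPs) is illustrative only and must match what the rest of your OpenStack deployment already uses.

[DEFAULT]
auth_strategy = keystone
rabbit_host = 192.168.1.10            # illustrative controller/AMQP host
rabbit_password = RABBIT_PASS         # illustrative
my_ip = 192.168.1.21                  # illustrative address of novavmware1
vnc_enabled = true
novncproxy_base_url = http://192.168.1.10:6080/vnc_auto.html   # illustrative

[database]
connection = mysql://nova:NOVA_DBPASS@192.168.1.10/nova        # illustrative

[keystone_authtoken]
auth_uri = http://192.168.1.10:5000
auth_host = 192.168.1.10
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS            # illustrative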

6. Remember, there is no modification to the nova.conf file on the OpenStack controller. All these changes should be made only in the nova.conf file on the compute node (novavmware1).

7. Make sure that your nova-compute.conf file has the entries below in it:

compute_driver=vmwareapi.VMwareVCDriver
libvirt_type=vmwareapi

Now your OpenStack multi-hypervisor environment is ready. You can verify it as sketched below.
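A quick way to confirm that both hypervisors are registered is to restart the services on novavmware1 and then query Nova from the controller. The commands below use the standard nova CLI; the exact output depends on your deployment.

# on novavmware1: pick up the nova.conf changes
service nova-compute restart
service nova-network restart

# on the controller (with admin credentials sourced):
nova service-list       # nova-compute should be "up" on both the KVM node and novavmware1
nova hypervisor-list    # should list the KVM host(s) plus one entry per vCenter cluster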

Images with VMware vSphere

Before launching a VM in vCenter, we need to upload an appropriate image to Glance that can work with VMware (i.e., a VMDK image). This section describes how to configure VMware-based virtual machine images for launch. vSphere versions 4.1 and newer are supported. Follow the steps below to create an image for the VMware environment.

1. You must have a cloud image that works on the KVM hypervisor. Let’s convert the qcow2 Ubuntu cloud image into VMDK format so it can be used on a VMware ESXi host:

$ qemu-img convert -f qcow2 -O vmdk /Downloads/ubuntu.img ubuntu.vmdk
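Optionally, you can sanity-check the result with qemu-img info before moving to the next step; the file name matches the conversion above.

$ qemu-img info ubuntu.vmdk
# expect "file format: vmdk"; qemu-img creates the disk as a monolithic sparse VMDK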

2. VMDK disks converted through qemu-img are always monolithic sparse VMDK disks. Upload the converted image to Glance:

$ glance image-create --name ubuntu --is-public=True --container-format=bare --disk-format=vmdk --property vmware_disktype="sparse" --property vmware_adaptertype="lsiLogic" --property hypervisor_type="vmware" < ubuntu.vmdk

Note: In a multi-hypervisor environment, OpenStack Compute uses the hypervisor_type property to match images to the correct hypervisor type. For VMware images, set the hypervisor type to vmware. Other valid hypervisor types include: xen, qemu, kvm, lxc, uml, and hyperv.
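For the KVM side of the same cloud, a qcow2 image can carry the matching tag so the scheduler places instances booted from it on a KVM node. The image name and source file below are illustrative:

$ glance image-create --name ubuntu-kvm --is-public=True --container-format=bare --disk-format=qcow2 --property hypervisor_type="kvm" < ubuntu.img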

Now, your multi-hypervisor OpenStack environment is all set.

Log on to the OpenStack Horizon dashboard, choose the right Glance image for the hypervisor on which you want to run the VM, and launch the instance.
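Alternatively, the same can be done from the nova CLI; the flavor and instance names below are illustrative:

# boot onto the VMware side (the scheduler matches the image's hypervisor_type=vmware)
$ nova boot --image ubuntu --flavor m1.small vm-on-vmware

# boot onto the KVM side using the qcow2 image tagged hypervisor_type=kvm
$ nova boot --image ubuntu-kvm --flavor m1.small vm-on-kvm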