Cloud Documentation: Deploying Production Grade OpenStack with MAAS, Juju and Landscape

This documentation has been created to describe best practice in deploying a Production Grade installation of OpenStack using current technologies, including bare metal provisioning using MAAS, service orchestration with Juju and system management with Landscape.

This documentation is divided into four main topics:

1. Installing the MAAS Metal As A Service software
2. Installing Juju and configuring it to work with MAAS
3. Using Juju to deploy OpenStack
4. Deploying Landscape to manage your OpenStack cloud

Once you have an up and running OpenStack deployment, you should also read our Administration Guide, which details common tasks for the maintenance and scaling of your service.

Legal notices

This documentation is copyright of Canonical Limited. You are welcome to display on your computer, download and print this documentation or to use the hard copy provided to you for personal, education and non-commercial use only. You must retain copyright, trademark and other notices unaltered on any copies or printouts you make. Any trademarks, logos and service marks displayed in this document are property of their owners, whether Canonical or third parties. This documentation is provided on an “as is” basis, without warranty of any kind, either express or implied. Your use of this documentation is at your own risk. Canonical disclaims all warranties and liability that may result directly or indirectly from the use of this documentation.

© 2014 Canonical Ltd. Ubuntu and Canonical are registered trademarks of Canonical Ltd.

Installing the MAAS software

Scope of this documentation

This document provides instructions on how to install the Metal As A Service (MAAS) software. It has been prepared alongside guides for installing Juju, OpenStack and Landscape as part of a production grade cloud environment. MAAS itself may be used in different ways and you can find documentation for this on the main MAAS website [MAAS docs]. For the purposes of this documentation, the following assumptions have been made:

You have sufficient, appropriate node hardware
You will be using Juju to assign workloads to MAAS
You will be configuring the cluster network to be controlled entirely by MAAS (i.e. DNS and DHCP)
If you have a compatible power-management system, any additional hardware required is also installed (e.g. an IPMI network).

Introducing MAAS

Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource.

What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the hardware’s okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will; just as you can with virtual machines in the cloud.

When you’re ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It’s as simple as that: no need to manually provision, check and, afterwards, clean up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you’re done, it’s just as easy to give the node back to Nova. MAAS is ideal where you want the flexibility of the cloud and the hassle-free power of Juju charms, but you need to deploy to bare metal.

Installing MAAS from the Cloud Archive

The Ubuntu Cloud Archive is a repository made especially to provide users with the most up-to-date, stable versions of MAAS, Juju and other tools. It is highly recommended to keep your software up to date:

sudo apt-get update
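
If the Cloud Archive is not already enabled on your system, it can typically be added with add-apt-repository before updating. The cloud-archive:tools pocket shown below is the one that historically carried MAAS and Juju; check the correct pocket name for your release:

sudo add-apt-repository cloud-archive:tools
sudo apt-get update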

There are several packages that comprise a MAAS install. These are:

maas-region-controller: the 'control' part of the software, including the web-based user interface, the API server and the main database.
maas-cluster-controller: the software required to manage a cluster of nodes, including managing DHCP and boot images.
maas-dns: a customised DNS service that MAAS can use locally to manage DNS for all the connected nodes.
maas-dhcp: as for DNS, a DHCP service to enable MAAS to correctly enlist nodes and assign IP addresses. The DHCP setup is critical for the correct PXE booting of nodes.

As a convenience, there is also a maas metapackage, which will install all of these components.

If you need to separate these services or want to deploy an additional cluster controller, you should install the corresponding packages individually.
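
For example, a minimal sketch of such a split installation (using the package names listed above) might put the region controller on one machine:

sudo apt-get install maas-region-controller maas-dns

...and a cluster controller on another:

sudo apt-get install maas-cluster-controller maas-dhcp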

Installing the packages

Running the command:

sudo apt-get install maas

...will initiate installation of all the components of MAAS.

The maas-dhcp and maas-dns packages should be installed by default.

Once the installation is complete, the web-based interface for MAAS will start. In many cases, your MAAS controller will have several NICs. By default, all the services will start on the first discovered NIC (usually eth0).

Before you log in to the server for the first time, you should create a superuser account.

Create a superuser account

Once MAAS is installed, you'll need to create an administrator account:

sudo maas-region-admin createsuperuser

Running this command will prompt for a username, an email address and a password for the admin user. You may use any username for your administrator account, but "root" is a common convention and easy to remember.
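
An illustrative session is shown below; the exact prompts vary slightly between versions, and the username, email address and password are placeholders:

sudo maas-region-admin createsuperuser
Username: root
Email address: admin@example.com
Password:
Password (again):
Superuser created successfully.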

You can run this command again for any further administrator accounts you may wish to create, but you need at least one.

Import the boot images

MAAS will check for and download new Ubuntu images once a week. However, you'll need to download them manually the first time. To do this you should connect to the MAAS web interface using a web browser. Use the URL:

http://172.18.100.1/MAAS/

You should substitute in the IP address of the server where you have installed the MAAS software. If there are several possible networks, by default it will be on whichever one is assigned to the eth0 device.

You should see a login screen. Enter the username and password you specified for the admin account. When you have successfully logged in, you should see the main MAAS page.

Either click on the link displayed in the warning at the top, or on the 'Clusters' tab in the menu, to get to the cluster configuration screen. The initial cluster is automatically added to MAAS when you install it, but it does not yet have any associated images for booting nodes. Click on the button to begin the download of suitable boot images.

Importing the boot images can take some time, depending on the available network connection. This page does not refresh dynamically, so you can refresh it manually to determine when the boot images have been imported.

Log in to the server

To check that everything is working properly, you should try to log in to the server now. Both of the error messages should have gone (it can take a few minutes for the boot image files to register) and you should see that there are currently 0 nodes attached to this controller.

Configure switches on the network

Some switches use the Spanning Tree Protocol (STP) to negotiate a loop-free path through a root bridge. While scanning, a switch can make each port wait up to 50 seconds before data is allowed to be sent on the port. This delay can in turn cause problems with some applications/protocols such as PXE, DHCP and DNS, of which MAAS makes extensive use. To alleviate this problem, you should enable Portfast on Cisco switches, or its equivalent on other vendors' equipment, which enables the ports to come up almost immediately.

Add an additional cluster

Whilst it is certainly possible to run MAAS with just one cluster controller for all the nodes, in the interests of easier maintenance, upgrades and stability, it is desirable to have at least two operational clusters.

Each cluster needs a controller node. Install Ubuntu on this node and then follow a similar procedure to install the cluster controller software:

sudo apt-get update
sudo apt-get install maas-cluster-controller
sudo apt-get install maas-dhcp maas-dns

Once the cluster software is installed, it is useful to run:

sudo dpkg-reconfigure maas-cluster-controller

This will enable you to make sure the cluster controller agent is pointed at the correct address for the MAAS master controller.
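
As a quick sanity check, you can inspect the cluster configuration file afterwards; the path and variable name below come from the MAAS 1.x packaging and may differ in other versions:

cat /etc/maas/maas_cluster.conf

Look for a MAAS_URL line pointing at the master region controller, e.g. MAAS_URL=http://172.18.100.1/MAAS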

Configure additional Cluster Controller(s)

Cluster acceptance

When you install your first cluster controller on the same system as the region controller, it will be automatically accepted by default (but not yet configured, see below). Any other cluster controllers you set up will show up in the user interface as “pending,” until you manually accept them into MAAS.

To accept a cluster controller, click on the "Clusters" tab at the top of the MAAS web interface. The text at the top of the page should indicate a pending cluster; click on that text to get to the cluster acceptance screen.

Here you can change the cluster’s name as it appears in the UI, its DNS zone, and its status. Accepting the cluster changes its status from “pending” to “accepted.”

Now that the cluster controller is accepted, you can configure one or more of its network interfaces to be managed by MAAS. This will enable the cluster controller to manage nodes attached to those networks. The next section explains how to do this and what choices are to be made.

Cluster Configuration

MAAS automatically recognises the network interfaces on each cluster controller. Some of these will be connected to networks where you want to manage nodes. We recommend letting your cluster controller act as a DHCP server for these networks, by configuring those interfaces in the MAAS user interface.

As an example, we will configure the cluster controller to manage a network on interface eth0. Click on the edit icon for eth0 to open the interface configuration page. Here you can select to what extent you want the cluster controller to manage the network:

DHCP only - this will run a DHCP server on your cluster.
DHCP and DNS - this will run a DHCP server on the cluster and configure the DNS server included with the region controller so that it can be used to look up hosts on this network by name (recommended).

You cannot have DNS management without DHCP management because MAAS relies on its own DHCP server’s leases file to work out the IP address of nodes in the cluster. If you set the interface to be managed, you now need to provide all of the usual DHCP details in the input fields below. Once done, click “Save interface”. The cluster controller will now be able to boot nodes on this network.

There is also an option to leave the network unmanaged. Use this for networks where you don’t want to manage any nodes, or where you do want to manage nodes but want to use an existing DHCP service on your network.

A single cluster controller can manage more than one network, each from a different network interface on the cluster-controller server. This may help you scale your cluster to larger numbers of nodes, or it may be a requirement of your network architecture.

Enlisting nodes

Now that the MAAS controller is running, we need to make the nodes aware of MAAS and vice versa. With MAAS controlling DHCP and nodes capable of PXE booting, this is straightforward.

Automatic Discovery

With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the image, contact the MAAS server and shut down.

During this process, the MAAS server will be passed information about the node, including its architecture, MAC address and other details, which will be stored in the database of nodes. You can accept and commission the nodes via the web interface. When the nodes have been accepted, the selected series of Ubuntu will be installed. You may also accept and commission all nodes from the command line. This requires that you first log in with the API key (see Appendix I), then run the command:

maas-cli maas-profile nodes accept-all
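
In the command above, 'maas-profile' is the name of a profile created when you log in to the API. For reference, a hedged example of that login step (the server address matches the earlier example and the API key is a placeholder; see Appendix I for obtaining your key) is:

maas-cli login maas-profile http://172.18.100.1/MAAS/api/1.0 <API-key>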

Once commissioned, each node's status will be updated to "Ready". You can check the results of the commissioning scripts by clicking on the node name and then on the link below the heading "Commissioning output". The screen will show a list of files and their results - you can examine the output further by clicking on the status of any of the files.

Manually adding nodes

If your nodes are not capable of booting from PXE images, they can be manually registered with MAAS. On the main web interface screen, click on the "Add Node" button. This will load a new page where you can manually enter details about the node, including its MAC address, which is used to identify the node when it contacts the DHCP server.

Power management

MAAS supports several types of power management. To configure power management, click on an individual node entry, then click on the "Edit" button. Select the power management type from the drop-down list and add the appropriate power management details. If you have a large number of nodes, it should be possible to script this process using the MAAS CLI; see Appendix I for more details.
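
As an illustrative sketch only (the exact power_parameters keys depend on your MAAS version and power driver, and the system ID, address and credentials below are placeholders), setting IPMI power details for a node from the command line might look like:

maas-cli maas-profile node update <system-id> power_type=ipmi power_parameters_power_address=10.0.0.42 power_parameters_power_user=admin power_parameters_power_pass=<password>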

Without power management, MAAS will be unable to power on nodes when they are required.

Preparing MAAS for Juju and OpenStack using Simplestreams

When Juju bootstraps a cloud, it needs two critical pieces of information:

1. The UUID of the image to use when starting new compute instances.
2. The URL from which to download the correct version of a tools tarball.

This necessary information is stored in a JSON metadata format called "simplestreams". For supported public cloud services such as Amazon Web Services, HP Cloud and Azure, no action is required by the end user. However, those setting up a private cloud, or who want to change how things work (e.g. use a different Ubuntu image), can create their own metadata once they understand a little about how it works.

The simplestreams format is used to describe related items in a structured fashion (see the project lp:simplestreams for more details on the implementation). Below we discuss how Juju determines which metadata to use, and how to create your own images and tools and have Juju use them instead of the defaults.
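
As a preview of the tooling involved, a hedged sketch using the metadata plugin that ships with Juju 1.x is shown below; the image ID, series, region, endpoint and output directory values are placeholders for your own cloud:

juju metadata generate-image -i <image-id> -s trusty -r <region> -u http://<keystone-endpoint>:5000/v2.0/ -d ~/simplestreams
juju metadata generate-tools -d ~/simplestreams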

Basic Workflow

Whether for images or tools, Juju uses a search path to try to find suitable metadata. The path components (in order of lookup) are:

1. User supplied location (specified by the tools-metadata-url or image-metadata-url config settings; see the example after this list).
2. The environment's cloud storage.
3. Provider-specific locations (e.g. the keystone endpoint on OpenStack).
4. A web location with metadata for supported public clouds (https://streams.canonical.com).
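
For the first path component, the settings live in your environment definition. A minimal hypothetical excerpt from ~/.juju/environments.yaml (Juju 1.x; the environment name and URLs are placeholders) might look like:

my-maas:
    type: maas
    tools-metadata-url: http://example.com/juju/tools
    image-metadata-url: http://example.com/juju/images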

Metadata may be inline-signed or unsigned. A signed metadata file is indicated by the '.sjson' extension. Each location in the path is first searched for signed metadata; if none is found, unsigned metadata is attempted before moving on to the next path location.

Juju ships with public keys used to validate the integrity of image and tools metadata obtained from https://streams.canonical.com. So, out of the box, Juju will "Just Work" with any supported public cloud, using signed metadata. Setting up metadata for a private (e.g. OpenStack) cloud requires metadata to be generated using tools which ship with Juju.