Flocker Documentation Release 0.3.0dev1

ClusterHQ

October 10, 2014

Contents

1 Introduction to Flocker
    1.1 Motivation for Building Flocker
    1.2 Architecture
    1.3 Initial Implementation Strategy
    1.4 User Experience

2 Getting Started
    2.1 Installing Flocker
    2.2 Tutorial: Deploying and Migrating MongoDB
    2.3 Flocker Application Examples
    2.4 Flocker Feature Examples

3 Advanced Documentation
    3.1 What's New
    3.2 Using Flocker
    3.3 Configuring Flocker
    3.4 Volume Manager
    3.5 Data-Oriented Clustering
    3.6 Setting up External Routing
    3.7 Debugging
    3.8 Cleaning Up

4 Getting Involved
    4.1 Contributing to Flocker
    4.2 Infrastructure

5 Areas of Potential Future Development
    5.1 Flocker Volume Manager

6 FAQ
    6.1 ZFS
    6.2 Current Functionality
    6.3 Future Functionality

7 Authors


Flocker is a data volume manager and multi-host cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux. This means that you can run your databases, queues and key-value stores in Docker and move them around as easily as the rest of your app.

With Flocker's command line tools and a simple configuration language, you can deploy your Docker-based applications onto one or more Linux hosts. Once deployed, your applications will have access to the volumes you have configured for them. Those volumes will follow your containers when you use Flocker to move them between different hosts in your Flocker cluster.


CHAPTER 1

Introduction to Flocker

1.1 Motivation for Building Flocker

Flocker lets you move your Docker containers and their data together between Linux hosts. This means that you can run your databases, queues and key-value stores in Docker and move them around as easily as the rest of your app. Even stateless apps depend on many stateful services, and currently running these services in Docker containers in production is nearly impossible. Flocker aims to solve this problem by providing an orchestration framework that allows you to port both your stateful and stateless containers between environments.

Docker allows for multiple isolated, reproducible application environments on a single node: "containers". Application state can be stored on a local disk in "volumes" attached to containers. And containers can talk to each other and the external world via specified ports. But what happens if you have more than one node?

• Where do containers run?
• How do you talk to the container you care about?
• How do containers across multiple nodes talk to each other?
• How does application state work if you move containers around?

The diagram below provides a high level representation of how Flocker addresses these questions.


1.2 Architecture

Below is a high-level overview of Flocker’s architecture. For more information, you can follow along with a tutorial that walks you through deploying and migrating MongoDB or read more in our advanced documentation.

1.2.1 Flocker - Orchestration

• Flocker can run multiple containers on multiple nodes.
• Flocker offers a configuration language to specify what to run and where to run it.

1.2.2 Flocker - Routing

• Container configuration includes externally visible TCP port numbers.
• Connect to any node in a Flocker cluster and traffic is routed to the node hosting the appropriate container (based on port).
• Your external domain (www.example.com) can be configured to point at all nodes in the Flocker cluster (192.0.2.0, 192.0.2.1).

1.2.3 Flocker - Application State

• Flocker manages ZFS filesystems as Docker volumes and attaches them to your containers.
• Flocker provides tools for copying those volumes between nodes.
• If an application container is moved from one node to another, Flocker automatically moves the volume with it.

1.2.4 Application Configuration

• Application configuration describes what you want to run in a container:
  – the Docker image to run
  – an optional volume mount point
  – externally "routed" ports
• This configuration is expected to be shared between development, staging, production, and other environments.
• Flocker 0.1 does not support automatic re-deployment of application configuration changes.
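
For illustration, an application configuration of the kind used later in the tutorial looks roughly like the sketch below; the application name, image, ports, and mount point here are placeholder values:

"version": 1
"applications":
  "my-app":
    "image": "clusterhq/mongodb"
    "ports":
    - "internal": 27017
      "external": 27017
    "volume":
      "mountpoint": "/data/db"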

1.2.5 Deployment Configuration

• Deployment configuration describes how you want your containers deployed:
  – which nodes run which containers.
• This configuration can vary between development, staging, production, and other environments:
  – A developer might want to deploy all of the containers on their laptop.
  – Production might put the database on one node, the web server on another node, and so on.
• Reacting to changes to this configuration is the primary focus of Flocker 0.1.
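
A matching deployment configuration, in the same format used later in the tutorial, maps node IP addresses to the applications they should run. The addresses below are placeholders, and an empty list marks a node with nothing deployed on it:

"version": 1
"nodes":
  "192.0.2.0": ["my-app"]
  "192.0.2.1": []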


1.3 Initial Implementation Strategy

• This is the 0.1 approach.
• Future approaches will be very different; feedback is welcome.
• All functionality is provided as short-lived, manually invoked processes.
• flocker-deploy connects to each node over SSH and runs flocker-reportstate to gather the cluster state.
• flocker-deploy then connects to each node over SSH and runs flocker-changestate to make the necessary deployment changes.
• Nodes might connect to each other over SSH to copy volume data to the necessary place.

1.3.1 flocker-changestate

• This is installed on nodes participating in the Flocker cluster.
• Accepts the desired global configuration and current global state.
• Also looks at local state: running containers, configured network proxies, etc.
• Makes changes to local state so that it complies with the desired global configuration:
  – Start or stop containers.
  – Push volume data to other nodes.
  – Add or remove routing configuration.

1.3.2 Managing Volumes

• Volumes are ZFS filesystems.
• Volumes are attached to a Docker "data" container.
• Flocker automatically associates the "data" container's volumes with the actual container.
  – Association is done based on container names.
• Data model:
  – Volumes are owned by a specific node. Node A can push a copy to node B, but node A still owns the volume. Node B may not modify its copy.
  – Volumes can be "handed off" to another node, i.e. ownership is changed. Node A can hand off the volume to node B; then node B is the owner and can modify the volume, and node A no longer can.
• Volumes are pushed and handed off so as to follow the containers they are associated with.
  – This happens automatically when flocker-deploy runs with a new deployment configuration.

1.3.3 Managing Routes

• Containers claim TCP port numbers in the application configuration that defines them.
• Connections to that TCP port on the node that is running the container are proxied (NAT'd) into the container for whatever software is listening for them there.


• Connections to that TCP port on any other node in the Flocker cluster are proxied (NAT'd) to the node that is running the container.
• Proxying is done using iptables.

1.4 User Experience

• Flocker provides a command-line interface for manually deploying or re-deploying containers across nodes.
• The tool operates on two distinct pieces of configuration:
  – Application
  – Deployment
• Your sysadmin runs a command like flocker-deploy deployment-config.yml application-config.yml on their laptop.

CHAPTER 2

Getting Started

Flocker is a lightweight volume and container manager. It lets you:

• Define your application as a set of connected Docker containers
• Deploy them to one or multiple hosts
• Easily migrate them along with their data between hosts

The goal of Flocker is to simplify the operational tasks that come along with running databases, key-value stores, queues and other data-backed services in containers. This Getting Started guide will walk you step-by-step through installing Flocker and provide some tutorials that demonstrate the essential features of Flocker.

2.1 Installing Flocker

As a user of Flocker you will need to install the flocker-cli package which provides command line tools to control the cluster. This should be installed on a machine with SSH credentials to control the cluster nodes (e.g., if you use our Vagrant setup then the machine which is running Vagrant). There is also a flocker-node package which is installed on each node in the cluster. It contains the flocker-changestate, flocker-reportstate, and flocker-volume utilities. These utilities are called by flocker-deploy (via SSH) to install and migrate Docker containers and their data volumes.

Note: For now the flocker-node package is pre-installed by the Vagrant configuration in the tutorial.

Note: If you’re interested in developing Flocker (as opposed to simply using it) see Contributing to Flocker.

2.1.1 Installing flocker-cli

Linux

Before you install flocker-cli you will need a compiler, Python 2.7, and the virtualenv Python utility installed. On Fedora 20 you can install these by running:

alice@mercury:~$ sudo yum install @buildsys-build python python-devel python-virtualenv

On Ubuntu (or another Debian-based distribution) you can run:

alice@mercury:~$ sudo apt-get install gcc python2.7 python-virtualenv python2.7-dev


Then run the following script to install flocker-cli:

linux-install.sh

#!/bin/sh

# Create a virtualenv, an isolated Python environment, in a new directory called
# "flocker-tutorial":
virtualenv --python=/usr/bin/python2.7 flocker-tutorial

# Upgrade the pip Python package manager to its latest version inside the
# virtualenv. Some older versions of pip have issues installing Python wheel
# packages.
flocker-tutorial/bin/pip install --upgrade pip

# Install flocker-cli and dependencies inside the virtualenv:
echo "Installing Flocker and dependencies, this may take a few minutes with no output to the terminal..."
flocker-tutorial/bin/pip install --quiet https://storage.googleapis.com/archive.clusterhq.com/downloads/flocker/Flocker-0.3.0dev1-py2-none-any.whl
echo "Done!"

Save the script to a file and then run it:

alice@mercury:~$ sh linux-install.sh
...
alice@mercury:~$

The flocker-deploy command line program will now be available in flocker-tutorial/bin/:

alice@mercury:~$ cd flocker-tutorial
alice@mercury:~/flocker-tutorial$ bin/flocker-deploy --version
0.3.0dev1
alice@mercury:~/flocker-tutorial$

If you want to omit the prefix path you can add the appropriate directory to your $PATH. You'll need to do this every time you start a new shell.

alice@mercury:~/flocker-tutorial$ export PATH="${PATH:+${PATH}:}${PWD}/bin"
alice@mercury:~/flocker-tutorial$ flocker-deploy --version
0.3.0dev1
alice@mercury:~/flocker-tutorial$

OS X

Install the Homebrew package manager.

Make sure Homebrew has no issues:

alice@mercury:~$ brew doctor
...
alice@mercury:~$

Fix anything which brew doctor recommends that you fix by following the instructions it outputs.

Add the ClusterHQ/flocker tap to Homebrew and install flocker:

alice@mercury:~$ brew tap ClusterHQ/tap
...
alice@mercury:~$ brew install flocker-0.3.0dev1
...
alice@mercury:~$ brew test flocker-0.3.0dev1
...
alice@mercury:~$

You can see the Homebrew recipe in the homebrew-tap repository.

The flocker-deploy command line program will now be available:

alice@mercury:~$ flocker-deploy --version
0.3.0dev1
alice@mercury:~$

2.2 Tutorial: Deploying and Migrating MongoDB

The goal of this tutorial is to teach you to use Flocker’s container, network, and volume orchestration functionality. By the time you reach the end of the tutorial you will know how to use Flocker to create an application. You will also know how to expose that application to the network and how to move it from one host to another. Finally you will know how to configure a persistent data volume for that application. This tutorial is based around the setup of a MongoDB service. Flocker is a generic container manager. MongoDB is used only as an example here. Any application you can deploy into Docker you can manage with Flocker. If you have any feedback or problems, you can Talk to Us.

2.2.1 Before You Begin

Requirements

To replicate the steps demonstrated in this tutorial, you will need:

• Linux, FreeBSD, or OS X
• Vagrant (1.6.2 or newer)
• VirtualBox
• At least 10GB disk space available for the two virtual machines
• The OpenSSH client (the ssh, ssh-agent, and ssh-add command-line programs)
• bash
• The mongo MongoDB interactive shell (see below for installation instructions)

You will also need flocker-cli installed (providing the flocker-deploy command). See Installing flocker-cli.

Setup

Installing MongoDB

The MongoDB client can be installed through the various package managers for Linux, FreeBSD and OS X. If you do not already have the client on your machine, you can install it by running the appropriate command for your system.

Ubuntu

alice@mercury:~$ sudo apt-get install mongodb-clients
...
alice@mercury:~$


Red Hat / Fedora

alice@mercury:~$ sudo yum install mongodb
...
alice@mercury:~$

OS X

Install Homebrew, then run:

alice@mercury:~$ brew update
...
alice@mercury:~$ brew install mongodb
...
alice@mercury:~$

Other Systems

See the official MongoDB installation guide for your system.

Creating Vagrant VMs Needed for Flocker

Note: If you already have a tutorial environment from a previous release see Upgrading the Vagrant Environment.

Before you can deploy anything with Flocker you'll need a node onto which to deploy it. To make this easier, this tutorial uses Vagrant to create two VirtualBox VMs. These VMs serve as hosts on which Flocker can run Docker. Flocker does not require Vagrant or VirtualBox. You can run it on other technology (e.g., VMware), on clouds (e.g., EC2), or directly on physical hardware. For your convenience, this tutorial includes a Vagrantfile which will boot the necessary VMs. Flocker and its dependencies will be installed on these VMs the first time you start them.

One important thing to note is that these VMs are statically assigned the IPs 172.16.255.250 (node1) and 172.16.255.251 (node2). These two IP addresses will be used throughout the tutorial and configuration files. If these addresses conflict with your local network configuration, you can edit the Vagrantfile to use different values. Note that you will need to make the same substitution in commands used throughout the tutorial.

Note: The two virtual machines are each assigned a 10GB virtual disk. The underlying disk files grow to about 5GB. So you will need at least 10GB of free disk space on your workstation.

1. Create a tutorial directory:

alice@mercury:~/$ mkdir flocker-tutorial
alice@mercury:~/$ cd flocker-tutorial
alice@mercury:~/flocker-tutorial$

2. Download the Vagrant configuration file by right clicking on the link below. Save it in the flocker-tutorial directory and preserve its filename.

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# This requires Vagrant 1.6.2 or newer (earlier versions can't reliably
# configure the Fedora 20 network stack).
Vagrant.require_version ">= 1.6.2"

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
...

alice@mercury:~/flocker-tutorial$ ls
Vagrantfile
alice@mercury:~/flocker-tutorial$

3. Use vagrant up to start and provision the VMs:

alice@mercury:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
==> node1: Importing base box 'clusterhq/flocker-dev'...
... lots of output ...
==> node2: ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'
alice@mercury:~/flocker-tutorial$

This step may take several minutes or more as it downloads the Vagrant image, boots up two nodes and downloads the Docker image necessary to run the tutorial. Your network connectivity and CPU speed will affect how long this takes. Fortunately this extra work is only necessary the first time you bring up a node (until you destroy it).

4. After vagrant up completes you may want to verify that the two VMs are really running and accepting SSH connections:

alice@mercury:~/flocker-tutorial$ vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
...
alice@mercury:~/flocker-tutorial$ vagrant ssh -c hostname node1
node1
Connection to 127.0.0.1 closed.
alice@mercury:~/flocker-tutorial$ vagrant ssh -c hostname node2
node2
Connection to 127.0.0.1 closed.
alice@mercury:~/flocker-tutorial$

5. If all goes well, the next step is to configure your SSH agent. This will allow Flocker to authenticate itself to the VMs.

If you're not sure whether you already have an SSH agent running, ssh-add can tell you. If you don't, you'll see an error:

alice@mercury:~/flocker-tutorial$ ssh-add
Could not open a connection to your authentication agent.
alice@mercury:~/flocker-tutorial$

If you do, you'll see no output:

alice@mercury:~/flocker-tutorial$ ssh-add
alice@mercury:~/flocker-tutorial$

If you don't have an SSH agent running, start one:

alice@mercury:~/flocker-tutorial$ eval $(ssh-agent)
Agent pid 27233
alice@mercury:~/flocker-tutorial$

6. Finally, add the Vagrant key to your agent:


alice@mercury:~/flocker-tutorial$ ssh-add ~/.vagrant.d/insecure_private_key
alice@mercury:~/flocker-tutorial$

You now have two VMs running and easy SSH access to them. This completes the Vagrant-related setup.

Upgrading the Vagrant Environment

The Vagrantfile used in this tutorial installs an RPM package called flocker-node on both nodes. If you already have a tutorial environment from a previous release, you'll need to ensure that both tutorial nodes are running the latest version of flocker-node before continuing with the following tutorials.

First check the current Flocker version on the nodes. You can do this by logging into each node and running the flocker-reportstate command with a --version argument:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 flocker-reportstate --version

Only proceed with the following steps if you find that you are running a version of Flocker older than 0.3.0dev1. In that case you need to rebuild the tutorial environment, which ensures that you have the latest Flocker version and that you are using a pristine tutorial environment.

Warning: This will completely remove the existing nodes and their data.

If you have the original Vagrantfile, change to its parent directory and run vagrant destroy:

alice@mercury:~/flocker-tutorial$ vagrant destroy
    node2: Are you sure you want to destroy the 'node2' VM? [y/N] y
==> node2: Forcing shutdown of VM...
==> node2: Destroying VM and associated drives...
==> node2: Running cleanup tasks for 'shell' provisioner...
    node1: Are you sure you want to destroy the 'node1' VM? [y/N] y
==> node1: Forcing shutdown of VM...
==> node1: Destroying VM and associated drives...
==> node1: Running cleanup tasks for 'shell' provisioner...
alice@mercury:~/flocker-tutorial$

Next delete the cached SSH host keys for the virtual machines, as they will change when new VMs are created. Failing to do so will cause SSH to think there is a security problem when you connect to the recreated VMs.

alice@mercury:~/flocker-tutorial$ ssh-keygen -f "$HOME/.ssh/known_hosts" -R 172.16.255.250
alice@mercury:~/flocker-tutorial$ ssh-keygen -f "$HOME/.ssh/known_hosts" -R 172.16.255.251

Delete the original Vagrantfile, then download the latest Vagrantfile and run vagrant up:

alice@mercury:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
alice@mercury:~/flocker-tutorial$

Alternatively, if you do not have the original Vagrantfile or if the vagrant destroy command fails, you can remove the existing nodes directly from VirtualBox. The two virtual machines will have names like flocker-tutorial_node1_1410450919851_28614 and flocker-tutorial_node2_1410451102837_79031.


2.2.2 Moving Applications

Note: If you haven’t already, make sure to install the flocker-cli package before continuing with this tutorial.

Starting an Application

Let's look at an extremely simple Flocker configuration for one node running a container containing a MongoDB server.

minimal-application.yml

"version": 1
"applications":
  "mongodb-example":
    "image": "clusterhq/mongodb"

minimal-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["mongodb-example"]
  "172.16.255.251": []

Notice that we mention the node that has no applications deployed on it to ensure that flocker-deploy knows that it exists. If we hadn't done that, certain actions that might need to be taken on that node would not happen, e.g. stopping currently running applications.

Next take a look at what containers Docker is running on the VM you just created. The node IPs are those which were specified earlier in the Vagrantfile:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
alice@mercury:~/flocker-tutorial$

From this you can see that there are no running containers. To fix this, use flocker-deploy with the simple configuration files given above and then check again:

alice@mercury:~/flocker-tutorial$ flocker-deploy minimal-deployment.yml minimal-application.yml
alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE                      COMMAND   CREATED         STATUS         PORTS                  NAMES
4d117c7e653e   clusterhq/mongodb:latest   mongod    2 seconds ago   Up 1 seconds   27017/tcp, 28017/tcp   mongodb-example
alice@mercury:~/flocker-tutorial$

flocker-deploy has made the necessary changes to make your node match the state described in the configuration files you supplied.

Moving an Application

Let's see how flocker-deploy can move this application to a different VM. Recall that the Vagrant configuration supplied in the setup portion of the tutorial started two VMs. Copy the deployment configuration file and edit it so that it indicates the application should run on the second VM instead of the first. The only change necessary is to replace the original IP address, 172.16.255.250, with the address of the other node, 172.16.255.251. The new file should be named minimal-deployment-moved.yml.

minimal-deployment-moved.yml

"version": 1
"nodes":
  "172.16.255.250": []
  "172.16.255.251": ["mongodb-example"]

Note that nothing in the application configuration file needs to change. Moving the application only involves updating the deployment configuration.

Use flocker-deploy again to enact the change:

alice@mercury:~/flocker-tutorial$ flocker-deploy minimal-deployment-moved.yml minimal-application.yml
alice@mercury:~/flocker-tutorial$

docker ps shows that no containers are running on 172.16.255.250:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
alice@mercury:~/flocker-tutorial$

and that MongoDB has been successfully moved to 172.16.255.251:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE                      COMMAND   CREATED         STATUS         PORTS                  NAMES
4d117c7e653e   clusterhq/mongodb:latest   mongod    3 seconds ago   Up 2 seconds   27017/tcp, 28017/tcp   mongodb-example
alice@mercury:~/flocker-tutorial$

At this point you have successfully deployed a MongoDB server in a container on your VM. You’ve also seen how Flocker can move an existing container between hosts. There’s no way to interact with it apart from looking at the docker ps output yet. In the next section of the tutorial you’ll see how to expose container services on the host’s network interface.

2.2.3 Exposing Ports

Each application running in a Docker container has its own isolated networking stack. To communicate with an application running inside the container we need to forward traffic from a network port on the node where the container is located to the appropriate port within the container. Flocker takes this one step further: an application is reachable on all nodes in the cluster, no matter where it is currently located.

Let's start a MongoDB container that exposes the database to the external world.

port-application.yml

"version": 1
"applications":
  "mongodb-port-example":
    "image": "clusterhq/mongodb"
    "ports":
    - "internal": 27017
      "external": 27017

port-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["mongodb-port-example"]
  "172.16.255.251": []

We will once again run these configuration files with flocker-deploy:


alice@mercury:~/flocker-tutorial$ flocker-deploy port-deployment.yml port-application.yml
alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE                      COMMAND   CREATED         STATUS         PORTS                  NAMES
4d117c7e653e   clusterhq/mongodb:latest   mongod    2 seconds ago   Up 1 seconds   27017/tcp, 28017/tcp   mongodb-port-example
alice@mercury:~/flocker-tutorial$

This time we can communicate with the MongoDB application by connecting to the node where it is running. Using the mongo command line tool we will insert an item into a database and check that it can be found. You should try to follow along and do these database inserts as well.

Note: To keep your download for the tutorial as speedy as possible, we've bundled the latest development release of MongoDB into a micro-sized Docker image. You should not use this image for production.

If you get a connection refused error, try again after a few seconds; the application might take some time to fully start up.

alice@mercury:~/flocker-tutorial$ mongo 172.16.255.250
MongoDB shell version: 2.4.9
connecting to: 172.16.255.250/test
> use example;
switched to db example
> db.records.insert({"flocker": "tested"})
> db.records.find({})
{ "_id" : ObjectId("53c958e8e571d2046d9b9df9"), "flocker" : "tested" }

We can also connect to the other node where it isn't running and the traffic will get routed to the correct node:

alice@mercury:~/flocker-tutorial$ mongo 172.16.255.251
MongoDB shell version: 2.4.9
connecting to: 172.16.255.251/test
> use example;
switched to db example
> db.records.find({})
{ "_id" : ObjectId("53c958e8e571d2046d9b9df9"), "flocker" : "tested" }

Since the application is transparently accessible from both nodes you can configure a DNS record that points at both IPs and access the application regardless of its location. See Setting up External Routing for more details. At this point you have successfully deployed a MongoDB server and communicated with it. You’ve also seen how external users don’t need to worry about applications’ location within the cluster. In the next section of the tutorial you’ll learn how to ensure that the application’s data moves along with it, the final step to running stateful applications on a cluster.

2.2.4 Data Volumes

The Problem

By default moving an application from one node to another does not move its data along with it. Before proceeding let's see in more detail what the problem is by continuing the Exposing Ports example. Recall that we inserted some data into the database. Next we'll use a new configuration file that moves the application to a different node.

port-deployment-moved.yml

"version": 1
"nodes":
  "172.16.255.250": []
  "172.16.255.251": ["mongodb-port-example"]

alice@mercury:~/flocker-tutorial$ flocker-deploy port-deployment-moved.yml port-application.yml
alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE                      COMMAND   CREATED         STATUS         PORTS                  NAMES
4d117c7e653e   clusterhq/mongodb:latest   mongod    2 seconds ago   Up 1 seconds   27017/tcp, 28017/tcp   mongodb-port-example
alice@mercury:~/flocker-tutorial$

If we query the database, the records we previously inserted have disappeared! The application has moved but the data has been left behind.

alice@mercury:~/flocker-tutorial$ mongo 172.16.255.251
MongoDB shell version: 2.4.9
connecting to: 172.16.255.251/test
> use example;
switched to db example
> db.records.find({})
>

The Solution

Unlike many other Docker frameworks, Flocker has a solution for this problem: a ZFS-based volume manager. An application with a Flocker volume configured will move the data along with the application, transparently and with no additional intervention on your part.

We'll create a new configuration for the cluster, this time adding a volume to the MongoDB container.

volume-application.yml

"version": 1
"applications":
  "mongodb-volume-example":
    "image": "clusterhq/mongodb"
    "ports":
    - "internal": 27017
      "external": 27017
    "volume":
      # The location within the container where the data volume will be
      # mounted:
      "mountpoint": "/data/db"

volume-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["mongodb-volume-example"]
  "172.16.255.251": []

Then we'll run these configuration files with flocker-deploy:

alice@mercury:~/flocker-tutorial$ flocker-deploy volume-deployment.yml volume-application.yml
alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE                      COMMAND   CREATED         STATUS         PORTS                  NAMES
4d117c7e653e   clusterhq/mongodb:latest   mongod    2 seconds ago   Up 1 seconds   27017/tcp, 28017/tcp   mongodb-volume-example
alice@mercury:~/flocker-tutorial$

Once again we’ll insert some data into the database:


alice@mercury:~/flocker-tutorial$ mongo 172.16.255.250
MongoDB shell version: 2.4.9
connecting to: 172.16.255.250/test
> use example;
switched to db example
> db.records.insert({"the data": "it moves"})
> db.records.find({})
{ "_id" : ObjectId("53d80b08a3ad4df94a2a72d6"), "the data" : "it moves" }

Next we'll move the application to the other node.

volume-deployment-moved.yml

"version": 1
"nodes":
  "172.16.255.250": []
  "172.16.255.251": ["mongodb-volume-example"]

alice@mercury:~/flocker-tutorial$ flocker-deploy volume-deployment-moved.yml volume-application.yml
alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE                      COMMAND   CREATED         STATUS         PORTS                  NAMES
4d117c7e653e   clusterhq/mongodb:latest   mongod    2 seconds ago   Up 1 seconds   27017/tcp, 28017/tcp   mongodb-volume-example
alice@mercury:~/flocker-tutorial$

This time, however, the data has moved with the application:

alice@mercury:~/flocker-tutorial$ mongo 172.16.255.251
MongoDB shell version: 2.4.9
connecting to: 172.16.255.251/test
> use example;
switched to db example
> db.records.find({})
{ "_id" : ObjectId("53d80b08a3ad4df94a2a72d6"), "the data" : "it moves" }

At this point you have successfully deployed a MongoDB server and communicated with it. You've also seen how Flocker allows you to move an application's data to different locations in a cluster as the application is moved. You now know how to run stateful applications in a Docker cluster using Flocker.

The virtual machines you are running will be useful for testing Flocker and running other examples in the documentation. If you would like to shut them down temporarily you can run vagrant halt in the tutorial directory. You can then restart them by running vagrant up. If you would like to completely remove the virtual machines you can run vagrant destroy.

2.3 Flocker Application Examples

You can find below examples of how to deploy some common applications with Flocker. Each example includes instructions and Flocker configuration files to download that can be used immediately with the virtual machines created in the MongoDB tutorial.

2.3.1 Using Environment Variables

MySQL Example

Flocker supports passing environment variables to a container via its Application Configuration. This example will use a configured environment variable to set the root user password for a MySQL service running inside a container.


Create the Virtual Machines

You can reuse the Virtual Machines defined in the Vagrant configuration for the MongoDB tutorial. If you have since shut down or destroyed those VMs, boot them up again:

alice@mercury:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
==> node1: Importing base box 'clusterhq/flocker-dev'...
...

Download the Docker Image

The Docker image used by this example is quite large, so you should pre-fetch it to your nodes.

alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull mysql:5.6.17
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull mysql:5.6.17
...
alice@mercury:~/flocker-tutorial$

Note: The mysql:5.6.17 Docker image is used in this example for compatibility with ZFS. Newer versions of the MySQL Docker image enable asynchronous I/O, which is not yet supported by ZFS on Linux.

Launch MySQL

Download and save the following configuration files to the flocker-tutorial directory:

mysql-application.yml

"version": 1
"applications":
  "mysql-volume-example":
    "image": "mysql:5.6.17"
    "environment":
      "MYSQL_ROOT_PASSWORD": "clusterhq"
    "ports":
    - "internal": 3306
      "external": 3306
    "volume":
      "mountpoint": "/var/lib/mysql"

mysql-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["mysql-volume-example"]
  "172.16.255.251": []

Now run flocker-deploy to deploy the MySQL application to the target Virtual Machine:

alice@mercury:~/flocker-tutorial$ flocker-deploy mysql-deployment.yml mysql-application.yml
alice@mercury:~/flocker-tutorial$


Connect to MySQL & Insert Sample Data

You can now use the mysql client on the host machine to connect to the MySQL server running inside the container. Connect using the client to the IP address of the Virtual Machine. In this case the example has exposed the default MySQL port 3306, so it is not required to specify a connection port on the command line:

alice@mercury:~/flocker-tutorial$ mysql -h172.16.255.250 -uroot -pclusterhq
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> CREATE DATABASE example;
Query OK, 1 row affected (0.00 sec)

mysql> USE example;
Database changed

mysql> CREATE TABLE `testtable` (`id` INT NOT NULL AUTO_INCREMENT, `name` VARCHAR(45) NULL, PRIMARY KEY (`id`)) ENGINE = MyISAM;
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO `testtable` VALUES('','flocker test');
Query OK, 1 row affected, 1 warning (0.01 sec)

mysql> quit
Bye
alice@mercury:~/flocker-tutorial$

Create a New Deployment Configuration and Move the Application

Download and save the following configuration file to your flocker-tutorial directory:

mysql-deployment-moved.yml

"version": 1
"nodes":
  "172.16.255.250": []
  "172.16.255.251": ["mysql-volume-example"]

Then run flocker-deploy to move the MySQL application along with its data to the new destination host:

alice@mercury:~/flocker-tutorial$ flocker-deploy mysql-deployment-moved.yml mysql-application.yml
alice@mercury:~/flocker-tutorial$

Verify Data Has Moved

Confirm the application has moved to the target Virtual Machine:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE          COMMAND            CREATED         STATUS         PORTS                    NAMES
51b5b09a46bb   mysql:5.6.17   /bin/sh -c /init   7 seconds ago   Up 6 seconds   0.0.0.0:3306->3306/tcp   mysql-volume-example
alice@mercury:~/flocker-tutorial$

And confirm it is no longer running on the original host:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
alice@mercury:~/flocker-tutorial$


You can now connect to MySQL on its host and confirm the sample data has also moved:

alice@mercury:~/flocker-tutorial$ mysql -h172.16.255.251 -uroot -pclusterhq
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| example            |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.02 sec)

mysql> USE example;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> SELECT * FROM `testtable`;
+----+--------------+
| id | name         |
+----+--------------+
|  1 | flocker test |
+----+--------------+
1 row in set (0.01 sec)

mysql>

This concludes the MySQL example.

2.3.2 Running PostgreSQL

Create the Virtual Machines

You can reuse the Virtual Machines defined in the Vagrant configuration for the MongoDB tutorial. If you have since shut down or destroyed those VMs, boot them up again:

alice@mercury:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
==> node1: Importing base box 'clusterhq/flocker-dev'...
...

Download the Docker Image

The Docker image used by this example is quite large, so you should pre-fetch it to your nodes.

alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull postgres
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull postgres
...
alice@mercury:~/flocker-tutorial$


Launch PostgreSQL

Download and save the following configuration files to your flocker-tutorial directory:

postgres-application.yml

"version": 1
"applications":
  "postgres-volume-example":
    "image": "postgres"
    "ports":
    - "internal": 5432
      "external": 5432
    "volume":
      # The location within the container where the data volume will be
      # mounted; see https://github.com/docker-library/postgres/blob/docker/Dockerfile.template
      "mountpoint": "/var/lib/postgresql/data"

postgres-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["postgres-volume-example"]
  "172.16.255.251": []

Now run flocker-deploy to deploy the PostgreSQL application to the target Virtual Machine:

alice@mercury:~/flocker-tutorial$ flocker-deploy postgres-deployment.yml postgres-application.yml
alice@mercury:~/flocker-tutorial$

Confirm the container is running on its destination host:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE             COMMAND            CREATED         STATUS         PORTS                    NAMES
f6ee0fbd0446   postgres:latest   /bin/sh -c /init   7 seconds ago   Up 6 seconds   0.0.0.0:5432->5432/tcp   postgres-volume-example
alice@mercury:~/flocker-tutorial$

Connect to PostgreSQL

You can now use the psql client on the host machine to connect to the PostgreSQL server running inside the container. Connect using the client to the IP address of the Virtual Machine, using the port number exposed in the application configuration:

alice@mercury:~/flocker-tutorial$ psql postgres --host 172.16.255.250 --port 5432 --username postgres
psql (9.3.5)
Type "help" for help.

postgres=#

This verifies the PostgreSQL service is successfully running inside its container.

Insert a Row into the Database

postgres=# CREATE DATABASE flockertest;
CREATE DATABASE
postgres=# \connect flockertest;
psql (9.3.5)
You are now connected to database "flockertest" as user "postgres".
flockertest=# CREATE TABLE testtable (testcolumn int);
CREATE TABLE
flockertest=# INSERT INTO testtable (testcolumn) VALUES (3);
INSERT 0 1
flockertest=# SELECT * FROM testtable;
 testcolumn
------------
          3
(1 row)

flockertest=# \quit

Move the Application

Download and save the following configuration file to your flocker-tutorial directory:

postgres-deployment-moved.yml

"version": 1
"nodes":
  "172.16.255.250": []
  "172.16.255.251": ["postgres-volume-example"]

Then run flocker-deploy to move the PostgreSQL application along with its data to the new destination host:

alice@mercury:~/flocker-tutorial$ flocker-deploy postgres-deployment-moved.yml postgres-application.yml
alice@mercury:~/flocker-tutorial$

Verify Data Has Moved

Confirm the application has moved to the target Virtual Machine:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE                       COMMAND            CREATED         STATUS         PORTS                    NAMES
51b5b09a46bb   clusterhq/postgres:latest   /bin/sh -c /init   7 seconds ago   Up 6 seconds   0.0.0.0:5432->5432/tcp   postgres-volume-example
alice@mercury:~/flocker-tutorial$

And confirm it is no longer running on the original host:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
alice@mercury:~/flocker-tutorial$

You can now connect to PostgreSQL on its host and confirm the sample data has also moved:

alice@mercury:~/flocker-tutorial$ psql postgres --host 172.16.255.251 --port 5432 --username postgres
psql (9.3.5)
Type "help" for help.

postgres=# \connect flockertest;
psql (9.3.5)
You are now connected to database "flockertest" as user "postgres".
flockertest=# select * from testtable;
 testcolumn
------------
          3
(1 row)

This concludes the PostgreSQL example.

2.3.3 Linking Containers

Elasticsearch, Logstash & Kibana

Flocker provides functionality similar to Docker Container Linking. In this example you will learn how to deploy ElasticSearch, Logstash, and Kibana with Flocker, demonstrating how applications running in separate Docker containers can be linked together such that they can connect to one another, even when they are deployed on separate nodes.

The three applications are connected as follows:

• Logstash receives logged messages and relays them to ElasticSearch.
• ElasticSearch stores the logged messages in a database.
• Kibana connects to ElasticSearch to retrieve the logged messages and present them in a web interface.

Create the Virtual Machines

You can reuse the Virtual Machines defined in the Vagrant configuration for the MongoDB tutorial. If you have since shut down or destroyed those VMs, boot them up again:

alice@mercury:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
==> node1: Importing base box 'clusterhq/flocker-dev'...
...

Download the Docker Images

The Docker images used by this example are quite large, so you should pre-fetch them to your nodes.

alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull clusterhq/elasticsearch
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull clusterhq/logstash
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull clusterhq/kibana
...
alice@mercury:~/flocker-tutorial$

alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull clusterhq/elasticsearch
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull clusterhq/logstash
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull clusterhq/kibana
...
alice@mercury:~/flocker-tutorial$

Deploy on Node1

Download and save the following configuration files to your flocker-tutorial directory:

elk-application.yml

"version": 1
"applications":
  "elasticsearch":
    "image": "clusterhq/elasticsearch"
    "ports":
    - "internal": 9200
      "external": 9200
    "volume":
      "mountpoint": "/var/lib/elasticsearch/"
  "logstash":
    "image": "clusterhq/logstash"
    "ports":
    - "internal": 5000
      "external": 5000
    "links":
    - "local_port": 9200
      "remote_port": 9200
      "alias": "es"
  "kibana":
    "image": "clusterhq/kibana"
    "ports":
    - "internal": 8080
      "external": 80

elk-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["elasticsearch", "logstash", "kibana"]
  "172.16.255.251": []

Run flocker-deploy to start the three applications:

alice@mercury:~/flocker-tutorial$ flocker-deploy elk-deployment.yml elk-application.yml
alice@mercury:~/flocker-tutorial$

Connect to Kibana

Browse to port 80 on Node1 (http://172.16.255.250:80) with your web browser. You should see the Kibana web interface but there won’t be any messages yet.


Generate Sample Log Messages

Use telnet to connect to the Logstash service running in the Virtual Machine and send some sample JSON data.

alice@mercury:~/flocker-tutorial$ telnet 172.16.255.250 5000
{"firstname": "Joe", "lastname": "Bloggs"}
{"firstname": "Fred", "lastname": "Bloggs"}
^]
telnet> quit
Connection closed.
alice@mercury:~/flocker-tutorial$

Now refresh the Kibana web interface and you should see those messages.


Move ElasticSearch to Node2

Download and save the following deployment configuration to the flocker-tutorial directory as elk-deployment.yml (the flocker-deploy command below uses that filename):

"version": 1
"nodes":
  "172.16.255.250": ["logstash", "kibana"]
  "172.16.255.251": ["elasticsearch"]

Then run flocker-deploy to move the Elasticsearch application along with its data to the new destination host:

alice@mercury:~/flocker-tutorial$ flocker-deploy elk-deployment.yml elk-application.yml
alice@mercury:~/flocker-tutorial$

Now verify that the ElasticSearch application has moved to the other VM:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE                            COMMAND                CREATED         STATUS         PORTS                              NAMES
894d1656b74d   clusterhq/elasticsearch:latest   /bin/sh -c 'source /   2 minutes ago   Up 2 minutes   9300/tcp, 0.0.0.0:9200->9200/tcp   elasticsearch
alice@mercury:~/flocker-tutorial$

And confirm it is no longer running on the original host:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE                       COMMAND                CREATED          STATUS          PORTS                    NAMES
abc5c08557d4   clusterhq/kibana:latest     /usr/bin/twistd -n w   45 minutes ago   Up 45 minutes   0.0.0.0:80->8080/tcp     kibana
44a4ee72d9ab   clusterhq/logstash:latest   /bin/sh -c /usr/loca   45 minutes ago   Up 45 minutes   0.0.0.0:5000->5000/tcp   logstash
alice@mercury:~/flocker-tutorial$

Now if you refresh the Kibana web interface, you should see the log messages that were logged earlier. This concludes the Elasticsearch-Logstash-Kibana example. Read more about linking containers in our Configuring Flocker documentation.

2.4 Flocker Feature Examples

You can find below examples of how to deploy some common applications with Flocker. Each example includes instructions and Flocker configuration files to download that can be used immediately with the virtual machines created in the MongoDB tutorial.

2.4.1 Using Environment Variables

MySQL Example

Flocker supports passing environment variables to a container via its Application Configuration. This example will use a configured environment variable to set the root user password for a MySQL service running inside a container.

Create the Virtual Machines

You can reuse the Virtual Machines defined in the Vagrant configuration for the MongoDB tutorial. If you have since shut down or destroyed those VMs, boot them up again:

alice@mercury:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
==> node1: Importing base box 'clusterhq/flocker-dev'...
...

Download the Docker Image

The Docker image used by this example is quite large, so you should pre-fetch it to your nodes.

alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull mysql:5.6.17
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull mysql:5.6.17
...
alice@mercury:~/flocker-tutorial$

Note: The mysql:5.6.17 Docker image is used in this example for compatibility with ZFS. Newer versions of the MySQL Docker image enable asynchronous I/O, which is not yet supported by ZFS on Linux.

Launch MySQL

Download and save the following configuration files to the flocker-tutorial directory:

mysql-application.yml

"version": 1
"applications":
  "mysql-volume-example":
    "image": "mysql:5.6.17"
    "environment":
      "MYSQL_ROOT_PASSWORD": "clusterhq"
    "ports":
    - "internal": 3306
      "external": 3306
    "volume":
      "mountpoint": "/var/lib/mysql"

mysql-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["mysql-volume-example"]
  "172.16.255.251": []

Now run flocker-deploy to deploy the MySQL application to the target Virtual Machine:

alice@mercury:~/flocker-tutorial$ flocker-deploy mysql-deployment.yml mysql-application.yml
alice@mercury:~/flocker-tutorial$

Connect to MySQL & Insert Sample Data

You can now use the mysql client on the host machine to connect to the MySQL server running inside the container. Connect using the client to the IP address of the Virtual Machine. In this case the example has exposed the default MySQL port 3306, so it is not required to specify a connection port on the command line:

alice@mercury:~/flocker-tutorial$ mysql -h172.16.255.250 -uroot -pclusterhq
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> CREATE DATABASE example;
Query OK, 1 row affected (0.00 sec)

mysql> USE example;
Database changed

mysql> CREATE TABLE `testtable` (`id` INT NOT NULL AUTO_INCREMENT, `name` VARCHAR(45) NULL, PRIMARY KEY (`id`)) ENGINE = MyISAM;
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO `testtable` VALUES('','flocker test');
Query OK, 1 row affected, 1 warning (0.01 sec)

mysql> quit
Bye
alice@mercury:~/flocker-tutorial$

Create a New Deployment Configuration and Move the Application

Download and save the following configuration file to your flocker-tutorial directory:

mysql-deployment-moved.yml

"version": 1
"nodes":
  "172.16.255.250": []
  "172.16.255.251": ["mysql-volume-example"]

Then run flocker-deploy to move the MySQL application along with its data to the new destination host:

alice@mercury:~/flocker-tutorial$ flocker-deploy mysql-deployment-moved.yml mysql-application.yml
alice@mercury:~/flocker-tutorial$

Verify Data Has Moved

Confirm the application has moved to the target Virtual Machine:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE          COMMAND            CREATED         STATUS         PORTS                    NAMES
51b5b09a46bb   mysql:5.6.17   /bin/sh -c /init   7 seconds ago   Up 6 seconds   0.0.0.0:3306->3306/tcp   mysql-volume-example
alice@mercury:~/flocker-tutorial$

And confirm it is no longer running on the original host:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
alice@mercury:~/flocker-tutorial$

You can now connect to MySQL on its host and confirm the sample data has also moved:

alice@mercury:~/flocker-tutorial$ mysql -h172.16.255.251 -uroot -pclusterhq
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| example            |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.02 sec)

mysql> USE example;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> SELECT * FROM `testtable`;
+----+--------------+
| id | name         |
+----+--------------+
|  1 | flocker test |
+----+--------------+
1 row in set (0.01 sec)

mysql>

This concludes the MySQL example.


2.4.2 Linking Containers

Elasticsearch, Logstash & Kibana

Flocker provides functionality similar to Docker Container Linking. In this example you will learn how to deploy ElasticSearch, Logstash, and Kibana with Flocker, demonstrating how applications running in separate Docker containers can be linked together such that they can connect to one another, even when they are deployed on separate nodes.

The three applications are connected as follows:

• Logstash receives logged messages and relays them to ElasticSearch.
• ElasticSearch stores the logged messages in a database.
• Kibana connects to ElasticSearch to retrieve the logged messages and present them in a web interface.

Create the Virtual Machines

You can reuse the Virtual Machines defined in the Vagrant configuration for the MongoDB tutorial. If you have since shut down or destroyed those VMs, boot them up again:

alice@mercury:~/flocker-tutorial$ vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
==> node1: Importing base box 'clusterhq/flocker-dev'...
...

Download the Docker Images

The Docker images used by this example are quite large, so you should pre-fetch them to your nodes.

alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull clusterhq/elasticsearch
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull clusterhq/logstash
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.250 docker pull clusterhq/kibana
...
alice@mercury:~/flocker-tutorial$

alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull clusterhq/elasticsearch
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull clusterhq/logstash
...
alice@mercury:~/flocker-tutorial$ ssh -t root@172.16.255.251 docker pull clusterhq/kibana
...
alice@mercury:~/flocker-tutorial$

Deploy on Node1

Download and save the following configuration files to your flocker-tutorial directory:

elk-application.yml

"version": 1
"applications":
  "elasticsearch":
    "image": "clusterhq/elasticsearch"
    "ports":
    - "internal": 9200
      "external": 9200
    "volume":
      "mountpoint": "/var/lib/elasticsearch/"
  "logstash":
    "image": "clusterhq/logstash"
    "ports":
    - "internal": 5000
      "external": 5000
    "links":
    - "local_port": 9200
      "remote_port": 9200
      "alias": "es"
  "kibana":
    "image": "clusterhq/kibana"
    "ports":
    - "internal": 8080
      "external": 80

elk-deployment.yml

"version": 1
"nodes":
  "172.16.255.250": ["elasticsearch", "logstash", "kibana"]
  "172.16.255.251": []

Run flocker-deploy to start the three applications:

alice@mercury:~/flocker-tutorial$ flocker-deploy elk-deployment.yml elk-application.yml
alice@mercury:~/flocker-tutorial$

Connect to Kibana

Browse to port 80 on Node1 (http://172.16.255.250:80) with your web browser. You should see the Kibana web interface but there won’t be any messages yet.


Generate Sample Log Messages

Use telnet to connect to the Logstash service running in the Virtual Machine and send some sample JSON data.

alice@mercury:~/flocker-tutorial$ telnet 172.16.255.250 5000
{"firstname": "Joe", "lastname": "Bloggs"}
{"firstname": "Fred", "lastname": "Bloggs"}
^]
telnet> quit
Connection closed.
alice@mercury:~/flocker-tutorial$

Now refresh the Kibana web interface and you should see those messages.

Move ElasticSearch to Node2

Download and save the following deployment configuration to the flocker-tutorial directory as elk-deployment.yml (the flocker-deploy command below uses that filename):

"version": 1
"nodes":
  "172.16.255.250": ["logstash", "kibana"]
  "172.16.255.251": ["elasticsearch"]

Then run flocker-deploy to move the Elasticsearch application along with its data to the new destination host:


alice@mercury:~/flocker-tutorial$ flocker-deploy elk-deployment.yml elk-application.yml
alice@mercury:~/flocker-tutorial$

Now verify that the ElasticSearch application has moved to the other VM:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.251 docker ps
CONTAINER ID   IMAGE                            COMMAND                CREATED         STATUS         PORTS                              NAMES
894d1656b74d   clusterhq/elasticsearch:latest   /bin/sh -c 'source /   2 minutes ago   Up 2 minutes   9300/tcp, 0.0.0.0:9200->9200/tcp   elasticsearch
alice@mercury:~/flocker-tutorial$

And confirm it is no longer running on the original host:

alice@mercury:~/flocker-tutorial$ ssh root@172.16.255.250 docker ps
CONTAINER ID   IMAGE                       COMMAND                CREATED          STATUS          PORTS                    NAMES
abc5c08557d4   clusterhq/kibana:latest     /usr/bin/twistd -n w   45 minutes ago   Up 45 minutes   0.0.0.0:80->8080/tcp     kibana
44a4ee72d9ab   clusterhq/logstash:latest   /bin/sh -c /usr/loca   45 minutes ago   Up 45 minutes   0.0.0.0:5000->5000/tcp   logstash
alice@mercury:~/flocker-tutorial$

Now if you refresh the Kibana web interface, you should see the log messages that were logged earlier. This concludes the Elasticsearch-Logstash-Kibana example. Read more about linking containers in our Configuring Flocker documentation.


CHAPTER 3

Advanced Documentation

3.1 What’s New

Note: If you already have a tutorial environment from a previous release see Upgrading the Vagrant Environment.

3.1.1 v0.3 (in development)

• geard is no longer used to manage Docker containers.
• Added support for Fig compatible application configuration files.

3.1.2 v0.2

• Moving volumes between nodes is now done with a two-phase push that should dramatically decrease application downtime when moving large amounts of data.
• Added support for environment variables in the application configuration.
• Added basic support for links between containers in the application configuration.

3.1.3 v0.1

Everything is new since this is our first release.

3.1.4 Known Limitations

• This release is not ready for production and should not be used on publicly accessible servers or to store data you care about. Backwards compatibility is not a goal yet.
• Changes to the application configuration file will often not be noticed by flocker-deploy, and there is no way to delete applications or volumes. Choose new names for your applications if you are making changes to the application configuration.

You can learn more about where we might be going with future releases by:
• Stopping by the #clusterhq channel on irc.freenode.net.
• Visiting our GitHub repository at https://github.com/ClusterHQ/flocker.


• Reading Areas of Potential Future Development.

3.2 Using Flocker

Flocker manages which containers are running and on what hosts. It also manages network configuration for these containers (between them and between containers and the world). Flocker also creates and replicates volumes. All of this functionality is available via a simple invocation of the flocker-deploy program. This program is included in the flocker-cli package. If you haven’t installed that package yet, you may want to do so now.

3.2.1 Command Line Arguments

flocker-deploy takes just two arguments. The first of these is the path to a deployment configuration file. The second is the path to an application configuration file.

$ flocker-deploy clusterhq_deployment.yml clusterhq_app.yml

The contents of these two configuration files completely determine what actions Flocker takes; there are no other command line arguments or options. See Configuring Flocker for details about these two files. You can run flocker-deploy anywhere you have it installed. The containers you are managing do not need to be running on the same host as flocker-deploy.

3.2.2 Authentication

Setup

flocker-deploy lets you manage containers on one or more hosts. Before flocker-deploy can do this it needs to be able to authenticate itself to these hosts. Flocker uses SSH to communicate with the hosts you specify in the deployment configuration file. It requires that you configure SSH access to the root user in advance. The recommended configuration is to generate an SSH key (if you don't already have one):

$ ssh-keygen

Then add it to your SSH key agent:

$ ssh-add

Finally add it to the authorized_keys file of each host you want to manage:

$ ssh-copy-id -i root@<hostname>

This will allow flocker-deploy to connect to these hosts (as long as the key is still available in your key agent). If you have a different preferred SSH authentication configuration which allows non-interactive SSH authentication you may use this instead.

Other Keys

flocker-deploy will generate an additional SSH key. This key is deployed to each host you manage with Flocker and allows the hosts to authenticate to each other.


3.3 Configuring Flocker

Flocker operates on two configuration files: application and deployment. Together these configurations define a deployment. The configuration is represented using YAML syntax.

3.3.1 Application Configuration

The application configuration consists of a version and a set of short, human-meaningful application names together with the parameters necessary to run those applications. The required parameters are version and applications. For now the version must be 1.

The parameters required to define an application are:

• image This is the name of the Docker image which will be used to start the container which will run the application. Optionally, this may include a tag using the standard image:tag syntax. For example, an application which is meant to use version 1.0 of ClusterHQ's flocker-dev Docker image is configured like this:

"image": "clusterhq/flocker-dev:v1.0"

The following parameters are optional when defining an application:

• ports This is an optional list of port mappings to expose to the outside world. Connections to the external port on the host machine are forwarded to the internal port in the container.

"ports":
- "internal": 80
  "external": 8080

• links This is an optional list of links to make to other containers, providing a mechanism by which your containers can communicate even when they are located on different hosts. Linking containers in Flocker works by populating a number of environment variables in the application specifying a link. The environment variables generated are named for the specified alias and local port, while their values will point to the configured remote port. For example, given an application configuration containing a links section as follows:

"links":
- "local_port": 80
  "remote_port": 8080
  "alias": "apache"

The above configuration will produce environment variables in that application using the same format as generated by Docker:

APACHE_PORT_80_TCP=tcp://example.com:8080
APACHE_PORT_80_TCP_PROTO=tcp
APACHE_PORT_80_TCP_ADDR=example.com
APACHE_PORT_80_TCP_PORT=8080


Warning: As you may have noticed in the example above, unlike Docker links, the destination port will not be the port used to create the environment variable names. Flocker implements linking via the ports exposed to the network, whereas Docker creates an internal tunnel between linked containers, an approach that is not compatible with the deployment of links across multiple nodes.

Note: Only TCP links are supported by Flocker, therefore the TCP portion of the environment variable names and the tcp value of the _PROTO and _TCP variables are not configurable.

• volume This specifies that the application container requires a volume. It also allows you to specify where in the container the volume will be mounted via the mountpoint key. The value for this key must be a string giving an absolute path.

"volume":
  "mountpoint": "/var/www/data"

• environment This is an optional mapping of key/value pairs for environment variables that will be applied to the application container. Keys and values for environment variables must be strings and only ASCII characters are supported at this time.

"environment":
  "foo": "bar"
  "baz": "qux"

Here's an example of a simple but complete configuration defining one application:

"version": 1
"applications":
  "site-clusterhq.com":
    "image": "clusterhq/clusterhq-website"
    "environment":
      "WP_ADMIN_USERNAME": "administrator"
      "WP_ADMIN_PASSWORD": "password"
    "ports":
    - "internal": 80
      "external": 8080
    "volume":
      "mountpoint": "/var/mysql/data"

3.3.2 Fig-compatible Application Configuration

As an alternative to Flocker's configuration syntax, you may also use Fig's configuration syntax to define applications.

Note: Flocker does not yet support the entire range of configuration directives available in Fig.

The parameters currently supported to define an application in Fig syntax are:

• image This is the name of the Docker image which will be used to start the container which will run the application. Optionally, this may include a tag using the standard image:tag syntax. For example, in an application which is meant to use version 5.6 of MySQL, the Docker image is configured like this:


image: "mysql:5.6"

• environment This is an optional mapping of key/value pairs for environment variables that will be applied to the application container. Keys and values for environment variables must be strings and only ASCII characters are supported at this time.

environment:
  "WP_ADMIN_USERNAME": "admin"
  "WP_ADMIN_PASSWORD": "8x6nqf5arbt"

• ports This is an optional list of port mappings to expose to the outside world, with each entry in external:internal format. Connections to the external port on the host machine are forwarded to the internal port in the container. You should wrap port mappings in quotes, as per the example below, to explicitly specify the mappings as strings. This is because YAML will parse numbers in the form of xx:yy as base 60 numbers, leading to erroneous behaviour.

ports:
- "8080:80"

• links This is an optional list of links to make to other containers, providing a mechanism by which your containers can communicate even when they are located on different hosts. Linking containers in Flocker works by populating a number of environment variables in the application specifying a link. The environment variables created will be mapped to the name or alias of an application along with exposed internal and external ports. For example, a configuration:

links:
- "mysql:db"

Where mysql is another application defined in the configuration, db will be the alias available to the application linking mysql, and the following environment variables will be populated (assuming a port mapping in mysql of 3306:3306):

DB_PORT_3306_TCP=tcp://example.com:3306
DB_PORT_3306_TCP_PROTO=tcp
DB_PORT_3306_TCP_ADDR=example.com
DB_PORT_3306_TCP_PORT=3306

If an alias is not specified in a link configuration, the environment variable prefix will be the application name. For example:

links:
- "mysql"

will populate environment variables:

MYSQL_PORT_3306_TCP=tcp://example.com:3306
MYSQL_PORT_3306_TCP_PROTO=tcp
MYSQL_PORT_3306_TCP_ADDR=example.com
MYSQL_PORT_3306_TCP_PORT=3306

• volumes This is an optional list specifying volumes to be mounted inside a container.


Warning: Flocker only supports one volume per container at this time. Therefore if using a Fig compatible configuration, the volumes list should contain only one entry.

The value for an entry in this list must be a string giving an absolute path.

volumes:
- "/var/lib/mysql"

Here's a complete example of a Fig compatible application configuration for Flocker:

"mysql":
  image: "mysql:5.6.17"
  environment:
    "MYSQL_ROOT_PASSWORD": "clusterhq"
  ports:
  - "3306:3306"
  volumes:
  - "/var/lib/mysql"

3.3.3 Deployment Configuration

The deployment configuration specifies which applications are run on what nodes. It consists of a version and a mapping from node names to lists of application names. The required parameters are version and nodes. For now the version must be 1. Here's an example of a simple but complete configuration defining a deployment of one application on one host:

"version": 1
"nodes":
  "node017.example.com": ["site-clusterhq.com"]

3.4 Volume Manager

Flocker comes with a volume manager, a tool to manage volumes that can be attached to Docker containers. Of particular note is the ability to push volumes to different machines.

3.4.1 Configuration

Each host in a Flocker cluster has a universally unique identifier (UUID) for its volume manager. By default the UUID is stored in /etc/flocker/volume.json. The volume manager stores volumes inside a ZFS pool called flocker.
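For example, you can inspect both on a tutorial node; the exact JSON layout shown below is an illustrative assumption rather than a documented format:

# the JSON layout below is an assumption, shown only for illustration
alice@mercury:~$ ssh [email protected] cat /etc/flocker/volume.json
{"uuid": "e16d5b2b-471d-4bbe-be23-d58bbc8f1b94"}
alice@mercury:~$ ssh [email protected] zfs list flocker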

3.4.2 Volume Ownership

Each volume is owned by a specific volume manager and only that volume manager can write to it. To begin with a volume is owned by the volume manager that created it. A volume manager can push volumes it owns to another machine, copying the volume’s data to a remote volume manager. The copied volume on that remote volume manager will continue to be owned by the local volume manager, and therefore the remote volume manager will not be able to write to it. A volume manager can also handoff a volume to a remote volume manager, i.e. transfer ownership.


The remote volume manager becomes the owner of the volume and subsequently it is able to write to the volume. The volume manager that did the handoff ceases to own the volume and subsequently is not allowed to write to the volume. Volumes are mounted read-write by the manager which owns them. They are mounted read-only by any other manager which has a copy.

3.4.3 Implementation Details

Each volume is a ZFS dataset. Volumes are created with three parameters:
• The UUID of the volume manager that owns the volume. The creating volume manager's UUID (see above) is used to supply a value for this parameter.
• The logical name, composed of a namespace and an identifier; this must be the same as the name of the container it will be mounted in. The logical name must also be unique within the Flocker cluster. For example, for a container in namespace "default" named "myapp-mongodb" a volume called "myapp-mongodb" will be created in the same namespace. When a Flocker environment is cloned each clone resides in its own namespace. "myapp-mongodb" can therefore be the identifier of both the original and cloned volumes; the differing namespaces differentiate their logical names.
• A mount path, indicating where within a container the volume will be mounted. For example, for a MongoDB server this would be "/var/lib/mongodb" since that is where MongoDB stores its data.

The ZFS dataset name is a combination of the UUID and the logical name (namespace + identifier); it will be a child of the Flocker ZFS pool, which is usually called flocker. For example, if the volume manager's UUID is 1234, the namespace is default and the volume identifier is myapp-mongodb, a ZFS dataset called flocker/1234.default.myapp-mongodb will be mounted at /flocker/1234.default.myapp-mongodb on the node's filesystem.
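On such a node, listing the pool would show the dataset and its mount point along these lines; the output is illustrative, reusing the example UUID, namespace and identifier from the paragraph above:

# illustrative output using the example names above
$ zfs list -o name,mountpoint -r flocker
NAME                                 MOUNTPOINT
flocker                              /flocker
flocker/1234.default.myapp-mongodb   /flocker/1234.default.myapp-mongodb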

Docker Integration

When starting a container with a volume configured, Flocker checks for the existence of the volume. If it does not exist a new ZFS dataset is created. Flocker mounts the volume into the container as a normal Docker volume.

Push and Handoff

Pushes and handoffs are currently done over SSH between nodes, with ad hoc calls to the flocker-volume command-line tool. In future releases this will be switched to a real protocol and later on to communication between long-running daemons rather than short-lived scripts. (See #154.) When a volume is pushed, a zfs send is used to serialize its data for transmission to the remote machine, which does a zfs receive to decode the data and create or update the corresponding ZFS dataset. If the sending node determines that it has a snapshot of the volume in common with the receiving node (as determined using flocker-volume snapshot) then it will construct an incremental data stream based on that snapshot. This can drastically reduce the amount of data that needs to be transferred between the two nodes. Handoff involves renaming the ZFS dataset to change the owner UUID encoded in the dataset name. For example, imagine two volume managers with UUIDs 1234 and 5678 and a dataset called mydata.
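Conceptually, a push pipes a ZFS replication stream between the two nodes, along the lines of the sketch below. The hostnames, snapshot names and dataset name are placeholders; Flocker drives the equivalent steps itself via flocker-volume over SSH rather than expecting you to run these commands:

# full push of a volume's dataset to a remote node (placeholder names, for illustration)
zfs snapshot flocker/1234.default.myapp-mongodb@push-1
zfs send flocker/1234.default.myapp-mongodb@push-1 | ssh [email protected] zfs receive -F flocker/1234.default.myapp-mongodb
# with a common snapshot on both sides, an incremental stream transfers far less data
zfs send -i push-1 flocker/1234.default.myapp-mongodb@push-2 | ssh [email protected] zfs receive flocker/1234.default.myapp-mongodb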


Action                       Volume Manager 1234    Volume Manager 5678
1. Create mydata on 1234     1234.mydata (owner)
2. Push mydata to 5678       1234.mydata (owner)    1234.mydata
3. Handoff mydata to 5678    5678.mydata            5678.mydata (owner)

3.5 Data-Oriented Clustering

3.5.1 Minimal Downtime Volume Migration

Flocker's cluster management logic uses the volume manager (see Volume Manager) to efficiently move containers' data between nodes. Consider a MongoDB application with a 20GB volume being moved from node A to node B. The naive implementation would be:
1. Shut down MongoDB on node A.
2. Push all 20GB of data to node B with no database running.
3. Hand off ownership of the volume to node B, a quick operation.
4. Start MongoDB on node B.
This method would cause significant downtime. Instead Flocker uses a superior two-phase push:
1. Push the full 20GB of data in the volume from node A to node B. Meanwhile MongoDB continues to run on node A.
2. Shut down MongoDB on node A.
3. Push only the changes that were made to the volume since the last push. This will likely be orders of magnitude less than 20GB, depending on what database activity happened in the interim.
4. Hand off ownership of the volume to node B, a quick operation.
5. Start MongoDB on node B.
MongoDB is only unavailable during the time it takes to push the incremental changes from node A to node B.
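In ZFS terms the downtime window covers only steps 2-4: the bulk of the data moves in step 1 while the application is still serving traffic, and only a small incremental stream is sent while it is stopped. A rough sketch with placeholder node, container and dataset names follows; Flocker performs the equivalent operations itself, so these are not commands you would run:

# phase 1: full push while MongoDB keeps running on node A (placeholder names)
zfs snapshot flocker/mongodb-volume@phase1
zfs send flocker/mongodb-volume@phase1 | ssh root@node-b zfs receive flocker/mongodb-volume
# downtime window: stop the application, push only the recent changes, hand off, restart on node B
docker stop mongodb
zfs snapshot flocker/mongodb-volume@phase2
zfs send -i phase1 flocker/mongodb-volume@phase2 | ssh root@node-b zfs receive flocker/mongodb-volume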

3.6 Setting up External Routing

Flocker allows you to expose public ports on your applications. For example, you can export port 8443 on an HTTPS server running inside a container as an externally visible port 443 on the host machine. Because Flocker runs on a cluster of nodes your web application might run on different nodes at different times. You could update the DNS record every time a container moves. However, updating DNS records can take anywhere from a minute to a few hours to take effect for all clients so this will impact your application's availability. This is where Flocker's routing functionality comes in handy. When an external route is configured (e.g. on port 443) Flocker routes that port on all nodes to the node where your application is running. You can therefore move a container between nodes and then change your DNS configuration appropriately without incurring any downtime.
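The per-node effect is similar to a DNAT rule that forwards the externally visible port to the node currently hosting the container. Flocker manages the real rules itself; the command below is only an illustration and the destination address is a placeholder:

# illustrative only: on a node NOT hosting the container, forward port 443 to the node that is
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 172.16.255.251:443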


3.6.1 No-Change DNS Configuration

What’s more it is also possible to configure your DNS records in such a way that no DNS changes are necessary when applications move to different nodes. Specifically, the DNS record for your application should be configured to point at all IPs in the cluster. For example, consider the following setup:

www.example.com has a DNS record pointing at two different nodes' IPs. Every time you connect to www.example.com your browser will choose one of the two IPs at random.
• If you connect to port 80 on node2, which is not hosting the container, the traffic will be routed on to node1.
• If you connect to port 80 on node1 you will reach the web server that is listening on port 8080 within a container.
Note that if nodes are in different data centers and you pay for bandwidth this configuration will require you to pay for forwarded traffic between nodes.

3.7 Debugging

3.7.1 Logging

The Flocker processes running on the nodes will write their logs to /var/log/flocker/. The log files are named after the process that writes them, e.g. flocker-volume-1234.log. Logs from the Docker containers are written to systemd's journal with a unit name constructed with a ctr- prefix. For example, if you've started an application called mymongodb you can view its logs by running the following command on the node where the application was started:

$ journalctl -u ctr-mymongodb

3.8 Cleaning Up

Flocker does not currently implement a tool to purge containers and state from deployment nodes that have had applications and volumes installed via flocker-deploy. Adding a cleanup tool is on the Flocker development path for a later release. Until this feature is available, you may wish to manually purge deployment nodes of all containers and state created by Flocker. This will enable you to test, play around with Flocker or repeat the deployment process (for example, if you have followed through the tutorial and would like to clean up the virtual machines to start again without having to destroy and rebuild them).

Note: This process will destroy all applications and their associated data deployed by Flocker on the target node. In addition, the verbatim commands documented below will destroy all Docker containers on the target node, regardless of whether or not they were deployed via Flocker. Proceed at your own risk and only if you fully understand the effects of executing these commands.

You can run the necessary cleanup commands via SSH. The tutorial’s virtual machines are created with IP addresses 172.16.255.250 and 172.16.255.251. Be sure to replace the example IP address in the commands below with the actual IP address of the node you wish to purge.


3.8.1 Stopping Containers

Docker containers must be stopped before they can be removed:

alice@mercury:~/flocker-mysql$ ssh [email protected] 'docker ps -q | xargs --no-run-if-empty docker stop'

3.8.2 Removing Containers

alice@mercury:~/flocker-mysql$ ssh [email protected] 'docker ps -aq | xargs --no-run-if-empty docker rm'

These commands list the IDs of all the Docker containers on each host, including stopped containers, and then pipe each ID to docker stop or docker rm to purge them.

3.8.3 Removing ZFS Volumes

To remove ZFS volumes created by Flocker, you can list the volumes on each host and then use the unique IDs in conjunction with the zfs destroy command:

alice@mercury:~/flocker-mysql$ ssh [email protected] 'zfs list -H -o name'
flocker
flocker/e16d5b2b-471d-4bbe-be23-d58bbc8f1b94.mysql-volume-example
alice@mercury:~/flocker-mysql$ ssh [email protected] 'zfs destroy -r flocker/e16d5b2b-471d-4bbe-be23-d58bbc8f1b94.mysql-volume-example'

Alternatively, if you wish to destroy all datasets created by Flocker, you can run the following command:

alice@mercury:~/flocker-mysql$ zfs destroy -r flocker

CHAPTER 4

Getting Involved

4.1 Contributing to Flocker

4.1.1 Introduction

ClusterHQ develops software using a variation of the Ultimate Quality Development System.
• Each unit of work is defined in an issue in the issue tracker and developed on a branch.
• Code is written using test-driven development.
• The issue is closed by merging the branch (via a GitHub pull request).
• Before a branch is merged it must pass code review.
• The code reviewer ensures that the pull request:
  – Follows the coding standard (Python's PEP 8).
  – Includes appropriate documentation.
  – Has full test coverage (unit tests and functional tests).
  – Passes the tests in the continuous integration system (Buildbot).
  – Resolves the issue.
• The code reviewer can approve the pull request for merging as is, with some changes, or request changes and an additional review.

4.1.2 Talk to Us

Have questions or need help? Besides filing a GitHub issue with feature requests or bug reports you can also join us on the #clusterhq channel on the irc.freenode.net IRC network or on the flocker-users Google Group.

4.1.3 Development Environment

• To run the complete test suite you will need ZFS and Docker installed. The recommended way to get an environment with these installed is to use the included Vagrantfile which will create a pre-configured Fedora 20 virtual machine. Vagrant 1.6.2 or later is required. Once you have Vagrant installed (see the Vagrant documentation) you can run the following to get going:


$ vagrant up
$ vagrant ssh

• You will need Python 2.7 and a recent version of PyPy installed on your development machine.
• If you don't already have tox on your development machine, you can install it and other development dependencies (ideally in a virtualenv) by doing:

$ python setup.py install .[doc,dev]

4.1.4 Running Tests

You can run all unit tests by doing:

$ tox

Functional tests require ZFS and Docker to be installed and, in the case of the latter, running as well. In addition, tox needs to be run as root:

$ sudo tox

Since these tests involve global state on your machine (filesystems, iptables, Docker containers, etc.), we recommend running them in the development Vagrant image.

4.1.5 Documentation

Documentation is generated using Sphinx and stored in the docs/ directory. You can build it individually by running:

$ tox -e sphinx

You can view the result by opening docs/_build/html/index.html in your browser.

4.1.6 Requirements for Contributions

1. All code must have unit test coverage and to the extent possible functional test coverage.
   Use the coverage.py tool with the --branch option to generate line and branch coverage reports (see the sketch after this list). This report can tell you if you missed anything. It does not necessarily catch everything though. Treat it as a helper but not the definitive indicator of success. You can also see coverage output in the Buildbot details link of your pull request. Practice test-driven development to ensure all code has test coverage.
2. All code must have documentation.
   Modules, functions, classes, and methods must be documented (even if they are private). Function parameters and object attributes must be documented (even if they are private).
3. All user-facing tools must have documentation.
   Document tool usage as part of big-picture documentation. Identify useful goals the user may want to accomplish and document tools within the context of accomplishing those goals.
   Documentation should be as accessible and inclusive as possible. Avoid language and markup which assumes the ability to precisely use a mouse and keyboard, or that the reader has perfect vision. Create alternative but equal documentation for the visually impaired, for example, by using alternative text on all images. If in doubt, particularly about markup changes, use http://achecker.ca/ and fix any "Known Problems" and "Likely Problems".
4. Add your name (in alphabetical order) to the AUTHORS.rst file.
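For example, a local branch-coverage run might look like the following sketch; the exact invocation is illustrative (it assumes trial is on your path inside the development environment) and Buildbot remains the canonical source of coverage results:

$ coverage run --branch $(which trial) flocker
$ coverage report -m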


4.1.7 Project Development Process

The core development team uses GitHub issues to track planned work. Issues are organized by release milestones, and then by subcategories:

Backlog: Issues we don't expect to do in the release. These issues don't have any particular category label. All issues start in the backlog when they are filed. The requirements for an issue must be completely specified before it can move out of the backlog.

Design: Issues that we expect to work on soon. This is indicated by a design label. A general plan for accomplishing the requirements must be specified on the issue before it can move to the Ready state. The issue is assigned to the developer working on the plan. When there is a proposed plan the review label is added to the issue (so that it has both design and review).

Ready: Issues that are ready to be worked on. This is indicated by a ready label. Issues can only be Ready after they have been in Design so they include an implementation plan. When someone starts work on an issue it is moved to the In Progress category (the ready label is removed and the in progress label is added).

In Progress: Such issues are assigned to the developer who is currently working on them. This is indicated by an in progress label. When the code is ready for review a new pull request is opened. The pull request is added to the Review category.

Ready for Review: An issue or pull request that includes work that is ready to be reviewed. This is indicated by a review label. Issues can either be in design review (design and review) or final review (just review). A reviewer can move a design review issue to Ready (to indicate the design is acceptable) or back to Design (to indicate it needs more work). A reviewer can move a final review issue to Approved (to indicate the work is acceptable) or back to In Progress (to indicate more work is needed).

Passed Review: A pull request that has some minor problems that need addressing, and can be merged once those are dealt with and all tests pass. This is indicated by an accepted label.

Done: Closed issues and pull requests.

Blocked: Issues that can't be worked on because they are waiting on some other work to be completed. This is indicated by a blocked label.

You can see the current status of all issues and pull requests by visiting https://waffle.io/clusterhq/flocker.

In general issues will move from Backlog to Design to Ready to In Progress. An in-progress issue will have a branch with the issue number in its name. When the branch is ready for review a pull request will be created in the Review category. When the branch is merged the corresponding pull requests and issues will be closed.

Steps to Contribute Code

GitHub collaborators can participate in the development workflow by changing the labels on an issue. GitHub lets non-collaborators create new issues and pull requests but it does not let them change labels. If you are not a collaborator you may seek out assistance from a collaborator to set issue labels to reflect the issue's stage.

1. Pick the next issue in the Ready category. Drag it to the In Progress column in Waffle (or change the label from ready to in progress in GitHub).
2. Create a branch from master with a name including a few descriptive words and ending with the issue number, e.g. add-thingie-123.
3. Resolve the issue by making changes in the branch.
4. Submit the issue/branch for review. Create a pull request on GitHub for the branch. The pull request should include a Fixes #123 line referring to the issue that it resolves (to automatically close the issue when the branch is merged). Make sure Buildbot indicates all tests pass.


5. Address any points raised by the reviewer. If a re-submission for review has been requested, change the label from in progress to review in GitHub (or drag it to the Ready for Review column in Waffle) and go back to step 4.
6. Once it is approved, merge the branch into master by clicking the Merge button.
7. As a small thank you for contributing to Flocker, we'd like to send you some ClusterHQ swag. Once your pull request has been merged, just send an email to [email protected] with your t-shirt size, mailing address and a phone number to be used only for filling out the shipping form. We'll get something in the mail to you.

Steps to Contribute Reviews

1. Pick a pull request in GitHub/Waffle that is ready for review (review label/Review category).
2. Use the continuous integration information in the PR to verify the test suite is passing.
3. Verify the code satisfies the Requirements for Contributions (see above).
4. Verify the change satisfies the requirements specified on the issue.
5. Think hard about whether the code is good or bad.
6. Leave comments on the GitHub PR page about any of these areas where you find problems.
7. Leave a comment on the GitHub PR page explicitly approving or rejecting the change. If you accept the PR and no final changes are required then use the GitHub merge button to merge the branch. If you accept the PR but changes are needed, move it to the Passed Review column in Waffle or change its label from review to approved. If you do not accept the PR, move it to the In Progress column in Waffle or change its label from review to in progress.

4.2 Infrastructure

Contents:

4.2.1 Vagrant

There is a Vagrantfile in the base of the repository that builds a virtual machine pre-installed with all of the dependencies required to run Flocker. See the Vagrant documentation for more details.

Boxes

There are several Vagrant boxes.

Development Box (vagrant/dev): This box is initialized with the yum repositories for ZFS and for dependencies not available in Fedora, and installs all the dependencies. This is the box the Vagrantfile in the root of the repository is based on.

Tutorial Box (vagrant/tutorial): This box is initialized with the yum repositories for ZFS and Flocker, and has Flocker pre-installed. This is the box the tutorial is based on.


Building

To build one of the above boxes, run the build script in the corresponding directory. This will generate a flocker-<box>-<version>.box file. Upload this file to Google Cloud Storage, using gsutil:

gsutil cp -a public_read flocker-dev-$(python ../../setup.py --version).box gs://clusterhq-vagrant/

(If you're uploading the tutorial box the image will be flocker-tutorial-... instead of flocker-dev-....) Then add a version on Vagrant Cloud (flocker-dev) or Vagrant Cloud (flocker-tutorial) as applicable. The version on Vagrant Cloud should be the version with "-" replaced with ".".

Testing

It is possible to test this image locally before uploading. The build script generates metadata pointing at the locally built file, which can be used to add the box with the correct version:

vagrant box add vagrant/dev/flocker-dev.json

Then destroy and re-up that vagrant image. It is also possible to build a vagrant image based on RPMs from a branch. If you pass a --branch argument to build, then it will use the RPMs from the latest build of that branch on Buildbot.

4.2.2 Building RPMs

To build flocker RPMs, run the following commands:

python setup.py sdist
python setup.py generate_spec
cp dist/Flocker-$(python setup.py --version).tar.gz ~/rpmbuild/SOURCES
sudo yum-builddep flocker.spec
rpmbuild -ba flocker.spec

The above commands require the rpmdevtools and yum-utils packages to be installed. Flocker depends on a number of packages which aren't available in Fedora, or which are newer than the versions available there. These packages are available from our Copr repository. To enable yum to find them, put the repo file in /etc/yum.repos.d/.
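For example, the repository file can be fetched with curl, using the same URL that appears in the release appendix later in this chapter:

$ curl https://copr.fedoraproject.org/coprs/tomprince/hybridlogic/repo/fedora-20-x86_64/tomprince-hybridlogic-fedora-20-x86_64.repo \
    > /etc/yum.repos.d/hybridlogic.repo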

4.2.3 Release Process

Note: Make sure to follow the latest documentation when doing a release.

Outcomes

By the end of the release process we will have:
• a tag in version control
• a Python wheel in the ClusterHQ package index


• Fedora 20 RPMs for software on the node and client
• a Vagrant base tutorial image
• documentation on docs.clusterhq.com

Prerequisites

Software

• A Flocker development machine, which has the following commands:
  – rpmbuild
  – createrepo
  – yumdownloader
• a web browser
• an up-to-date clone of the Flocker repository
• an up-to-date clone of the homebrew-tap repository

Access

• A Read the Docs account (registration), with maintainer access to the Flocker project.
• Access to Google Cloud Storage using gsutil.

Preparing for a release

Warning: The following steps should be carried out on a Flocker development machine. Log into the machine using SSH agent forwarding so that you can push changes to GitHub using the keys from your workstation.

vagrant ssh -- -A

1. Choose a version number:
   • Release numbers should be of the form x.y.z, e.g.:

export VERSION=0.1.2

2. Create an issue:
   (a) Set the title to "Release flocker $VERSION".
   (b) Assign it to yourself.
3. Check that all required versions of the dependency packages are built:
   (a) Inspect the package versions listed in the install_requires section of setup.py.
   (b) Check that matching RPM packages are available on the clusterhq repository. You can list the current contents of the clusterhq repository using the following command on Fedora:

repoquery --repoid clusterhq --repofrompath clusterhq,http://archive.clusterhq.com/fedora/20/x86_64/ "*"

4. Create a clean, local working copy of Flocker with no modifications:


git clone [email protected]:ClusterHQ/flocker.git "flocker-${VERSION}"

5. Create a branch for the release and push it to GitHub:

git checkout -b release/flocker-${VERSION} origin/master
git push origin --set-upstream release/flocker-${VERSION}

6. Back port features from master (optional)
   The release may require certain changes to be back ported from the master branch. See Appendix: Back Porting Changes From Master.
7. Update the version numbers in:
   • the yum install line in docs/gettingstarted/linux-install.sh
   • the box_version in docs/gettingstarted/tutorial/Vagrantfile
   • docs/gettingstarted/installation.rst (including the sample command output)
   • the "Next Release" line in docs/advanced/whatsnew.rst
   Then commit the changes:

git commit -am "Bumped version numbers"

8. Ensure the release notes in NEWS are up-to-date:
   XXX: Process to be decided. See https://github.com/ClusterHQ/flocker/issues/523

git commit -am "Updated NEWS"

9. Ensure copyright dates in LICENSE are up-to-date:
   XXX: Process to be decided. See https://github.com/ClusterHQ/flocker/issues/525

git commit -am "Updated copyright"

10. Push the changes:

git push

11. Ensure all the tests pass on BuildBot:
    Go to the BuildBot web status and force a build on the just-created branch.
12. Do the acceptance tests:
    XXX: See https://github.com/ClusterHQ/flocker/issues/315
13. Make a pull request on GitHub
    The pull request should be for the release branch against master, with a Fixes #123 line in the description referring to the release issue that it resolves. Wait for an accepted code review before continuing.

Warning: Do not merge the branch yet. It should only be merged once it has been tagged, in the next series of steps.


Release

Warning: The following steps should be carried out on a Flocker development machine. Log into the machine using SSH agent forwarding so that you can push changes to GitHub using the keys from your workstation.

vagrant ssh -- -A

1. Change your working directory to be the Flocker release branch working directory.
2. Create (if necessary) and activate the Flocker release virtual environment:

Note: The following instructions use virtualenvwrapper but you can use virtualenv directly if you prefer.

mkvirtualenv flocker-release-${VERSION}
pip install --editable .[release]

3. Tag the version being released:

git tag --annotate "${VERSION}" "release/flocker-${VERSION}" -m "Tag version ${VERSION}"
git push origin "${VERSION}"

4. Go to the BuildBot web status and force a build on the tag. Force a build on a tag by putting the tag name (e.g. 0.2.0) into the branch box (without any prefix).

Note: We force a build on the tag as well as the branch because the RPMs built before pushing the tag won’t have the right version. Also, the RPM upload script currently expects the RPMs to be built from the tag, rather than the branch.

5. Build Python packages and upload them to archive.clusterhq.com:

python setup.py sdist bdist_wheel
gsutil cp -a public-read \
  "dist/Flocker-${VERSION}.tar.gz" \
  "dist/Flocker-${VERSION}-py2-none-any.whl" \
  gs://archive.clusterhq.com/downloads/flocker/

Note: Set up gsutil authentication by following the instructions from the following command:

$ gsutil config

6. Build RPM packages and upload them to archive.clusterhq.com:

admin/upload-rpms "${VERSION}"

7. Build and upload the tutorial Vagrant box.
8. Build tagged docs at Read the Docs:
   (a) Force Read the Docs to reload the repository.
       There is a GitHub webhook which should notify Read the Docs about changes in the Flocker repository, but it sometimes fails. Force an update by running:

curl -X POST http://readthedocs.org/build/flocker

(b) Go to the Read the Docs dashboard.


(c) Enable the version being released.
(d) Wait for the documentation to build. The documentation will be visible at http://docs.clusterhq.com/en/${VERSION} when it has been built.
(e) Set the default version to that version.

Warning: Skip this step for weekly releases and pre-releases. The features and documentation in weekly releases and pre-releases may not be complete and may not have been tested. We want new users’ first experience with Flocker to be as smooth as possible so we direct them to the tutorial for the last stable release. Other users choose to try the weekly releases, by clicking on the latest weekly version in the ReadTheDocs version panel.

9. Update the Homebrew recipe
   The aim of this step is to provide a version-specific Homebrew recipe for each release.
   • Check out the homebrew-tap repository:

git clone [email protected]:ClusterHQ/homebrew-tap.git

• Create a release branch

git checkout -b release/flocker-${VERSION%pre*} origin/master
git push origin --set-upstream release/flocker-${VERSION%pre*}

• Create a flocker-${VERSION}.rb file
  Copy the last recipe file and rename it for this release.
• Update the recipe file
  – Update the version number
    The version number is included in the class name with all dots and dashes removed, e.g. class Flocker012 < Formula for Flocker-0.1.2.
  – Update the URL
    The version number is also included in the url part of the recipe.
  – Update the sha1 checksum:

sha1sum "dist/Flocker-${VERSION}.tar.gz" ed03a154c2fdcd19eca471c0e22925cf0d3925fb dist/Flocker-0.1.2.tar.gz

– Commit the changes and push

git commit -am "Bumped version number and checksum in homebrew recipe" git push

• Test the new recipe on OS X with Homebrew installed
  Try installing the new recipe directly from a GitHub link:

brew install https://raw.githubusercontent.com/ClusterHQ/homebrew-tap/release/flocker-${VERSION}/flocker-${VERSION}.rb

• Make a pull request
  Make a homebrew-tap pull request for the release branch against master, with a Refs #123 line in the description referring to the release issue that it resolves.


10. Merge the release branch
    Merge the release branch and close the release pull request.

Appendix: Back Porting Changes From Master

XXX: This process needs documenting. See https://github.com/ClusterHQ/flocker/issues/877

Appendix: Pre-populating RPM Repository

Warning: This only needs to be done if the dependency packages for Flocker (e.g. 3rd party Python libraries) change; it should not be done every release. If you do run this you need to do it before running the release process above as it removes the flocker-cli etc. packages from the repository! XXX How does one know?

These steps must be performed from a machine with the ClusterHQ Copr repository installed. You can either use the Flocker development environment or install the Copr repository locally by running:

curl https://copr.fedoraproject.org/coprs/tomprince/hybridlogic/repo/fedora-20-x86_64/tomprince-hybridlogic-fedora-20-x86_64.repo >/etc/yum.repos.d/hybridlogic.repo

mkdir repo
yumdownloader --destdir=repo python-characteristic python-eliot python-idna python-netifaces python-service-identity python-treq python-twisted
createrepo repo
gsutil cp -a public-read -R repo gs://archive.clusterhq.com/fedora/20/x86_64
mkdir srpm
yumdownloader --destdir=srpm --source python-characteristic python-eliot python-idna python-netifaces python-service-identity python-treq python-twisted
createrepo srpm
gsutil cp -a public-read -R srpm gs://archive.clusterhq.com/fedora/20/SRPMS

4.2.4 Release Schedule and Version Numbers

Goals

The goals of the release schedule are to:
• Make new features and bug fixes available to users as quickly as possible.
• Practice releasing so that we are less likely to make mistakes.
• Improve the automation of releases through experience.

Schedule

We will make a new release of Flocker each week. This will proceed according to the Release Process. The releases will happen on Tuesday of each week. If nobody is available in the ClusterHQ organization to create a release, the week will be skipped. After each release is distributed, the engineer who performed the release will create issues for any improvements which could be made. The release engineer should then spend 4-8 hours working on making improvements to the release process. If there is an issue that will likely take over 8 hours then they should consult the team manager before starting it.


Version Numbers

Released version numbers take the form of X.Y.Z. The current value of X is 0 until the project is ready for production. Y is the “marketing version”. ClusterHQ’s marketing department is made aware of the content of a release ahead of time. If the marketing department decides that this release is sufficiently important to publicize then Y is incremented and Z is set to 0. Z is incremented for each standard weekly release.

Patch Releases

ClusterHQ will not be producing patch releases until the project is ready for production.


CHAPTER 5

Areas of Potential Future Development

Flocker is an ongoing project whose direction will be guided in large part by the community. The list below includes some potential areas for future development. It is not all-encompassing or indicative of what definitely will be built. Feedback is welcome and encouraged.
• Support for atomic updates.
• Scale-out for stateless containers.
• API to support managing Flocker volumes programmatically.
• Statically configured continuous replication and manual failover.
• No-downtime migrations between containers.
• Automatically configured continuous replication and failover.
• Multi-data center support.
• Automatically balance load across cluster.
• Roll-back a container to a snapshot.
Detailed plans have been made in some areas:

5.1 Flocker Volume Manager

The Flocker Volume Manager (FVM) provides snapshotting and replication of Flocker volumes. It has the ability to push volumes to remote nodes, track changes to those volumes, and roll them back to earlier states. Although initially built on top of ZFS, FVM should eventually be capable of being backed by a number of filesystems. As such, a generic data model is required.

5.1.1 Data Model

Motivation:
• ZFS has some peculiarities in its model when it comes to clones, e.g. promoting a clone moves snapshots from the original dataset to the clone.
• Having clones be top-level constructs on the same level as the originating dataset is a problem, since they are closely tied to each other both in terms of usage and in an administrative "cleaning up old data" way.
• We don't want to be too tied to the ZFS model (or terminology!) in case we want to switch to Btrfs or some other system. Especially given conflicting terminology - Btrfs "snapshots" are the same as ZFS "clones".


• When it comes to replication, it is probably useful to differentiate between "data which is a copy of what the remote host has" and the "local version", in particular when divergence is a potential issue (e.g. it can be caused by an erroneous failover). In git you have "origin/branchname" vs. the local "branchname", for example.

We are therefore going to be using the following model for the CLI examples below:
• A "volume" is a tree of "branches".
• A "tag" is a named read-only pointer to the contents of a branch at a given point in time; it is attached to the volume, and is not mounted on the filesystem.
• Given a volume called "mydata", "mydata/trunk" is (by convention) the main branch from which other branches originate, "mydata/branchname" is some other branch, and "mytag@mydata" is a tag.
• Branches' full name includes the Flocker instance they came from (by default, say, using its hostname), e.g. "somehost/myvolume/trunk". "dataset/branch" is shorthand for the current host, e.g. "thecurrenthost.example.com/dataset/branch". In a replication scenario we could have "remote.example.com/datavolume/trunk" and "thecurrenthost.example.com/datavolume/trunk" (aka "datavolume/trunk") as a branch off of that.
• Local branches are mounted on the filesystem, and then exposed to Docker, e.g. "myvolume/trunk" is exported via a Docker container called "flocker:myvolume/trunk" (the "flocker:" prefix is not a Docker feature, just a proposed convention for naming our containers).
• Remote branches are not mounted, but a local branch can be created off of them and then that is auto-mounted.
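To make the naming model concrete, a purely hypothetical command-line session under this scheme might look like the sketch below. None of these commands exist today; the fvm tool name and its subcommands are invented here for illustration only:

# hypothetical "fvm" tool -- not implemented; names invented for illustration
fvm branch mydata/trunk mydata/experiment     # create a new branch off trunk
fvm tag mydata/trunk mytag@mydata             # record a read-only tag of trunk's current contents
fvm push remote.example.com mydata/trunk      # copy the branch to a remote Flocker instance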

Implementation Notes - ZFS

The names of volumes, branches and tags do not map directly onto the ZFS naming system.

Each Flocker instance has a UUID, with a matching (unique across a Flocker cluster) human-readable name, typically the hostname. We can imagine having two Flocker instances on the same machine (with different pools) for testing, so we don't want to require the hostname. This is the first part of the <instance>/<volume>/<branch> triplet of branch names - in the human-exposed CLI we probably want to use human names though, not UUIDs. Branches are known to be local if the branch's specified Flocker instance matches the UUID of the Flocker process that is managing it.

Volumes have UUIDs, and a matching (cluster unique?) human-readable name.

Tags are indicated by having a snapshot with user attributes indicating it is a tag, the tag name and the volume name. However, not all ZFS snapshots will be exposed as tags. E.g. the fact that a snapshot is necessary for cloning (and therefore branch creation) is an implementation detail; sometimes you want to branch off a tag, but if you want to branch off the latest version the fact that a snapshot is created needn't be exposed.

A remote branch exists if there is a non-tag ZFS snapshot naming it, i.e. the snapshot has a user attribute indicating which branch it's on (e.g. "thathost/somevolume/abranch"). In either case the ZFS-level snapshot name is the Flocker instance UUID + the timestamp when it was generated.

A local branch exists due to the local existence of a ZFS dataset, one of:
1. A root dataset ("trunk"), if this is the primary host (whatever that means).
2. A clone of a remote branch snapshot.
3. A clone of a local branch snapshot.
The branch name is stored as a user attribute on the ZFS dataset. Dataset names can be the branch human-readable names, since only one Flocker instance will ever be setting them.
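ZFS user properties of this kind can be set and read with the standard zfs commands, for example as in the sketch below; the property name and dataset are illustrative and do not describe an existing Flocker convention:

# illustrative property name and dataset, not an existing convention
zfs set com.clusterhq:branch=thathost/somevolume/abranch flocker/somedataset
zfs get com.clusterhq:branch flocker/somedataset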


In cases where we can't use attributes, the data will be in a local database of some sort. E.g. ZFS properties are inherited automatically (not the behavior we want), which might lead to some corrupt state in crashes if the low-level APIs don't allow bypassing this...

Implementation Notes - Btrfs

Btrfs does not have a concept of clones - it just has snapshots, and they are mounted and writable. As such the proposed model should also work with Btrfs. Btrfs appears to lack promotion, but that can be emulated via renames. It’s not clear if Btrfs has the “can’t delete parent if it has children” restriction, though it may just keep around extra disk storage in that case.


CHAPTER 6

FAQ

• ZFS
  – Flocker uses ZFS. What about the ZFS licensing issues?
  – But if ZFS isn't part of mainline Linux proper, it won't benefit from rigorous testing. How do you know it's stable?
• Current Functionality
  – Which operating systems are supported?
• Future Functionality
  – How does Flocker integrate with Kubernetes / Mesos / Deis / CoreOS / my favorite orchestration framework?
  – If I clone a 2 GB database five times, won't I need a really large server with 10 GB of disk?
  – If I clone a database five times, how does maintaining five different versions of the database work?

Flocker is under active development and we receive a lot of questions about how this or that will be done in a future release. You can find these questions in the Future Functionality section below. You can also view ideas for future versions of Flocker. If you want to get involved in a discussion about a future release or have a question about Flocker today, get in touch on our Freenode IRC channel #clusterhq or the Flocker Google group.

6.1 ZFS

6.1.1 Flocker uses ZFS. What about the ZFS licensing issues?

There is a good write up of the ZFS and Linux license issues on the ZFS on Linux website. In short, while ZFS won’t be able to make it into mainline Linux proper due to licensing issues, “there is nothing in either license that prevents distributing it in the form of a binary module or in the form of source code.”

6.1.2 But if ZFS isn’t part of mainline Linux proper, it won’t benefit from rigorous testing. How do you know it’s stable?

ZFS on Linux is already in use in companies and institutions all over the world to the tune of hundreds of petabytes of data. We are also rigorously testing ZFS on Linux to make sure it is stable. ZFS is production quality code.


6.2 Current Functionality

6.2.1 Which operating systems are supported?

Flocker manages Docker applications and Docker runs on Linux, so Flocker runs on Linux. However, you do not need to be running Linux on your development machine in order to manage Docker containers with the flocker-cli. See Installing flocker-cli for installation instructions for various operating systems.

6.3 Future Functionality

6.3.1 How does Flocker integrate with Kubernetes / Mesos / Deis / CoreOS / my favorite orchestration framework?

Over time, we hope that Flocker becomes the de facto way for managing storage volumes with your favorite orchestration framework. We are interested in expanding libswarm to include support for filesystems and are talking with the various open source projects about the best way to collaborate on storage and networking for volumes. If you'd like to work with us on integration, get in touch on our Freenode IRC channel #clusterhq or the Flocker Google group. You can also submit an issue or a pull request if you have a specific integration that you'd like to propose.

6.3.2 If I clone a 2 GB database five times, won’t I need a really large server with 10 GB of disk?

Thankfully no. This is where ZFS makes things really cool. Each clone is essentially free until the clone is modified. This is because ZFS is a copy-on-write filesystem, so a clone is just a set of block pointers. It's only when a block is modified that the data is copied, so a 2GB database that is cloned five times still just uses 2GB of disk space until a copy is modified. That means, when the database is modified, only the changes are written to disk, so you are only storing the net new data. This also makes it really fast to create database clones.
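With plain ZFS commands the effect looks roughly like the sketch below; the dataset names are placeholders, and the clone is near-instant because it initially shares all of its blocks with the snapshot it was created from:

# placeholder dataset names; a clone consumes almost no space until it diverges
zfs snapshot flocker/mydb@clone-base
zfs clone flocker/mydb@clone-base flocker/mydb-staging-1
zfs list -o name,used,refer flocker/mydb flocker/mydb-staging-1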

6.3.3 If I clone a database five times, how does maintaining five different versions of the database work?

The idea will be that cloning the app and the database together in some sense allows the containers to maintain what we call independent "links" between 10 instances of the app server (deployed at different staging URLs) and the respective 10 different instances of the cloned database. This works because e.g. port 3306 inside one app server gets routed via an ephemeral port on the host(s) to 3306 inside the corresponding specific instance of the database. The upshot of which is that you shouldn't need to change the apps at all, except to configure each clone with a different URL.

CHAPTER 7

Authors

Flocker is maintained by ClusterHQ and is licensed under the Apache 2.0 license. The following people and organizations contributed to its development; please add your name in alphabetical order with your first pull request:
• ClusterHQ (formerly Hybrid Logic Ltd.)
• Scotch Media (Base for documentation theme)
