
Kubernetes: Orchestrate your containers

Environment:

KVM
Red Hat Enterprise Linux Atomic Host 7.1 qcow2 image
All-in-one configuration
Pod, replication controller and service manifest files are available under the /root folder in the qcow2 image
Required container images are included in the qcow2 image

Activities:

1. Setting up the kubernetes nodes
2. Deploying multi-container application
3. Scaling up the number of pods
4. Rolling updates

Importing the image

Copy the image rhel-atomic-cloud-7.1-0.x86_64.qcow2 and atomic0-cidata1.iso to your desktop or laptop and unzip the archive.

1. Open a terminal and execute the command virt-manager as the root user.
2. Hit the icon for “Creating a new virtual machine”.
3. In the new window, enter the following details:

   Name: Kubernetes
   Select “Import Existing disk image”
   Hit next

4. In the new window, hit the browse button and point to the location of the image rhel-atomic-cloud-7.1-0.x86_64.qcow2. Select the OS type and version as Fedora. Click the forward button.
5. Allocate a minimum of 1024 MB RAM and 1 CPU. Hit forward. Select the option “Customize Configuration Before Install” and hit finish.
6. Add a hardware -> Storage -> CDROM and point the ISO to atomic0-cidata1.iso.
7. Make sure the NIC device model is set as virtio.
8. Click finish.
9. Hit apply and click on the “Begin Installation” option. The VM will be spawned and you will notice the login screen in the VM console.
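If you prefer the command line to virt-manager, the same VM can be created with virt-install. This is only a sketch: the /path/to placeholders and the os-variant are assumptions and must be adjusted to wherever you copied the files.

# virt-install --name Kubernetes --ram 1024 --vcpus 1 \
    --disk path=/path/to/rhel-atomic-cloud-7.1-0.x86_64.qcow2 \
    --disk path=/path/to/atomic0-cidata1.iso,device=cdrom \
    --network network=default,model=virtio \
    --os-variant fedora21 --import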

Kubernetes services:

Kubernetes relies on a set of daemons: apiserver, scheduler, controller, kubelet and proxy. These daemons/services are managed by systemd, and their configuration resides in a central location: /etc/kubernetes. The services are split according to their intended role in the kubernetes environment. For this demo, we will use an all-in-one node configuration, so all of the services below should be started on the virtual machine.

Kubernetes Master Services

The three services that constitute the kubernetes master are:

1. kube-apiserver
2. kube-controller-manager
3. kube-scheduler

This host should be configured to run all of the above services. Another essential service that the master node needs is etcd.

Kubernetes Node Services (formerly known as Minion)

The following services should be configured to run on the node:

1. kubelet
2. kube-proxy
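To check that these units are actually present on the all-in-one host, you can list them with systemd. This is just a sanity check; the exact unit names may vary slightly between Atomic releases.

# systemctl list-unit-files | grep -E 'kube|etcd|docker'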

Activity 1: Setting up the kubernetes cluster

Login as the root user with the password “atomicrhte”. Since both the Kubernetes Master and Minion services are running on the local system, you don't need to change the Kubernetes configuration. Master and minion services will point to each other on localhost, and services are made available only on localhost.
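If you want to confirm this, the files under /etc/kubernetes can be inspected directly. As an assumption based on the default packaging, the master address is defined in /etc/kubernetes/config and should point at localhost, along the lines of:

# grep -i master /etc/kubernetes/config
KUBE_MASTER="--master=http://127.0.0.1:8080"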

Starting Kubernetes

For starting all the Kubernetes Master Services, use the below command

# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      systemctl restart $SERVICES
      systemctl enable $SERVICES
      systemctl status $SERVICES
  done

For starting all the Kubernetes Node Services, use the below command

# for SERVICES in docker kube-proxy.service kubelet.service; do
      systemctl restart $SERVICES
      systemctl enable $SERVICES
      systemctl status $SERVICES
  done

Verify the configuration

Using netstat or ss, verify that the required services are up and listening.

# netstat -tulnp | grep -E "(kube)|(etcd)"

or using ss

# ss -tulnp | grep -E "(kube)|(etcd)"

Test etcd service

# curl -s -L http://localhost:4001/version
etcd 0.4.6
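The apiserver can be probed in a similar way. Assuming it is listening on its default insecure port 8080, its /healthz endpoint should return ok:

# curl -s http://localhost:8080/healthz
ok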

Verify Kubernetes configuration

# kubectl get nodes
NAME          LABELS    STATUS
127.0.0.1               Ready

If kubernetes is configured properly, kubectl get nodes should list the local node (127.0.0.1) in “Ready” status.

Activity 2: Deploying multi-container application

Logical diagram

1. Understanding the Mariadb ReplicationController manifest file

Under the /root folder, open the file 1_mariadb_rc.yaml using vi and observe the following lines:

1 id: "db-controller"
2 kind: "ReplicationController"
3 apiVersion: "v1beta1"

The keyword “kind” identifies what type of resource this is. Here the kind is “ReplicationController”, so the mariadb pod will be started and managed via the ReplicationController. For this demo, we will be using apiVersion v1beta1.

4 desiredState:
5   replicas: 1
6   replicaSelector:
7     selectorname: "db"

Line 5 (replicas: 1) tells kubernetes to always run one instance of the pod; if the pod dies, the replication controller starts it again. replicaSelector (lines 6-7) identifies the set of pods that this replicationcontroller is responsible for.

13 containers:
14 - name: "db"
15   image: "backend-mariadb"
16   ports:
17   - containerPort: 3306

Line 15 tells kubernetes to use the container image backend-mariadb. If this image is not available locally, it will first search the Red Hat repository. If the image is not available in the Red Hat repository, then it will search Docker Hub. The docker daemon can be configured to use a custom or private registry as well.

18 labels:
19   name: "db"
20   selectorname: "db"
21 labels:
22   name: "db-controller"

Kubernetes will label the mariadb pod as “db” (line 19), and lines 21-22 label the replicationcontroller as “db-controller”.
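Putting the excerpts together, the complete 1_mariadb_rc.yaml should look roughly like the sketch below. Lines 8-12 (the podTemplate wrapper used by the v1beta1 format) are not shown in the excerpts above and are reconstructed here as an assumption:

 1 id: "db-controller"
 2 kind: "ReplicationController"
 3 apiVersion: "v1beta1"
 4 desiredState:
 5   replicas: 1
 6   replicaSelector:
 7     selectorname: "db"
 8   podTemplate:
 9     desiredState:
10       manifest:
11         version: "v1beta1"
12         id: "db"
13         containers:
14         - name: "db"
15           image: "backend-mariadb"
16           ports:
17           - containerPort: 3306
18     labels:
19       name: "db"
20       selectorname: "db"
21 labels:
22   name: "db-controller"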

2. Starting the Mariadb replicationController

# kubectl create -f 1_mariadb_rc.yaml

The above command will start the replicationController “db-controller” and the corresponding mariadb pod.

3. Verify whether the mariadb pod has been started

# kubectl get pods

If you notice the status of the pod as “Pending”, execute the same command again. Within a few seconds, the pod status should change to “Running”.

-bash-4.2# kubectl get pods
POD   IP            CONTAINER(S)   IMAGE(S)          HOST         LABELS                    STATUS
029   172.17.0.23   db             backend-mariadb   127.0.0.1/   name=db,selectorname=db   Running

From the above output, note the values under the LABELS column and compare them with the Mariadb manifest file 1_mariadb_rc.yaml. Labels can be used for querying or selecting pods.
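Because the pods carry these labels, they can also be selected directly from the command line. Assuming your kubectl build supports label selectors (the -l/--selector flag), a quick query would be:

# kubectl get pods -l name=db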

The container has received an IP from the docker bridge docker0

# kubectl get rc
CONTROLLER      CONTAINER(S)   IMAGE(S)          SELECTOR          REPLICAS
db-controller   db             backend-mariadb   selectorname=db   1

Use the above command to query the replicationController information. The number of replicas for the mariadb pod is 1.

4. Understanding the mariadb service manifest file

Under the /root folder, open the file “2_mariadb_serv.yaml”:

1 id: "db-service" 2 kind: "Service" 3 apiVersion: "v1beta1" 4 port: 3306 5 selector: 6 name: "db" 7 labels: 8 name: "db" kind here is of type “service” (line 2). This service definition will forward incoming request to port 3306 to pods that are labeled as “db”

5. Deploying the mariadb services

# kubectl create -f 2_mariadb_serv.yaml

6. Verify the status of mariadb services

# kubectl get services
NAME            LABELS                                    SELECTOR   IP               PORT
kubernetes      component=apiserver,provider=kubernetes              10.254.110.210   443
kubernetes-ro   component=apiserver,provider=kubernetes              10.254.160.132   80
db-service      name=db                                   name=db    10.254.66.155    3306

The frontend app should use the db-service IP “10.254.66.155” as the service IP address when trying to connect to the backend database. This IP range is taken from the directive “KUBE_SERVICE_ADDRESSES=” in the configuration file “/etc/kubernetes/apiserver”. This address range is used for services.
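Pods created after this service exists (such as the frontend pods deployed in the next step) can also discover it through environment variables that kubernetes injects into their containers. The exact variable names depend on the kubernetes version; for a service with the id db-service they are typically of the form DB_SERVICE_SERVICE_HOST and DB_SERVICE_SERVICE_PORT. Once the frontend pods are running, you can peek into one of their containers with docker to verify (CONTAINER_ID is a placeholder for a frontend container ID taken from docker ps):

# docker ps
# docker exec CONTAINER_ID env | grep -i db_service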

7. Deploying the frontend application

The frontend pod manifest file is available under /root as “3_frontend_rc.yaml”. It is similar to the mariadb one.

Review the replicationController definition for the frontend python application and then deploy it. Note that the replica count is set to two.
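For reference, 3_frontend_rc.yaml follows the same v1beta1 layout as the mariadb controller. A rough sketch of the parts that differ, reconstructed from the labels and image names shown later in this guide (treat it as an approximation, not the literal file contents):

id: "webserver-controller"
kind: "ReplicationController"
apiVersion: "v1beta1"
desiredState:
  replicas: 2
  replicaSelector:
    selectorname: "webserver"
  podTemplate:
    ...
        containers:
        - name: "apache-frontend"
          image: "frontend-apache"
          ports:
          - containerPort: 80
    labels:
      name: "webserver"
      selectorname: "webserver"
      uses: "db"
labels:
  name: "webserver-controller"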

# kubectl create -f 3_frontend_rc.yaml

8. Deploy the Frontend Service endpoint

Review the frontend service file “4_front_svc.yaml” available under /root

1 kind: "Service" 2 id: "webserver-service" 3 apiVersion: "v1beta1" 4 port: 80 5 publicIPs: 6 - 192.168.100.166 7 selector: 8 name: "webserver" 9 labels: 10 name: "webserver"

Line 6: Replace the IP with the node's IP. This is optional. By defining a public IP address, we’re telling kubernetes that we want this service reachable on these particular IP addresses.

# kubectl create -f 4_front_svc.yaml

9. Testing the application

If all the pods are deployed properly, the output of “kubectl get pods” should be similar to one shown below

-bash-4.2# kubectl get pods
POD   IP            CONTAINER(S)      IMAGE(S)          HOST         LABELS                                           STATUS
029   172.17.0.23   db                backend-mariadb   127.0.0.1/   name=db,selectorname=db                          Running
838   172.17.0.24   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running
838   172.17.0.25   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running

Two frontend pods and one backend pod should be running. Verify whether the frontend pod is accessible. Use the IP of the frontend pod shown in the above command output

# curl http://172.17.0.24
The Web Server is Running

Test the backend connectivity from the frontend application. It connects to the database and prints the table contents.

# curl http://172.17.0.24/cgi-bin/action
--

RedHat rocks

--

The frontend service should also be accessible via the publicIP (here, the node IP as well).

# curl http://192.168.100.166/cgi-bin/action
--

RedHat rocks

--

Activity 3. Scaling up the number of pods

The frontend replicationcontroller was initially configured to start with two replicas. Update the file 3_frontend_rc.yaml so that it now starts 5 pods/replicas.

1 id: "webserver-controller" 2 kind: "ReplicationController" 3 apiVersion: "v1beta1" 4 desiredState: 5 replicas: 5--

Change line number 5 from “2” to “5”. Execute the below command after making the change

# kubectl update -f 3_frontend_rc.yaml

You can also scale directly using the below command

# kubectl resize --replicas=5 replicationcontrollers webserver-controller

This starts three additional frontend pods.
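Before listing the pods, you can confirm the new replica count with kubectl get rc. The exact column layout depends on the kubectl version, but the REPLICAS column for the frontend controller should now read 5, roughly like this:

# kubectl get rc
CONTROLLER             CONTAINER(S)      IMAGE(S)          SELECTOR                 REPLICAS
db-controller          db                backend-mariadb   selectorname=db          1
webserver-controller   apache-frontend   frontend-apache   selectorname=webserver   5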

-bash-4.2# kubectl get pods
POD                                    IP            CONTAINER(S)      IMAGE(S)          HOST         LABELS                                           STATUS
4cb3da07-457d-11e5-a970-52540086b516   172.17.0.27   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running
4cb5040c-457d-11e5-a970-52540086b516   172.17.0.28   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running
029d4b1d-4572-11e5-a970-52540086b516   172.17.0.23   db                backend-mariadb   127.0.0.1/   name=db,selectorname=db                          Running
83825c9b-4572-11e5-a970-52540086b516   172.17.0.24   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running
8382a58e-4572-11e5-a970-52540086b516   172.17.0.25   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running
4cb1e0fc-457d-11e5-a970-52540086b516   172.17.0.26   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db

Activity 4. Kubernetes rolling updates

Requirement

The application/container image needs to be updated, and all the running pods should use the updated docker image.

For the demo, the index.html page will be updated. The current output when accessing the webserver is “The Web Server is Running”.

This text is modified in the updated image to “The Web Server is Running using a modified image”. Under the /root folder, review the file 5_frontend_rolling.yaml and compare it with 3_frontend_rc.yaml.

13 containers:
14 - name: "apache-frontend"
15   image: "frontend-apache:1.1"
16   ports:
17   - containerPort: 80

We are instructing kubernetes to use the image “frontend-apache:1.1” instead of “frontend-apache”. The replication controller name has also been modified.
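For reference, the key differences from 3_frontend_rc.yaml are the controller id and the image tag. A minimal sketch of the changed lines (the new id, webserver1-controller, is inferred from the rolling-update output shown further below):

id: "webserver1-controller"
...
        containers:
        - name: "apache-frontend"
          image: "frontend-apache:1.1"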

Perform the rolling update

# kubectl rollingupdate webserver-controller -f 5_frontend_rolling.yaml

The above command replaces the existing named controller with the new controller, updating one pod at a time to use the new PodTemplate. The following output will be displayed after executing the command.

# kubectl rollingupdate webserver-controller -f 5_frontend_rolling.yaml
Creating webserver1-controller
Updating webserver-controller replicas: 4, webserver1-controller replicas: 1
I0818 09:08:03.812461    5491 restclient.go:146] Waiting for completion of operation 25
Updating webserver-controller replicas: 3, webserver1-controller replicas: 2
Updating webserver-controller replicas: 2, webserver1-controller replicas: 3
Updating webserver-controller replicas: 1, webserver1-controller replicas: 4
Updating webserver-controller replicas: 0, webserver1-controller replicas: 5
Update succeeded. Deleting webserver-controller

Verify whether all the pods are using the new updated image.

Find the IP of the new pods using the command “kubectl get pods”. Using curl, access the webpage served by a frontend pod.

-bash-4.2# curl 172.17.0.18
The Web Server is Running using a modified image

References:

Get Started Orchestrating Containers with Kubernetes https://access.redhat.com/articles/1198103

Getting Started with Red Hat Enterprise Linux Atomic Host https://access.redhat.com/articles/rhel-atomic-getting-started

Document prepared by

Ranjith Rajaram [email protected] twitter: @ranjithrajaram