Kubernetes Examples

U.S. PATENT AND TRADEMARK OFFICE (AUGUST 22, 2019)

Contents

Kubernetes Core Examples
  Kubernetes Interfaces
    kubectl
  Stateless Applications
    Example: Deploy a Java Spring Application
  Dynamic Storage and DaemonSet
    Deploy Dynamic Provisioner as a DaemonSet
  Stateful Applications
    Example: Deploy MySQL
    Example: Deploy Lime Survey Against Database


Kubernetes Core Examples

Kubernetes Interfaces

kubectl

General Command Structure

kubectl organizes its operations into a set of broad commands: get, describe, config, … The commands work in a consistent fashion across resource types.

Retrieve resources: kubectl get

Describe or inspect resources: kubectl describe

Create new resources: kubectl create

Most types of resources can be modified in either an “imperative” or “declarative” fashion.

• Imperative modifications are made by running a command and specifying options. Commands are often described as “verbs” that apply a specific action. Example:

kubectl scale deployment deployment-name --replicas=3

• Declarative modifications are made by specifying the desired options in a manifest and running a command with the -f (file) option to create or apply the changes. Example:

kubectl create -f deployment-manifest.yaml

to create a new resource or:

kubectl apply -f deployment-manifest.yaml

to update an existing resource. When working with complex applications, it is possible to create more than a single object in a manifest. The different blocks should be offset with three dashes --- to indicate a new section.
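As a sketch of a multi-object manifest, a single file might define both a Deployment and its Service, separated by ---. The resource names and image below are illustrative placeholders, not from the examples that follow:

```yaml
# Two objects in one manifest, separated by ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: nginx:1.17
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80
```

Applying this file with kubectl apply -f creates (or updates) both objects in one operation.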


Cluster Commands

Check the currently active context:

kubectl config current-context

View available contexts:

kubectl config get-contexts

Check available nodes:

kubectl get nodes

Show the current state of a node, metadata, and events:

kubectl describe node node-name

Show namespaces:

kubectl get namespaces

Describe a namespace:

kubectl describe namespace default

Stateless Applications

While Kubernetes can be used to run nearly any type of workload, perhaps the most common is the stateless workload. Stateless applications:

• have no persistent storage or volume associated with them
• utilize backing storage, such as a database or external object store, to persist data
• are less susceptible to errors caused by container start/stop or migration
• can be scaled horizontally without special consideration for clustering or coordination of operations

Their deployment often follows a set procedure:

• Create and validate a container image
  – The container image might be created on a local workstation and validated using docker run
  – Alternatively, it might be created through an automated process and validated using unit and functional tests
• Describe the desired structure of how the system will be deployed
  – What resources will the container application use? What services will it consume as part of its operations?
  – Create supporting objects, such as configuration maps (ConfigMap), to provide the needed structure to all running instances of the application.
  – Create the manifests or deployment command for the application itself. Stateless applications are most often deployed as deployments, rather than directly as pods or replica sets.
• Deploy the system to Kubernetes
• Validate that the system was deployed without error
  – Check the initiation of containers and associated resources
  – For web applications or other systems, utilize kubectl port-forward to create a tunnel and check the application function directly.
• Expose the service through the use of an appropriate service type
  – NodePort: provides an “inside the firewall” validation (quick and dirty)
  – LoadBalancer: exposes the application externally (“outside the firewall”)
  – Ingress: exposes a portion of the application or allows for the utilization of more complex access control, such as authentication (HTTP basic or digest, OAuth2) or integration with an external authorization application (JSON web tokens).
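The Ingress option can be sketched as a minimal manifest. The host name, backend service name, and port below are illustrative placeholders, and the annotations assume an NGINX ingress controller is installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Assumes the NGINX ingress controller; enables HTTP basic auth
    # against a secret named basic-auth (hypothetical).
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-app
              servicePort: 8080
```

Routing and access control live in the Ingress object, so the backing service can remain a plain ClusterIP service.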

Example: Deploy a Java Spring Application

• Reference: Building Spring Applications with Docker

1. Checkout the example application from the Spring Examples repository.

$ git clone https://github.com/spring-projects/spring-petclinic.git

2. Create a Docker container image using the Jib Maven plugin

$ cd spring-petclinic
$ docker run -it --rm --workdir /src -v $(pwd):/src \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/bin/docker:/usr/bin/docker \
    maven:3.6-jdk-11 mvn compile -Dimage=spring/petclinic \
    com.google.cloud.tools:jib-maven-plugin:1.0.0:dockerBuild

Google Jib provides an automation tool that can build container images via mvn. The builder above requires access to the Docker socket. Mounting the socket as shown is a technique called Docker in Docker.

3. Ensure that the image built as expected

$ docker images spring/petclinic

4. Run the Docker image locally to validate

$ docker run -it --rm -p 8080:8080 spring/petclinic

5. Tag the image for the registry and push in prep of Kubernetes deployment

$ docker tag spring/petclinic registry.example.com:5000/examples/spring-petclinic
$ docker push registry.example.com:5000/examples/spring-petclinic

6. Create a deployment using the image

$ kubectl create deployment spring-example-petclinic \
    --image=registry.example.com:5000/examples/spring-petclinic

7. Use kubectl port-forward to access the Kubernetes deployment and validate the application

$ kubectl port-forward deployment/spring-example-petclinic 8080:8080

8. Expose the application as a service

$ kubectl expose deployment spring-example-petclinic \
    --type=NodePort --port 8080 --target-port 8080

9. Scale the application

$ kubectl scale deployment spring-example-petclinic --replicas 3

10. Remove the deployment and service

$ kubectl delete deployment spring-example-petclinic
$ kubectl delete service spring-example-petclinic

Dynamic Storage and DaemonSet

Storage in Kubernetes is managed through the use of persistent volumes and persistent volume claims. These provide a consistent interface through which storage can be associated with an application (deployment or stateful set).

• Persistent volumes bind a certain type of storage to the Kubernetes cluster.
• Volume claims allow specific pods to “claim” the storage for use by an application.

Persistent volumes are typically created through the use of one of two methods:

• static provisioning: volumes and claims are created manually by an infrastructure or operations team
  – cluster administrators must manually make calls to the cloud or storage provider
  – persistent volumes describing the resources are then created manually
  – volume claims linked to the volumes are then created (also manually) for use by pods
• dynamic provisioning: storage volumes are created on-demand
  – resources are created when requested by users
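As a sketch of static provisioning, an administrator might create a volume and a matching claim by hand. The NFS server address, export path, and sizes below are illustrative placeholders:

```yaml
# Manually created volume backed by an NFS export (addresses are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.10.10.50
    path: /exports/data
---
# Claim that can bind to the volume above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

With dynamic provisioning, only the claim is written by hand; the provisioner creates the backing volume automatically.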

When deploying systems to manage dynamic storage, a common technique is to utilize Kubernetes worker nodes to host the resources.

• A daemon set ensures that all (or some) nodes run a copy of a pod.
  – As nodes are added to the cluster, pods are added to them.
  – As nodes are removed, the pods running on them are garbage collected.
• Common use cases of a DaemonSet:
  – Running cluster storage daemons, such as glusterd or ceph, on a specific set of nodes
  – Running log collection daemons such as fluentd or logstash
  – Running node monitoring daemons on nodes, such as the Prometheus node exporter
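The log collection use case can be sketched as a minimal DaemonSet manifest; the image tag and names are illustrative assumptions, not part of the deployment steps below:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.4
          volumeMounts:
            # Read host logs so one collector per node can ship them
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

Because a DaemonSet has no replica count, scaling happens implicitly as nodes join or leave the cluster.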

Deploy Dynamic Provisioner as a DaemonSet

• Reference: GlusterFS Simple Provisioner for Kubernetes 1.5+

The following steps will provision a simple GlusterFS provisioner. It will permit volumes to be created dynamically with GlusterFS but will not manage the cluster itself. For that functionality, see Heketi.

1. List DaemonSet, Storage Class, PV’s, and PVC

kubectl get ds,sc,pv,pvc

kubectl can be used to retrieve multiple types of objects at once.

2. Deploy the DaemonSet from https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/gluster/glusterfs/deploy/glusterfs-daemonset.yaml.

$ kubectl apply -f https://bit.ly/2Zdh3Bf

Show the new DaemonSet; notice the all-zero counts and the use of the node selector:

kubectl get ds,sc,pv,pvc

3. Label any node that should be used for storage. Nodes will not appear in the storage cluster until labeled.

kubectl label node <...node...> storagenode=glusterfs

4. Obtain Pod Names and Node IP for next step

kubectl get pods -o wide --selector=glusterfs-node=pod

5. Execute gluster commands on one of the nodes to create a trusted storage pool

kubectl exec -ti glusterfs-grck0 gluster peer probe 10.10.10.52
kubectl exec -ti glusterfs-grck0 gluster peer probe 10.10.10.53
kubectl exec -ti glusterfs-grck0 gluster peer probe 10.10.10.54

6. Create a service account and provide RBAC rules for the provisioner from rbac.yaml

kubectl apply -f https://bit.ly/2ZkVpLj

7. Create the directory volumes and bricks will be located under

for i in worker1 worker2 worker3 ; do
    ssh root@${i} mkdir -p /data/glusterfs
done

8. Deploy the new provisioner using the manifest from deployment.yaml

kubectl apply -f https://bit.ly/2P5gXfl

9. Create a new StorageClass

echo 'kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs-simple
provisioner: gluster.org/glusterfs-simple
parameters:
  forceCreate: "true"
  brickrootPaths: "10.10.10.52:/data/glusterfs/,10.10.10.53:/data/glusterfs/,10.10.10.54:/data/glusterfs"' | kubectl create -f -

10. Check on the ds,sc,pv,pvc again

kubectl get ds,sc,pv,pvc

11. Create a simple PVC

echo 'apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-simple
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
' | kubectl create -f -

12. Check on the ds,sc,pv,pvc again

kubectl get ds,sc,pv,pvc

13. List the Glusterfs Volumes

kubectl exec -ti glusterfs-grck0 gluster volume list

14. Remove the PVC and watch the volume disappear (may take a minute or two)

kubectl delete pvc gluster1 kubectl exec -ti glusterfs-grck0 gluster volume list

Stateful Applications

Stateful applications have a different set of needs than stateless applications. Stateful applications:

• require some type of persistent storage
• often include a discrete identity or role within a larger system which needs to be preserved
  – master or slave in a distributed database
  – a specific node in a redundant storage system
  – identity needs to be preserved between restarts of the container
• along with an identity, there needs to exist some type of mechanism for routing traffic to particular pods (nodes)

Stateful sets fulfill the needs of stateful applications in Kubernetes:

• Bring the concepts of replica sets to stateful pods
  – enable running of pods in a clustered mode
  – ideal for deploying highly available database workloads
• Stateful sets provide:
  – stable, unique network identifiers
  – stable persistent storage
  – ordered, graceful deployment and scaling
  – ordered, graceful deletion and termination

Stateful sets provide their functionality by relying on supporting structures

• Networking
  – Depends on a headless service (a service mapping without a cluster IP) for pod-to-pod communications
  – Each pod gets a DNS name accessible to other pods in the set and cluster
• Storage: leverages volumes and volume claims
• Identity: pods are suffixed with a predictable index, and identity remains consistent
• Startup: pods are created sequentially and terminated in last in/first out order

Example: Deploy MySQL

• Reference: How to install Limesurvey on 18.04
• Reference: Kubernetes Tutorial installing Wordpress w/ MySQL StatefulSet

1. Set up the GlusterFS Provisioner

2. Create a “secret” for MySQL to use

kubectl create secret generic mysql-pass --from-literal=password=makeitso

3. Create a StatefulSet

kubectl create -f - << END
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: StatefulSet
metadata:
  name: limesurvey-mysql
  labels:
    app: limesurvey
spec:
  serviceName: limesurvey-mysql
  replicas: 1
  selector:
    matchLabels:
      app: limesurvey
      tier: mysql
  template:
    metadata:
      labels:
        app: limesurvey
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-persistent-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: glusterfs-simple
        resources:
          requests:
            storage: 2Gi
END

4. Create a headless service

kubectl create -f - << END
apiVersion: v1
kind: Service
metadata:
  name: limesurvey-mysql
  labels:
    app: limesurvey
spec:
  ports:
    - port: 3306
  selector:
    app: limesurvey
    tier: mysql
  clusterIP: None
END

Example: Deploy Lime Survey Against Database

1. Add necessary entries to the database

kubectl exec -t limesurvey-mysql-0 -- mysql -u root -pmakeitso << END
CREATE DATABASE limesurvey_db;
GRANT ALL PRIVILEGES ON limesurvey_db.* TO 'limesurvey_user'@'%' IDENTIFIED BY 'SqlPassWord';
FLUSH PRIVILEGES;
\q
END

2. Create a new Dockerfile

FROM ubuntu:18.04

RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository ppa:ondrej/ -y && apt-get update
RUN apt-get install -y apache2 php7.2 \
    php7.2-cli php7.2-common php7.2-mbstring php7.2-xml php7.2-mysql \
    php7.2-gd php7.2-zip php7.2-ldap php7.2-imap unzip wget curl \
    && apt-get clean

ADD https://download.limesurvey.org/latest-stable-release/limesurvey3.14.3+180809.tar.gz /var/www/html/

RUN chown -R www-data:www-data /var/www/html/limesurvey

ENTRYPOINT ["apachectl", "-DFOREGROUND"]

3. Build and push the new image

docker build -t limesurvey:v1 .
docker tag limesurvey:v1 registry.example.com:5000/limesurvey/limesurvey:v1
docker push registry.example.com:5000/limesurvey/limesurvey:v1

4. Create a new ConfigMap holding the config for Apache

Open the file for editing in a text editor:

vim limesurvey.conf

Text of the Apache virtual host that will serve the application:

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot /var/www/html/limesurvey/
    ServerName example.com
    <Directory /var/www/html/limesurvey/>
        Options FollowSymLinks
        AllowOverride All
    </Directory>
    ErrorLog /var/log/apache2/lime-error_log
    CustomLog /var/log/apache2/lime-access_log common
</VirtualHost>

Create the configuration for the image:

kubectl create configmap limesurvey-apache-config --from-file=limesurvey.conf=limesurvey.conf
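To put the ConfigMap to use, the LimeSurvey pod would mount it over the Apache site configuration. The following pod-spec fragment is a hedged sketch: the container name and mount path are assumptions and do not come from the steps above:

```yaml
# Fragment of a Deployment pod spec mounting the ConfigMap created above.
spec:
  containers:
    - name: limesurvey
      image: registry.example.com:5000/limesurvey/limesurvey:v1
      volumeMounts:
        # Assumed mount path; serves limesurvey.conf as the active site config
        - name: apache-config
          mountPath: /etc/apache2/sites-enabled/
  volumes:
    - name: apache-config
      configMap:
        name: limesurvey-apache-config
```

Because the configuration lives outside the image, the virtual host can be changed by updating the ConfigMap and restarting the pods, without rebuilding the container.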