
Kubernetes Examples
U.S. PATENT AND TRADEMARK OFFICE (AUGUST 22, 2019)

Contents

- Kubernetes Core Examples
  - Kubernetes Interfaces
    - kubectl
  - Stateless Applications
    - Example: Deploy a Java Spring Application
  - Dynamic Storage and DaemonSet
    - Deploy Dynamic Provisioner as a DaemonSet
  - Stateful Applications
    - Example: Deploy MySQL
    - Example: Deploy Lime Survey Against Database

# Kubernetes Core Examples

## Kubernetes Interfaces

### kubectl

#### General Command Structure

kubectl organizes its operations into a set of broad commands: `get`, `describe`, `config`, and so on. The commands work in a consistent fashion across resource types.

Retrieve resources:

```
kubectl get <resource>
```

Describe or inspect resources:

```
kubectl describe <resource-type> <resource-name>
```

Create new resources:

```
kubectl create <resource-type> <resource-name>
```

Most types of resources can be modified in either an "imperative" or "declarative" fashion.

- Imperative modifications are made by running a command and specifying options. Commands are often described as "verbs" that apply a specific action. Example:

  ```
  kubectl scale deployment deployment-name --replicas=3
  ```

- Declarative modifications are made by specifying the desired options in a manifest and running a command with the `-f` (file) option to create or apply the changes. Example:

  ```
  kubectl create -f deployment-manifest.yaml
  ```

  to create a new resource, or:

  ```
  kubectl apply -f deployment-manifest.yaml
  ```

  to update an existing resource.

When working with complex applications, it is possible to create more than a single object in a manifest. The different blocks should be offset with three dashes (`---`) to indicate a new section.
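As a sketch, a multi-object manifest might pair a Deployment with its Service; the names and image below are illustrative, not part of any example in this document:

```yaml
# deployment-manifest.yaml -- two objects in one file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:1.0   # illustrative image
        ports:
        - containerPort: 8080
---
# the three dashes above begin a second object in the same file
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
  - port: 8080
    targetPort: 8080
```

A single `kubectl apply -f deployment-manifest.yaml` then creates or updates both objects together.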
#### Cluster Commands

Check the currently active context:

```
kubectl config current-context
```

View available contexts:

```
kubectl config get-contexts
```

Check available nodes:

```
kubectl get nodes
```

Show the current state of a node, its metadata, and events:

```
kubectl describe node <node-name>
```

Show namespaces:

```
kubectl get namespaces
```

Describe a namespace:

```
kubectl describe namespace default
```

## Stateless Applications

While Kubernetes can be used to run nearly any type of workload, perhaps the most common is the "stateless workload." Stateless applications:

- have no persistent storage or volume associated with them
- utilize backing storage, such as a database or external object store, to persist data
- are less susceptible to errors caused by container start/stop or migration
- can be scaled horizontally without special consideration for clustering or coordination of operations

Their deployment often follows a set procedure:

- Create and validate a container image
  - The container image might be created on a local workstation and validated using `docker run`
  - Alternatively, it might be created through an automated process and validated using unit and functional tests
- Describe the desired structure of how the system will be deployed
  - What resources will the containerized application use? What services will it consume as part of its operations?
  - Create supporting objects, such as configuration maps (ConfigMap), to provide the needed structure to all running instances of the application.
  - Create the manifests or deployment command for the application itself. Stateless applications are most often deployed as deployments, rather than directly as pods or replica sets.
- Deploy the system to Kubernetes
- Validate that the system was deployed without error
  - Check the initiation of containers and associated resources
  - For web applications or other systems, utilize `kubectl port-forward` to create a tunnel and check the application function directly.
- Expose the service through the use of an appropriate service type
  - NodePort: provides an "inside the firewall" validation (quick and dirty)
  - LoadBalancer: exposes the application externally ("outside the firewall")
  - Ingress: exposes a portion of the application or allows for more complex access control, such as authentication (HTTP basic or digest, OAuth2) or integration with an external authorization application (JSON web tokens)

### Example: Deploy a Java Spring Application

- Reference: Building Spring Applications with Docker

1. Check out the example application from the Spring Examples repository.

   ```
   $ git clone https://github.com/spring-projects/spring-petclinic.git
   ```

2. Create a Docker container image using Google Jib.

   ```
   $ cd spring-petclinic
   $ docker run -it --rm --workdir /src -v $(pwd):/src \
       -v /var/run/docker.sock:/var/run/docker.sock \
       -v /usr/bin/docker:/usr/bin/docker \
       maven:3.6-jdk-11 mvn compile -Dimage=spring/petclinic \
       com.google.cloud.tools:jib-maven-plugin:1.0.0:dockerBuild
   ```

   Google Jib provides an automation tool that can build container images via `mvn`. The builder above requires access to the Docker socket. Mounting the socket as shown is a technique called Docker in Docker.

3. Ensure that the image built as expected.

   ```
   $ docker images spring/petclinic
   ```

4. Run the Docker image locally to validate.

   ```
   $ docker run -it --rm -p 8080:8080 spring/petclinic
   ```

5. Tag the image for the registry and push it in preparation for the Kubernetes deployment.

   ```
   $ docker tag spring/petclinic registry.example.com:5000/examples/spring-petclinic
   $ docker push registry.example.com:5000/examples/spring-petclinic
   ```

6. Create a deployment using the image.

   ```
   $ kubectl create deployment spring-example-petclinic \
       --image=registry.example.com:5000/examples/spring-petclinic
   ```

7. Use `kubectl port-forward` to access the Kubernetes deployment and validate the application.

   ```
   $ kubectl port-forward deployment/spring-example-petclinic 8080:8080
   ```
8. Expose the application as a service.

   ```
   $ kubectl expose deployment spring-example-petclinic \
       --type=NodePort --port 8080 --target-port 8080
   ```

9. Scale the application.

   ```
   $ kubectl scale deployment spring-example-petclinic --replicas 3
   ```

10. Remove the deployment and service.

    ```
    $ kubectl delete deployment spring-example-petclinic
    $ kubectl delete service spring-example-petclinic
    ```

## Dynamic Storage and DaemonSet

Storage in Kubernetes is managed through the use of persistent volumes and persistent volume claims. These provide a consistent interface through which storage can be associated with an application (a deployment or stateful set).

- Persistent volumes bind a certain type of storage to the Kubernetes cluster.
- Volume claims allow specific pods to "claim" the storage for use by an application.

Persistent volumes are typically created through one of two methods:

- Static provisioning: volumes and claims are created manually by an infrastructure or operations team.
  - Cluster administrators must manually make calls to the cloud or storage provider.
  - Persistent volumes describing the resources are then created manually.
  - Volume claims linked to the volumes are then created (also manually) for use by pods.
- Dynamic provisioning: storage volumes are created on demand.
  - Resources are created as users request them.

When deploying systems to manage dynamic storage, a common technique is to utilize Kubernetes worker nodes to host the resources.

- A DaemonSet ensures that all (or some) nodes run a copy of a pod.
  - As nodes are added to the cluster, pods are added to them.
  - As nodes are removed, the pods running on them are garbage collected.
- Common use cases for a DaemonSet:
  - Running cluster storage daemons, such as glusterd or ceph, on a specific set of nodes
  - Running log collection daemons, such as fluentd or logstash
  - Running node monitoring daemons, such as the Prometheus node exporter

### Deploy Dynamic Provisioner as a DaemonSet

- Reference: GlusterFS Simple Provisioner for Kubernetes 1.5+

The following steps provision a simple GlusterFS provisioner. It permits volumes to be created dynamically with GlusterFS but does not manage the cluster itself. For that functionality, see Heketi.

1. List the DaemonSets, StorageClasses, PersistentVolumes, and PersistentVolumeClaims.

   ```
   kubectl get ds,sc,pv,pvc
   ```

   kubectl can be used to retrieve multiple types of objects at once.

2. Deploy the DaemonSet from https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/gluster/glusterfs/deploy/glusterfs-daemonset.yaml.

   ```
   $ kubectl apply -f https://bit.ly/2Zdh3Bf
   ```

   Show the new DaemonSet; notice the all-zero counts and the use of the node selector.

   ```
   kubectl get ds,sc,pv,pvc
   ```

3. Label any node that should be used for storage. Nodes will not appear in the storage cluster until labeled.

   ```
   kubectl label node <...node...> storagenode=glusterfs
   ```

4. Obtain the pod names and node IPs for the next step.

   ```
   kubectl get pods -o wide --selector=glusterfs-node=pod
   ```

5. Execute the gluster command on one of the nodes to create a trusted storage pool.

   ```
   kubectl exec -ti glusterfs-grck0 gluster peer probe 10.10.10.52
   kubectl exec -ti glusterfs-grck0 gluster peer probe 10.10.10.53
   kubectl exec -ti glusterfs-grck0 gluster peer probe 10.10.10.54
   ```

6. Create a service account and provide RBAC rules for the provisioner from rbac.yaml.

   ```
   kubectl apply -f https://bit.ly/2ZkVpLj
   ```

7. Create the directory that volumes and bricks will be located under.

   ```
   for i in worker1 worker2 worker3 ; do
     ssh root@${i} mkdir -p /data/glusterfs
   done
   ```

8. Deploy the new provisioner using the manifest from deployment.yaml.

   ```
   kubectl apply -f https://bit.ly/2P5gXfl
   ```
9. Create a new StorageClass.

   ```
   echo 'kind: StorageClass
   apiVersion: storage.k8s.io/v1
   metadata:
     name: glusterfs-simple
   provisioner: gluster.org/glusterfs-simple
   parameters:
     forceCreate: "true"
     brickrootPaths: "10.10.10.52:/data/glusterfs/,10.10.10.53:/data/glusterfs/,10.10.10.54:/data/glusterfs"' | kubectl create -f -
   ```

10. Check the DaemonSet, StorageClass, PV, and PVC again.

    ```
    kubectl get ds,sc,pv,pvc
    ```

11. Create a simple PVC.

    ```
    echo 'apiVersion:
    ```
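The node-selection mechanism behind steps 2 and 3 can be sketched as a DaemonSet whose pod template carries a `nodeSelector`, so only labeled nodes run the storage pod. This is an illustrative fragment under assumed names, not the actual manifest referenced in step 2:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs-node: pod        # matches the pod selector used in step 4
  template:
    metadata:
      labels:
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs   # only nodes labeled in step 3 run this pod
      containers:
      - name: glusterfs
        image: gluster/gluster-centos   # illustrative image
        securityContext:
          privileged: true       # storage daemons typically need host access
```

Until at least one node carries the `storagenode=glusterfs` label, the DaemonSet reports zero desired and ready pods, which accounts for the all-zero counts observed in step 2.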