Kubernetes: Orchestrate Your Containers

Environment:

- KVM hypervisor
- Red Hat Enterprise Linux Atomic Host 7.1 qcow2 image
- All-in-one configuration
- Pod, replication controller, and service manifest files are available under the /root folder in the qcow2 image.
- Required container images are included in the qcow2 image.

Activities:

1. Setting up the kubernetes nodes
2. Deploying a multi-container application
3. Scaling up the number of pods
4. Rolling updates

Importing the Virtual Machine image

Copy the image rhel-atomic-cloud-7.1-0.x86_64.qcow2 and atomic0-cidata1.iso to your desktop or laptop. Unzip the archive.

1. Open a terminal and execute the command virt-manager as the root user.
2. Hit the icon for "Creating a new virtual machine".
3. In the new window, enter the following details:
   Name: Kubernetes
   Select "Import Existing disk image"
   Hit next.
4. In the new window, hit the browse button and point to the location of the image rhel-atomic-cloud-7.1-0.x86_64.qcow2. Select the OS type as Linux and the version as Fedora. Click the forward button.
5. Allocate a minimum of 1024 MB RAM and 1 CPU. Hit forward. Select the option "Customize Configuration Before Install" and hit finish.
6. Add a hardware -> storage -> cdrom device and point the ISO to atomic0-cidata1.iso.
7. Make sure the NIC device model is set to virtio.
8. Click finish.
9. Hit apply and click on the "Begin installation" option. The VM will be spawned and you will notice the login screen in the VM console.

Kubernetes services

Kubernetes relies on a set of daemons: apiserver, scheduler, controller, kubelet, and proxy. These daemons/services are managed by systemd, and their configuration resides in a central location: /etc/kubernetes. The services are split according to their intended role in the kubernetes environment. For this demo, we will use an all-in-one node configuration: all of the services below are started on the same virtual machine.

Kubernetes Master Services

The three services that constitute the kubernetes master are:

- kube-apiserver
- kube-controller-manager
- kube-scheduler

This host should be configured to run all of the above services. Another essential service that the master node should run is etcd.

Kubernetes Node Services (formerly known as Minion)

The following services should be configured to run on the node:

1. kubelet
2. kube-proxy
3. docker

Activity 1: Setting up the kubernetes cluster

Login as the root user with the password "atomicrhte". Since both kubernetes master and minion services are running on the local system, you don't need to change the kubernetes configuration files. Master and minion services will point to each other on localhost, and services are made available only on localhost.
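Before starting the services, you can optionally confirm that the shipped defaults really do point at localhost. A quick sketch; the file names, KUBE_* keys, and values shown below are those typically used by the kubernetes packaging of this era, so verify them against your image:

# grep -v "^#" /etc/kubernetes/config | grep KUBE_MASTER
KUBE_MASTER="--master=http://127.0.0.1:8080"

# grep -v "^#" /etc/kubernetes/kubelet | grep KUBELET_ADDRESS
KUBELET_ADDRESS="--address=127.0.0.1"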
Starting Kubernetes

To start all the Kubernetes Master Services, use the command below:

# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
  done

To start all the Kubernetes Node Services, use the command below:

# for SERVICES in docker kube-proxy.service kubelet.service; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
  done

Verify the configuration

Using netstat or ss, verify that the required services are up and listening:

# netstat -tulnp | grep -E "(kube)|(etcd)"

or, using ss:

# ss -tulnp | grep -E "(kube)|(etcd)"

Test the etcd service:

# curl -s -L http://localhost:4001/version
etcd 0.4.6

Verify the kubernetes configuration:

# kubectl get nodes
NAME        LABELS   STATUS
127.0.0.1   <none>   Ready

If kubernetes is configured properly, kubectl get nodes should list localhost in "Ready" status.

Activity 2: Deploying a multi-container application

Logical diagram

1. Understanding the Mariadb ReplicationController manifest file

Under the /root folder, open the file 1_mariadb_rc.yaml using vi and observe the following lines:

1 id: "db-controller"
2 kind: "ReplicationController"
3 apiVersion: "v1beta1"

The keyword "kind" identifies what type of resource this is; here it refers to a "ReplicationController". The mariadb service will be started via the ReplicationController. For this demo, we will be using apiVersion v1beta1.

4 desiredState:
5   replicas: 1
6   replicaSelector:
7     selectorname: "db"

Line 5 (replicas: 1) tells kubernetes to always run one instance of the pod; if the pod dies, the pod is started again. replicaSelector identifies the set of pods that this replicationController is responsible for.

13 containers:
14   - name: "db"
15     image: "backend-mariadb"
16     ports:
17       - containerPort: 3306

Line 15 tells kubernetes to use the container image backend-mariadb. If this image is not available locally, kubernetes will first search the Red Hat repository; if the image is not available there, it will then search Docker Hub. The docker daemon can also be configured to use a custom or private registry.

18 labels:
19   name: "db"
20   selectorname: "db"
21 labels:
22   name: "db-controller"

Kubernetes will label the mariadb pods as "db" (line 19), and line 21 labels the replicationController as "db-controller".

2. Starting the Mariadb replicationController

# kubectl create -f 1_mariadb_rc.yaml

The above command starts the replicationController "db-controller" and the corresponding mariadb pod.

3. Verify whether the mariadb pod has been started

# kubectl get pods

If you notice the status of the pod as "Pending", execute the same command again. Within a few seconds, the pod status should change to Running:

-bash-4.2# kubectl get pods
POD   IP            CONTAINER(S)   IMAGE(S)          HOST         LABELS                    STATUS
029   172.17.0.23   db             backend-mariadb   127.0.0.1/   name=db,selectorname=db   Running

From the above output, note the values under the LABELS column and compare them with the mariadb manifest file 1_mariadb_rc.yaml. Labels can be used for querying or selecting pods. The container has received an IP from the docker bridge docker0.

# kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)         SELECTOR          REPLICAS
maria        mariadbapp     mariadb:latest   name=mariadbpod   1

Use the above command to query the replicationController information. The number of replicas for the mariadb pod is 1.
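Because the pods carry the labels defined in the manifest, they can also be selected by label instead of listing everything. A small sketch (the -l/--selector flag should be available in the kubectl shipped with this image; check kubectl get --help if not):

# kubectl get pods -l name=db

This returns only the mariadb pod, which becomes handy once many differently labeled pods are running.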
4. Understanding the mariadb service manifest file

Under the /root folder, open the file 2_mariadb_serv.yaml:

1 id: "db-service"
2 kind: "Service"
3 apiVersion: "v1beta1"
4 port: 3306
5 selector:
6   name: "db"
7 labels:
8   name: "db"

kind here is of type "Service" (line 2). This service definition will forward incoming requests on port 3306 to the pods that are labeled "db".

5. Deploying the mariadb service

# kubectl create -f 2_mariadb_serv.yaml

6. Verify the status of the mariadb service

# kubectl get services
NAME            LABELS                                    SELECTOR   IP               PORT
kubernetes      component=apiserver,provider=kubernetes   <none>     10.254.110.210   443
kubernetes-ro   component=apiserver,provider=kubernetes   <none>     10.254.160.132   80
db-service      name=db                                   name=db    10.254.66.155    3306

The frontend app should use the IP "10.254.66.155" as the service IP address when trying to connect to the backend database. This IP range is taken from the directive KUBE_SERVICE_ADDRESSES in the configuration file /etc/kubernetes/apiserver; this address range is used for services.

7. Deploying the frontend application

The frontend pod manifest file is available under /root as 3_frontend_rc.yaml; it is similar to the mariadb one. Review the replicationController definition for the frontend python application and then deploy it. Note that the replica count is set to two.

# kubectl create -f 3_frontend_rc.yaml

8. Deploy the frontend service endpoint

Review the frontend service file 4_front_svc.yaml available under /root:

1 kind: "Service"
2 id: "webserver-service"
3 apiVersion: "v1beta1"
4 port: 80
5 publicIPs:
6   - 192.168.100.166
7 selector:
8   name: "webserver"
9 labels:
10   name: "webserver"

Line 6: Replace the IP with the node's IP. This is optional: by defining a public IP address, we're telling kubernetes that we want this service reachable on that particular IP address.

# kubectl create -f 4_front_svc.yaml

9. Testing the application

If all the pods are deployed properly, the output of "kubectl get pods" should be similar to the one shown below:

-bash-4.2# kubectl get pods
POD   IP            CONTAINER(S)      IMAGE(S)          HOST         LABELS                                          STATUS
029   172.17.0.23   db                backend-mariadb   127.0.0.1/   name=db,selectorname=db                         Running
838   172.17.0.24   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running
838   172.17.0.25   apache-frontend   frontend-apache   127.0.0.1/   name=webserver,selectorname=webserver,uses=db   Running

Two frontend pods and one backend pod should be running. Verify whether the frontend pod is accessible, using the IP of a frontend pod shown in the above output:

# curl http://172.17.0.24
The Web Server is Running

Test the backend connectivity from the frontend application; it connects to the database and prints the table contents:

# curl http://172.17.0.24/cgi-bin/action
--
<h2>RedHat rocks</h2>
--

The frontend service should also be accessible via the publicIP (here, the node IP as well):

# curl http://192.168.100.166/cgi-bin/action
--
<h2>RedHat rocks</h2>
--

Activity 3: Scaling up the number of pods

The frontend replicationController was configured to start with two replicas initially. Update the file 3_frontend_rc.yaml so that it now starts 5 pods/replicas:

1 id: "webserver-controller"
2 kind: "ReplicationController"
3 apiVersion: "v1beta1"
4 desiredState:
5   replicas: 5

Change line 5 from "2" to "5".
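If you prefer to make the change non-interactively rather than editing the file in vi, a sed one-liner works as well. A sketch, assuming the string "replicas: 2" occurs exactly once in the file (check with grep -n "replicas" 3_frontend_rc.yaml first):

# sed -i 's/replicas: 2/replicas: 5/' 3_frontend_rc.yaml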
Execute the below command after making the change:

# kubectl update -f 3_frontend_rc.yaml

You can also scale directly using the command below:

# kubectl resize --replicas=5 replicationcontrollers webserver-controller

Either way, this should start three additional frontend pods.
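To confirm the scale-out, count the frontend pods; once all replicas have left the "Pending" state, five should be listed (a quick sketch that greps for the name=webserver label shown in the pod output above):

# kubectl get pods | grep -c webserver
5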