Virtual Containers: Asset Management Best Practices and Licensing Considerations
Virtual containers have seen tremendous adoption and growth across all industries. In terms of IT asset management, however, containers are largely unmanaged and represent an unknown area of risk for many of our clients. Because containers are a newer technology, there is very little information available about managing them and about addressing the emerging SAM and ITAM challenges they bring. Due to this lack of public information, Anglepoint has published this whitepaper on navigating the world of containers, with an emphasis on asset management and licensing. We will cover everything from the history of containers, to what containers are, the benefits of containers, asset management best practices, and some publisher-specific licensing considerations.

A BRIEF HISTORY OF VIRTUAL CONTAINERS

The first proper containers came from the Linux world as LXC (LinuX Containers) in 2008. However, it wasn't until 2013 that containers entered the IT public consciousness, when Docker came onto the scene with enterprise usage in mind. Even then, containers remained more of an enthusiast's technology. In 2015, Google released and open-sourced Kubernetes, which manages and 'orchestrates' containers. Still, it was not until 2017 that Docker and Kubernetes had matured enough to be considered for production use within corporate environments. 2017 also saw VMware, Microsoft, and Amazon begin to support and offer solutions for Kubernetes and Docker on their top-tier cloud infrastructure.

WHAT IS A CONTAINER?

Often, people conflate the term 'container' with the multiple technologies that make up the container ecosystem. Let's look at what a modern container is at the most fundamental level.

Diagram 1

On the left side of diagram 1 is an operating system with several different processes (applications) installed and running. These processes are all installed in the same environment, or namespace if you are talking about Linux, and can interact with each other. A container is simply the isolation of a single process, wrapping it up in, just as it sounds, a container. This container is isolated from the host operating system and can only "see" and interact with what is explicitly allowed. The example below illustrates the point.

Example: Let's start with a traditional model in which we install applications directly on the OS. Here we've installed NGINX Web Server (a process), along with several dependencies that support it. Now let's say we also want to install NodeJS, which requires some of the same dependencies as NGINX Web Server, but perhaps in different versions. Under the traditional model, this would require a complicated configuration to ensure that each application points to the correct versions of its dependencies, and that the configuration is maintained whenever an application or dependency is updated.

If we were to use containers in this scenario, it would become much easier to manage. The process (NGINX Web Server in this example) would be bundled into a container together with the dependencies it relies on. When we want to add another process (NodeJS), it resides in its own container along with its own dependencies. This way, we don't have to worry about version conflicts, because everything is isolated.

Using containers is especially helpful when developing applications. Someone might be developing on a laptop, testing on a server, and then deploying to the cloud or to a co-worker's desktop. All these environments are likely different, with different versions of a dependency installed or slightly different hardware configurations, which creates additional troubleshooting effort. Containers, however, abstract away the hardware layer; they are platform agnostic. You could run the same container on a laptop, on a server, or in the cloud, and it will run the same way. Under the traditional model, migrating an application from on-premises to the cloud, or across cloud platforms, is an onerous process; with containers, it is streamlined and greatly simplified. The short sketch below shows this isolation in practice.
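To make the isolation concrete, here is a minimal sketch using the Docker SDK for Python (the docker package). It is not taken from this whitepaper; it assumes a local Docker daemon is running, and the image tags and container names are purely illustrative. Each process runs in its own container with its own bundled dependencies, so the two version requirements never have to be reconciled on the host:

    # Two isolated runtimes side by side: each container carries its
    # own dependency versions, so nothing conflicts on the host OS.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # NGINX, with whatever library versions its image bundles
    web = client.containers.run("nginx:1.25", detach=True, name="demo-web")

    # NodeJS, with its own (potentially different) dependency versions
    app = client.containers.run("node:20", command="node --version",
                                detach=True, name="demo-app")

    # Clean up; stopping one container never affects the other
    for c in (web, app):
        c.stop()
        c.remove()

Under the traditional model, these two sets of dependency versions would have to coexist on a single OS; here, each container simply ships its own.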
So we've gone over containers themselves, but there are other terms and technologies in the container ecosystem that we need to be familiar with. Let's take a look at those.

CONTAINER IMAGE

Container images are what most people are referring to when they talk about a container. A container image is the actual static file that contains the process and its dependencies; a container image becomes a container when it is running.

Container images themselves are immutable: all changes made to a container image become new 'layers' of the image, because changes are applied through a git-like push/pull mechanism. One benefit of image layers is that they create a natural audit trail when used in conjunction with a container registry (defined below). All changes are visible over time, and we can see the details of each change, including by whom it was made. These layers are also hierarchical, so container images can have parent/child relationships. For example, in our earlier scenario a container was running NGINX, but let's say we also needed a container running NGINX and PHP. A child container image could be created that references and builds on our main NGINX image.

Example: Let's imagine that we discovered a vulnerability in one of the dependencies we had deployed. In the traditional virtual machine (VM) world, we would have to patch each of our VMs that carried this vulnerability. Hopefully we would have an automated way of doing this, but even then, verifying that the patches were successful and the applications unaffected would be extremely time-consuming. With containers, we would only need to update the container image, and all containers running from that image would be updated. Additionally, any child container images referencing the now-updated parent image would be updated as well.

CONTAINER MANIFEST

Part of the container image is the manifest, better known as a 'Dockerfile' if we are using Docker's terminology. The container manifest is a structured text file that contains the configuration settings and instructions needed to build the container image.

CONTAINER REGISTRY

The container registry is a repository of container images. Public registries exist, such as Docker Hub, as do private registries, which organizations can run to host their own internally developed images or to clone public images. The two sketches below illustrate these terms in practice.
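The first sketch pulls an image from a public registry (Docker Hub is the default) and prints its layer history, the audit trail described above. It again uses the Docker SDK for Python, with an illustrative image tag:

    import docker

    client = docker.from_env()

    # Pull an image from the default public registry (Docker Hub)
    image = client.images.pull("nginx", tag="1.25")

    # Each history entry is one immutable layer; together they record
    # what was changed in the image, when, and by which instruction.
    for layer in image.history():
        print(layer.get("CreatedBy", "")[:60], layer.get("Size", 0))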
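The second sketch shows the manifest and the parent/child relationship it enables: a minimal Dockerfile that builds a hypothetical child image FROM the NGINX parent. The file contents and the tag name are assumptions for illustration only:

    import pathlib

    import docker

    # A minimal manifest: start FROM the parent image, then add a layer.
    pathlib.Path("index.html").write_text("<h1>hello</h1>\n")
    pathlib.Path("Dockerfile").write_text(
        "FROM nginx:1.25\n"                          # parent image
        "COPY index.html /usr/share/nginx/html/\n"   # new child layer
    )

    client = docker.from_env()
    # Build the child image; each manifest instruction becomes a new
    # layer on top of the parent's existing layers.
    child, logs = client.images.build(path=".", tag="acme/nginx-site:1.0")
    print(child.id)

If the parent image is later rebuilt to patch a vulnerable dependency, rebuilding this child picks up the fix, which is the update pattern described in the example above.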
NODES & CLUSTERS

A node is the hardware supporting the container environment. This could be a server, a VM, or a cloud instance. In some cases, a group of nodes works together to support a container environment; this is referred to as a cluster.

PODS & ORCHESTRATOR

A pod is one or more containers that are grouped and managed by an orchestrator. The orchestrator is where rules and operations for scaling, failover, and running container workloads are defined. So, while Docker offers tools and solutions for container creation and deployment, Kubernetes is an example of an orchestrator. Both concepts appear in the Kubernetes sketch at the end of this section.

VIRTUAL CONTAINERS VS. VIRTUAL MACHINES

Another way to understand containers is to compare them with virtual machines, as people are more familiar with VMs as a technology.

Diagram 2

Referring to diagram 2, we see that both VMs and containers start with infrastructure, which could be a physical host or a cloud platform like AWS or Azure. The host operating system comes next; this would be something like Windows Server or ESX. After the host OS comes the hypervisor technology for VMs, and the container runtime (e.g., Docker) for containers.

On the VM side, we see that each individual VM has a full OS installed, with the applications and dependencies installed on top. Additionally, the hypervisor is virtualizing the hardware the VMs run on, which itself consumes compute resources.

Conversely, with containers we don't need to install an entire OS. All that runs in the container is the process and its dependencies, which means that, from a storage standpoint, a container is only a fraction of the size of a VM. A container is also much less resource-intensive to run from a computational standpoint.

Another significant difference between virtual machines and containers is the lifecycle: a VM typically runs on a start-and-stop schedule, whereas the lifecycle of a container traditionally mirrors the lifecycle of the process it runs. In other words, when the process starts, the container starts; when the process ends, the container stops running. Let's illustrate this: Google is one of the largest contributors to container technology, and when we go to Google Search or YouTube, those are processes running in containers. Starting YouTube, for example, creates a new container, and exiting YouTube kills that container. In fact, Google starts and stops over two billion containers each week, managing that demand dynamically through container orchestration. A sketch of this process-bound lifecycle appears at the end of this section.

Much of the momentum behind containers is in the cloud. The cloud is touted for its scalability and elasticity; however, dynamically scaling traditional Infrastructure as a Service (IaaS) workloads, be it right-sizing the instances or deploying them based on need, is much easier said than done. Containers, on the other hand, were built with this functionality in mind, and orchestration makes scaling to meet demand simple.

Often, when talking about containers in the cloud, we hear the term CaaS (Containers as a Service). CaaS can be regarded as a sub-category of IaaS, except that we don't need to manage the OS itself; we are just managing the containers and the container runtime.
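First, the process-bound lifecycle described above, as a minimal sketch with the Docker SDK for Python (the image and command are illustrative). The container exists exactly as long as its single process does:

    import docker

    client = docker.from_env()

    # The container starts when its process starts...
    output = client.containers.run("alpine:3.19", command="echo hello",
                                   remove=True)

    # ...and by the time run() returns it has already stopped, because
    # its one process (echo) exited. remove=True also deletes it.
    print(output)  # b'hello\n'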
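Second, the cluster concepts and orchestrated scaling, sketched with the official Kubernetes Python client. The deployment name and namespace here are hypothetical, and the snippet assumes a kubeconfig pointing at a live cluster:

    from kubernetes import client, config

    config.load_kube_config()  # use local kubeconfig credentials

    core = client.CoreV1Api()

    # Nodes are the machines (servers, VMs, or cloud instances)
    # backing the cluster...
    for node in core.list_node().items:
        print("node:", node.metadata.name)

    # ...and pods are the orchestrator's unit of grouped containers.
    for pod in core.list_pod_for_all_namespaces().items:
        print("pod:", pod.metadata.namespace, pod.metadata.name)

    # Scaling on demand: ask the orchestrator for five replicas of a
    # hypothetical deployment and let it converge to that state.
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name="demo-web", namespace="default",
        body={"spec": {"replicas": 5}})

This declarative "ask for N replicas" model is what makes scaling containers in the cloud so much simpler than resizing or redeploying traditional IaaS instances.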