
Virtual Containers: Asset Management Best Practices and Licensing Considerations

Virtual containers have seen tremendous adoption and growth across all industries. However, in terms of IT asset management, containers are largely unmanaged and represent an unknown area of risk for many of our clients. Because it is a newer technology, there is very little information about managing containers and how to address the emerging SAM and ITAM challenges they bring.

Due to this lack of public information, Anglepoint has published this whitepaper on navigating the world of containers, with an emphasis on asset management and licensing. We will cover the history of containers, what containers are, the benefits they bring, asset management best practices, and some publisher-specific licensing considerations.

A BRIEF HISTORY OF VIRTUAL CONTAINERS

The first proper containers came from the Linux world as LXC (LinuX Containers) in 2008. However, it wasn't until 2013 that containers entered the IT public consciousness, when Docker came onto the scene with enterprise usage in mind. Even then, it was more of an enthusiast's technology. In 2015, Google released and open sourced Kubernetes, which manages and 'orchestrates' containers. However, it wasn't until 2017 that Docker and Kubernetes had matured enough to be considered for production use within corporate environments. 2017 also saw VMware, Microsoft, and Amazon begin to support and offer solutions for Kubernetes and Docker on their top-tier cloud infrastructure.

WHAT IS A CONTAINER?

Often, people conflate the term 'container' with the multiple technologies that make up the container ecosystem. Let's look at what a modern container is at the most fundamental level.

On the left side of diagram 1 is an operating system which has several different processes (applications) installed and running. These processes are all installed in the same environment (or namespace, if you are talking about Linux) and can interact with each other. A container is simply the isolation of a single process, wrapping it up in (just as it sounds) a container. This container is isolated from the host operating system and can only "see" and interact with what is explicitly allowed. See the example below to illustrate our point.

Example: Let's start with a traditional model in which we are installing applications on the OS. In this example, we've installed NGINX Web Server (a process), but there are also several dependencies installed that support the main application.

Let's say that we also want to install NodeJS, which requires some of the same dependencies as NGINX Web Server, but perhaps the version of NodeJS requires a different version of those dependencies. Using the traditional model, this would require a complicated configuration to ensure that each of our applications points to the correct versions of the dependencies. It would also be important to ensure that once an application or dependency was updated, the configuration changes were maintained.

Now if we were to use containers in this scenario, it would become much easier to manage. The process (NGINX Web Server in this example) would be bundled in a container with the dependencies it relies on. When we want to add another process (NodeJS), it resides in its own container along with its dependencies. This way, we don't have to worry about version conflicts, as everything is isolated.

Using containers is especially useful when developing applications. Someone might be developing on a laptop, testing on a server, and then deploying to the cloud or a co-worker's desktop. All these environments are likely different, with different versions of a dependency installed, or perhaps a slightly different hardware configuration, which would create additional troubleshooting effort. Containers, however, obfuscate the hardware layer. They are platform agnostic. You could run a container on a laptop, a server, or the cloud and it is going to run the same. Using the traditional model, migrating an application from on-premises to the cloud or across cloud platforms is an onerous process. With containers, this process is streamlined and overall greatly simplified.

Diagram 1

So we've gone over containers themselves, but there are other terms and technologies in the container ecosystem that we need to be familiar with. Let's take a look at those.

CONTAINER IMAGE

Container images are what most people are referring to when they talk about a container. A container image is the actual static container file that contains the process and its dependencies. A container image becomes a container when running.

Container images themselves are immutable; all changes made to a container image become new 'layers' of the image, because changes are made using a git-like push/pull mechanism. One benefit of image 'layers' is that they create a natural audit trail when used in conjunction with a container registry (defined below). All changes are visible over time, and we can see the details of each change, including by whom it was made. These 'layers' are also hierarchical in nature, and container images can have parent/child relationships. E.g.: in our previous example we had a container running NGINX, but let's say that we also needed a container running NGINX and PHP. A child container could be created that references and builds off our main NGINX container.

Example: Let's imagine that we discovered a vulnerability in one of the dependencies we had deployed. In the traditional virtual machine (VM) world we would have to patch each of our VMs that had this vulnerability. Hopefully we would have an automated way of doing this, but even so, verifying that the patches were successful and the applications unaffected would be extremely time-consuming. With containers, we would only need to update the container image, and all containers running from that image would be updated. Additionally, any child container images referencing the now-updated parent image would be updated as well.

CONTAINER MANIFEST

Part of the container image is the manifest, better known as a 'Dockerfile' if using Docker's terminology. The container manifest is a structured text file that contains the configuration settings and instructions needed to build the container image.
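To make this concrete, here is a minimal, hypothetical manifest for the NGINX example used earlier. The base image tag, file paths, and site content are illustrative assumptions, not a prescribed layout:

```dockerfile
# Build on a published parent image (the parent/child 'layer'
# relationship described above)
FROM nginx:1.17

# Each instruction adds a new immutable layer to the image
COPY site/ /usr/share/nginx/html/

# Configuration baked into the image
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Building this file produces a container image; running that image produces a container.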

CONTAINER REGISTRY

The container registry is a repository of container images. Public registries exist, such as Docker Hub, as do private registries, which organizations can run to host their own internally developed images or clone public images.

NODES & CLUSTERS

A node is the hardware supporting the container environment. This could be a server, a VM, or a cloud instance. In some cases, a group of nodes will work together to support a container environment; this is referred to as a cluster.

PODS & ORCHESTRATOR

A pod is one or more containers which are grouped and managed by an orchestrator. An orchestrator is where rules and operations for scaling, failover, and running container workloads are created. So, while Docker offers tools and solutions for container creation and deployment, Kubernetes is an example of an orchestrator.

VIRTUAL CONTAINERS VS. VIRTUAL MACHINES

Another way to understand containers is to compare them with virtual machines, as people are more familiar with them as a technology.

Diagram 2

Referring to diagram 2, we see that both VMs and containers start with infrastructure, which could be a physical host or a cloud platform like AWS or Azure. The host operating system comes next; this would be something like Windows Server or ESX. After the host OS comes the hypervisor for VMs and the container runtime (e.g. Docker) for containers.

Now, on the VM side, we see that each individual VM has a full OS installed, and the applications and dependencies are also installed on the VMs. Additionally, the hypervisor is virtualizing the hardware the VMs are running on, which requires compute resources.

Conversely, with containers, we don't need to install an entire OS. All that runs in the container is the process and its dependencies; this means that, from a storage standpoint, the container is only a fraction of the size of the VM. The container is also much less resource-intensive to run from a computational standpoint.

Another significant difference between virtual machines and containers is that a VM typically runs on a start-and-stop schedule, whereas the lifecycle of a container traditionally mirrors the lifecycle of the process it is running. In other words, when the process starts, the container starts; when the process ends, the container stops running. Let's illustrate this: Google is one of the largest contributors to container technology. When we go to Google Search or YouTube, these are processes running in containers. Starting YouTube, for example, creates a new container, and when we exit YouTube, that kills the container. In fact, Google starts and stops over 2 billion containers each week, and it is able to manage demand dynamically using container orchestration.
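As a sketch of what that orchestration looks like in practice, a Kubernetes Deployment declares how many identical pods should be running, and the orchestrator starts and kills containers to match. The names, labels, and replica count below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the orchestrator keeps three pods alive
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17     # image pulled from a container registry
```

Scaling up or down is then a declarative change (e.g. `kubectl scale deployment web --replicas=10`) rather than manual provisioning.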

This is an extreme use case, and it may not make sense to start up and kill other instances on a whim (SQL Server, for example). However, containers do offer a scalability and elasticity that isn't easily achieved through traditional means.

QUICK RECAP OF CONTAINER BENEFITS

Let's quickly recap the benefits of containers:

• Containers are lightweight
• Containers are predictable, offering a consistent sandbox
• Containers are isolated
• Containers are platform agnostic

These benefits allow for an increase in development agility and ease of deployment. That's why container adoption rates and growth have increased year over year. In fact, Gartner predicts that by 2020, 50% of organizations will have deployed containers in their environments.

CONTAINERS IN THE CLOUD AND CAAS

Containers are a great fit for the cloud. As covered earlier, containers are much less resource-intensive to run than VMs. This means that containers are less expensive to run in the cloud than VMs.

Financial savings are not the only benefit of running containers in the cloud. The cloud is touted for its scalability and elasticity; however, dynamically scaling traditional Infrastructure as a Service (IaaS) workloads (be it right-sizing the instances or deploying them based on need) is much easier said than done. Containers, on the other hand, were built with this functionality in mind, and orchestration makes scaling and meeting demand simple.

Often, when talking about containers in the cloud, we hear the term CaaS (Containers as a Service). CaaS could be regarded as a sub-category of IaaS, except we don't need to manage the OS itself; we are just managing the containers and the container runtime.

Diagram 3

With IaaS, the cloud provider is providing the infrastructure and we just manage the OS and applications. With CaaS, the cloud provider is also managing the container platform, so we are just managing the containers themselves along with any orchestration.

Public cloud providers including Google, Amazon Web Services (AWS), IBM, and Rackspace all have some type of CaaS offering. They offer the ability to run the entire container platform (registry, orchestration, etc.) in the cloud.

ASSET MANAGEMENT

Once containers are in use, they must be managed and licensed properly. Unfortunately, the very features that make containers so technologically compelling also make them difficult to manage.

Trustworthy Data

The first thing that should always be considered when managing IT assets is trustworthy data. You simply cannot manage what you are unaware of. So, before anything else, you must have a way to gather data from your existing physical and virtual infrastructure and ensure that the data is trustworthy. This means putting the proper tools in place, as well as processes to check the accuracy and completeness of the data gathered.

So how do we get the reliable data needed to effectively manage containers?

Due to the nature of containers, traditional data collection methods won't work. Physical machines and VMs have an installed OS that supports SSH, WMI, or an agent from which information can be gathered. Containers don't offer this same level of access, so a tool has to interface directly with the container platform (typically Docker) and any orchestrators (e.g. Kubernetes). As of 2019, SAM tools don't fully support containers. As such, discovery and tracking currently require much more manual effort. Because of this, it is likely you will need to take a tiered approach, distinguishing high-risk applications that should be closely monitored from low-risk applications which may only need infrequent or ad hoc monitoring.

However, one positive for ITAM is that containers can have audit trails in place. The container registry contains a master catalogue of the container images used in the environment, and containers are typically tagged and grouped, which should help identify what is running in each container. Some registries, such as Docker Hub and Azure Container Registry (ACR), keep a log of each deployment of an image as well as who built it; such a log is an invaluable resource for an ITAM manager. The container manifest, which is different from the registry, will also reference any parent containers.
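Until tooling matures, a tiered approach of this kind often lives in a simple script fed by exports from the container platform or registry. The sketch below is illustrative only; the record fields and the list of high-risk publishers are assumptions, not a standard schema:

```python
# Hypothetical records as they might be exported from a container
# platform or registry; field names are illustrative only.
containers = [
    {"image": "oracle/database", "tag": "12.2", "host": "prod-01"},
    {"image": "nginx",           "tag": "1.17", "host": "prod-02"},
    {"image": "internal/batch",  "tag": "3.1",  "host": "dev-01"},
]

# Publishers whose products carry significant licensing cost and risk;
# these get closely monitored, everything else is reviewed ad hoc.
HIGH_RISK = ("oracle/", "ibm/", "microsoft/")

def tier(record):
    """Bucket a container record into a monitoring tier."""
    return "high" if record["image"].startswith(HIGH_RISK) else "low"

report = {}
for c in containers:
    report.setdefault(tier(c), []).append(
        f'{c["image"]}:{c["tag"]} on {c["host"]}')

for level in ("high", "low"):
    for line in report.get(level, []):
        print(f"[{level}] {line}")
```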
People & Processes

During the annual Docker conference in 2017, the keynote speech included the following hypothetical situation: two engineers come back from vacation to discover that they need to quickly stand up an application that requires an Oracle Database. Using containers, they quickly deploy the Oracle database by downloading the container image from the official Oracle Container Registry, turning what would normally be an onerous, multi-day endeavor into a process that takes only a few minutes.

While this scenario highlights the benefits of containers, it also shows how an incredibly powerful capability could quickly become a nightmare if the proper policies and procedures are not in place.

Containers lower the barrier to entry in deploying enterprise-grade software. One consequence is that it becomes much easier for admins and others, who may not be aware of the cost and licensing implications, to install costly commercial software. Proper policies and procedures can greatly mitigate such risks. It is therefore crucial that these policies and procedures not only exist, but that employees are educated and trained accordingly so that such mistakes do not happen.

LICENSING CONSIDERATIONS

Licensing can also be complicated by running applications in containers. In this section we will look at specific licensing considerations for Microsoft, Oracle, and IBM.

Microsoft

Microsoft has two different kinds of containers: Windows Server containers and Hyper-V containers.

Windows Server containers provide application isolation through process and namespace isolation technology. Like traditional Linux containers, Windows Server containers share their kernel (the most core instructions/functions of an operating system) with the container host OS. Because they share the kernel, these containers require the same kernel version and configuration. From a licensing standpoint, you can run unlimited Windows Server containers without additional licensing considerations.

To date, Hyper-V containers have been essentially optimized virtual machines. The kernel of a Hyper-V container is not shared with the host, meaning that the configurations and versions do not need to match; however, these containers will be much larger in size. Because these containers redistribute the OS kernel, the container OS must be licensed. From a licensing standpoint, Microsoft treats these containers as if they were VMs, which makes licensing the container OS straightforward for those already familiar with licensing Windows Server on VMs: once the physical host's cores have been licensed, Windows Server Standard can cover up to two containers (and be stacked multiple times to cover additional containers), while Windows Server Datacenter can cover an unlimited number of containers running on that host.

Microsoft also has two Windows Server operating system editions that are commonly used in containers: Windows Server Core and Windows Server Nano Server. These editions are ideal for containers as the OS has a smaller file size. Nano Server in particular is designed for scenarios that require "fewer patches, faster restarts, and tighter security". Because Windows Server 2016 Nano Server receives updates using the Semi-Annual Channel, Software Assurance (SA) is required on both the server licenses and the CALs (Client Access Licenses).

Just as with Windows Server operating systems, Microsoft treats containers like VMs when licensing applications. For example, when licensing SQL Server within containers, as with VMs, you can license the subset of CPU cores dedicated to that container rather than licensing all of the physical cores supporting the container platform. Additionally, if all of the host cores are licensed with SQL Server Enterprise with SA, an unlimited number of SQL Server containers can be run. Keep in mind, however, that unlike VMs, where we explicitly assign CPU, RAM, etc., containers will by default use all available resources unless explicitly configured otherwise.
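The Windows Server Standard stacking rule can be sketched as a quick calculation. This is a simplification for illustration only: it assumes the Windows Server 2016 minimum of 16 core licenses per host and ignores per-processor minimums and other product terms that apply in practice:

```python
import math

def ws_standard_core_licenses(host_cores: int, containers: int) -> int:
    """Core licenses needed to cover Hyper-V containers with Windows
    Server Standard on one host (simplified sketch).

    Standard covers up to two containers each time every core on the
    host is licensed, so the license 'stack' repeats ceil(n / 2) times.
    """
    licensed_cores = max(host_cores, 16)        # assumed 16-core host minimum
    stacks = max(1, math.ceil(containers / 2))  # each stack covers 2 containers
    return stacks * licensed_cores

# A 24-core host running 5 Hyper-V containers needs 3 stacks:
print(ws_standard_core_licenses(24, 5))  # 3 x 24 = 72 core licenses
```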
Oracle

Oracle takes more of a hardline approach with its products and virtualization technologies. Oracle has a 'partitioning policy' with listed supported technologies: when we are using a supported technology, we only need to pay the licensing costs associated with the CPUs supporting those partitioned workloads. This is referred to as 'hard partitioning'. Everything else falls under the definition of 'soft partitioning'.

Containers, such as Docker, are considered 'soft virtualization' by Oracle. Should an Oracle product be deployed within a container, all physical infrastructure that sits underneath that container, including all servers within the cluster, farm, etc., must be licensed. There is one exception: versions 9.x and up of Oracle Solaris containers, also known as Solaris Zones, are recognized as a 'hard partition' technology if they are 'capped zones'.

While it is not possible to license only the subset of CPUs supporting a workload with Docker, once the infrastructure is licensed we can run an unlimited number of container instances on it without additional licensing considerations. And because containers are lighter weight than VMs, we can typically also run more of them.
IBM

IBM has taken a very similar route to Microsoft in that it treats containers like it treats VMs. This is great for those already familiar with IBM's licensing models, especially its PVU metric-based licensing, which allows customers to take advantage of sub-capacity licensing. However, there is specific guidance from IBM on ensuring eligibility for PVU sub-capacity licensing:

"Docker is not a sub-capacity eligible virtualization, but it can be used in combination with a sub-capacity virtualization. … Apart from discovering IBM software that is installed in Docker containers, License Metric Tool also reports its license metric utilization. When the Docker is deployed on a physical host, license metric utilization is calculated on the level of the host. When it is deployed on a virtual machine, utilization is calculated on the level of the virtual machine."

This means that if we deployed containers on a physical host, we would need to license the PVU equivalent of all the host's cores, regardless of how many were accessible to the container. However, if we deployed the container runtime in a VM using virtualization that is eligible for sub-capacity licensing, then we would only need to license the cores assigned to the VM. This also requires ILMT or BigFix clients on the host or VM. ILMT added Docker software scanning support in December 2017, available from version 9.2.5 onward.
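IBM's guidance reduces to a simple rule of thumb, sketched below. The function and the 70-PVU-per-core figure in the example are illustrative assumptions; actual PVU ratings depend on the processor model and IBM's PVU table:

```python
def pvu_requirement(host_cores, pvu_per_core, vm_cores=None):
    """PVUs needed for IBM software running in Docker containers
    (simplified sketch of the guidance quoted above).

    Docker itself is not sub-capacity eligible: on a physical host,
    every host core counts. If the container runtime sits inside a
    sub-capacity eligible VM (with ILMT/BigFix deployed), only the
    cores assigned to that VM count, never more than the host has.
    """
    if vm_cores is None:                       # runtime directly on the host
        chargeable = host_cores
    else:                                      # runtime inside an eligible VM
        chargeable = min(vm_cores, host_cores)
    return chargeable * pvu_per_core

# 16-core host rated at a hypothetical 70 PVU per core:
print(pvu_requirement(16, 70))      # bare metal: 1120 PVUs
print(pvu_requirement(16, 70, 4))   # 4-core VM:   280 PVUs
```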

CONCLUSION

Containers are only going to continue to grow in usage and popularity, so having a solid grasp of how they operate is required in order to manage them effectively. Containers, cloud-based services, and other serverless functions necessitate a mature SAM and ITAM program, processes, and reporting. They also increase the need for tools and solutions that can provide real-time, actionable information to optimize costs and mitigate security risks. If you have any questions regarding SAM and containers, reach out to us and we will be happy to help.

ABOUT ANGLEPOINT

Anglepoint is a global professional services firm delivering high-value licensing and compliance services to Fortune 500 companies and others around the globe. Our team of subject matter experts and technical specialists has decades of industry experience in providing clients with innovative and proactive solutions that have a real and measurable impact on the bottom line.

Contact us to learn more at [email protected] or call 1.855.512.6453.
