Understanding Containers on AWS

Containers on AWS and why they matter
Every so often there is a game changer in technology that completely disrupts how people operate. Over the last six years that game changer has been cloud computing; in the past two years it has been containers (often known interchangeably as Docker). Strictly speaking, containerization is not a new concept: as early as 1979 it began with chroot, which isolates and restricts the namespace of a Unix process (and its children) to a new location in the file system.
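To make that primitive concrete, the following minimal sketch in Go performs the same re-rooting that chroot introduced. It is illustrative only: it assumes a Unix host, root privileges, and a hypothetical /tmp/jail directory already populated with whatever the confined process needs.

    package main

    import (
        "fmt"
        "log"
        "os"
        "syscall"
    )

    func main() {
        // Hypothetical jail directory; it must already contain the
        // binaries, libraries, and data the confined process needs.
        const jail = "/tmp/jail"

        // Re-root this process (and its children) at the jail:
        // from here on, "/" resolves to the old /tmp/jail.
        if err := syscall.Chroot(jail); err != nil {
            log.Fatalf("chroot failed (requires root): %v", err)
        }

        // Move the working directory inside the new root.
        if err := os.Chdir("/"); err != nil {
            log.Fatal(err)
        }

        // The process can now only see files under the jail.
        entries, err := os.ReadDir("/")
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            fmt.Println(e.Name())
        }
    }

Modern container runtimes layer kernel namespaces, cgroups, and image management on top of this basic file-system isolation.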
Docker has offered a certain "ease of adoption" for containers in current technology. Around since 2014, Docker has been adopted into business strategy instead of being relegated to just a technological trend. And while containers as a packaging solution have had a strong impact on the modern "DevOps" culture, they have also gained popularity because they go hand in hand with the continuous evolution of software architecture.

Why Containers Matter

A conversation on containers usually starts with the technology, but quickly transforms into a conversation about change management, safe and scalable collaboration across different groups, community-driven initiatives, and other disciplines that are normally kept separate. These days, data scientists are talking about deploying their data workloads on Kubernetes, while machine learning specialists are actively contemplating how to generate and train their predictive models by efficiently leveraging container orchestration tools. Containers have begun to underpin the evolution of the different distributed computing disciplines.

Before Containers

Monolithic applications were once the norm. As servers were tedious and difficult to configure, it was worthwhile to reduce the pain by using as few servers as possible, combining front-end user interfaces, business logic, and backend services into one single package deployed to a single server. Every major upgrade translated directly to downtime, and servers were expensive both in terms of upfront investment and ongoing maintenance costs; less was more.

While fewer servers with larger installations looked reasonable on paper, as the complexity of the system increased and the demand for new features hastened, the development and testing of such complex applications easily ground to a halt. The more complex the system, the more time testing required. Releases became slower and larger in scope, with more features slotted in for fear of "missing the boat", and production rollouts became multi-hour, even multi-day, affairs.

These methods were unsustainable. As Internet-based business became more prominent, user expectations meant that this haphazard style of massive deployment needed to be completely eradicated. The "microservices" paradigm was introduced to address the problem: the complex installation of every feature in every release was broken down into independent releases of much smaller components. As long as the agreed-upon interfaces did not change, it became the prerogative of individual component owners to update and change the logic as they saw fit, instead of needing to worry about every single test case in the test suite.

Microservices and Virtual Machines

While microservices were a good response to these complex applications, they were not free from challenges. For example, with each microservice on an independent release cycle, having slightly different dependencies or running on different versions of an operating system, it was necessary to provision dedicated servers for each of the services. By breaking monolithic applications into microservices, the number of servers grew exponentially, from the handful of the monolithic days to tens or hundreds. Better Internet connectivity also meant users and systems were no longer siloed: peak usage no longer meant hundreds or more calls within the hour, but within a minute. The tens or hundreds of servers serving the full collection of microservices now had to be increased to hundreds and thousands.

The rise of virtual machines helped solve the problem. Sitting on top of the hardware, the hypervisor is able to create and run one or more virtual machines on the same physical host. Each of the virtual machines can operate independently on the shared resources, and can be configured using configuration management tools. This increases the number of servers on the same hardware, so more microservices can run on it.

Running microservices on virtual machines is not a perfect solution for every use case. Managing a fleet of VMs and hypervisors introduces its own set of overhead challenges around managing load, machine density, horizontal and vertical scaling, as well as configuration drift and OS maintenance. Configuration management tools such as Chef, Puppet, and Ansible, coupled with platforms such as OpsWorks and AWS SSM, can eliminate or greatly reduce these challenges for many use cases. For some applications, this overhead is a fair trade for the flexibility required, especially for large but still-evolving enterprise applications. However, as applications evolve further towards the segmentation of microservices, the balance between management overhead and hosting flexibility can skew unfavorably. For developers focused on maintaining a large set of many smaller systems, a new solution was required.

Containers and Microservices

For many, that solution was Docker. Docker represents a user-friendly container deployment methodology. By adopting Docker, multiple applications can run on the same virtual machine or bare-metal server. Since Docker packages all of an application's dependencies within a single image, conflicting dependencies between different services can coexist. As long as the services share the same kernel as the host machine, the different Docker processes run harmoniously with one another. Now the hundreds and thousands of machines can drop back drastically without sacrificing the release independence and integrity of the applications.
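As a concrete illustration of that side-by-side isolation, the sketch below (in Go, shelling out to the standard docker CLI; the myorg/orders image names, tags, and port mappings are hypothetical placeholders) starts two versions of the same service on one host. Each image carries its own dependency tree, so the two containers can rely on conflicting library versions while sharing only the host kernel.

    package main

    import (
        "log"
        "os/exec"
    )

    // runDetached starts a container in the background via `docker run -d`.
    func runDetached(name, image, port string) {
        out, err := exec.Command("docker", "run", "-d",
            "--name", name, "-p", port, image).CombinedOutput()
        if err != nil {
            log.Fatalf("failed to start %s: %v\n%s", name, err, out)
        }
        log.Printf("started %s: %s", name, out)
    }

    func main() {
        // Two releases of the same (hypothetical) service, each with its
        // own bundled dependencies, mapped to different host ports.
        runDetached("orders-v1", "myorg/orders:1.4", "8081:8080")
        runDetached("orders-v2", "myorg/orders:2.0", "8082:8080")
    }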
Another advantage of containers is immutability, and therefore consistency. Upgrading a containerized application is equivalent to stopping an existing process and starting a new one based on a newer image. This removes potential drift in configurations, and the removal of configuration ambiguities also helps introduce a more streamlined process. Since the dependencies are already packaged within the container image, the overhead is drastically reduced. This is analogous to compiled binary applications, where all dependencies are encapsulated at build time.

Containers and Cloud-Nativeness Go Hand-in-Hand

Another movement that helped promote the adoption of containers is cloud computing. With the advent of the cloud, "cloud-native" technologies became the building blocks for developing cloud-based applications. Containers are an enabling technology, facilitating the encapsulation of independent services, fully utilized compute infrastructure, scalability, and rapid development.

Why do containers work well with the cloud?

One prominent feature of cloud computing is elasticity: cloud infrastructure can scale up and down based on demand. Raising a new virtual machine to service the demand is fine, but it takes minutes, which can translate into a significant loss of business. Scaling up containers takes seconds¹ instead of minutes, meeting scaling demands much more efficiently. The speed of container deployment aligns with the cloud's demand for rapid change.

Because containers run on an abstraction layer on top of virtual machines, they are further separated from the underlying compute resources: a Docker image that can run on premises can also run on AWS and other environments. Since the cloud transcends physical locations and service providers, containers work well with the cloud. Cloud-native architectural patterns state that scaling horizontally (more servers) is preferable to scaling vertically (more powerful servers), and Docker provides a way to run applications across different servers easily. Because containers are immutable, they incur lower operational costs with less margin for error. And as demand for hybrid cloud computing grows for disaster recovery and high availability purposes, a deployment mechanism that promises to work across different physical environments is certainly very attractive.

¹ Because of the overlay file system, which supports inter-application and inter-version sharing.

Container versus Container Orchestration

While running one container is indeed easy, managing a number of containers, generally known as "container orchestration", is a lot more complex. Container orchestration can be a heavy operational overhead. Unless your core business is managing container processes, mastering the management and orchestration of containers often does not increase the bottom line. The good news is that, because of the popularity of containers, a number of people have been trying to solve these problems, and providers like AWS have been introducing a variety of solutions to help orchestrate containers.

Containers on AWS

[Figure: spectrum of container options on AWS, ranging from cross-provider (incl. on premises) to all-in AWS services]

While it has always been possible to run containers directly on Amazon
