A Comparison of Resource Utilization and Deployment Time for Open-Source Software Deployment Tools
Bachelor's Degree Project in Information Technology with a Specialisation in Network and Systems Administration, IT604G G2E, 22.5 credits (högskolepoäng), Spring term 2017
Jonathan Johansson [email protected]
Supervisor: Johan Zaxmy
Examiner: Birgitta Lindström

Abstract

The purpose of this study is to compare the software deployment tools Ansible, Chef and SaltStack with regard to deployment time and their respective resource utilization. The findings are also compared with the previous work of Benson et al. (2016), which likewise studied deployment time; no previous research, however, addresses resource utilization, which is equally important. The study consists of an experiment performed in one of the laboratory rooms at the University of Skövde, in which each of the three software deployment tools is configured to deploy a fixed set of packages to three hosts. By measuring deployment time with the latest stable release (as of 2017-04-22) of each software deployment tool, as well as resource utilization on each host and server, this study may help system administrators make more informed decisions when deciding which application to use to manage their computers and infrastructure. The results of the study show that Chef is the fastest software deployment tool in terms of deployment time. Chef is also shown to be the most optimized application, as its usage of resources is better than that of both Ansible and SaltStack.

Keywords: Software Deployment Tools, Ansible, Chef, SaltStack

Swedish abstract (translated)

The purpose of this study is to examine and compare the remote deployment tools Ansible, Chef and SaltStack with regard to the time it takes to install a set of programs on a number of clients, and their respective resource utilization. The results of this study are also compared with earlier studies by Benson et al. (2016), who also studied the time required for Ansible, Chef and SaltStack. There is, however, no earlier research that mentions resource utilization, which is equally important. The study consists of an experiment carried out in one of the laboratories at the University of Skövde, where all three tools are configured and used to install a given set of packages on three client computers each. By measuring the time each tool requires with the latest stable releases (2017-04-22), as well as the resource utilization of each client computer and server, system administrators can read this study to make more informed decisions when deciding which application to use to manage their computers and infrastructure. The results of the study show that Chef is the fastest remote deployment tool for installing packages on clients. Chef also proves to be the most optimized application, since its resource utilization is better than that of both Ansible and SaltStack.

Keywords: Remote deployment tools, Ansible, Chef, SaltStack

Configuration Management and the Cloud

Software configuration management tools, also known as software deployment tools, semi-automate or fully automate time-consuming processes such as updating software and re-configuring parameters on anywhere from one to hundreds of thousands of clients and servers. The three software configuration management tools this study examines are Ansible, Chef and SaltStack. Ansible is written in Python, a scripting language, and uses SSH (Secure Shell, a cryptographic network protocol) to deploy packages. Chef's server component is written in the programming language Erlang, and Chef requires each client to be installed with an agent, known as the Chef-client. SaltStack is also written in Python and, like Chef, requires an agent, called a Salt Minion.
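The two quantities the study measures, deployment time and resource utilization, can be illustrated with a minimal Python sketch. This is a hypothetical illustration, not the instrumentation actually used in the thesis: the function name `timed_install`, the placeholder command, and the use of the standard-library `resource` module (Unix-only) are all assumptions made here for clarity.

```python
import resource
import subprocess
import time

def timed_install(command):
    """Run a deployment command, returning its wall-clock duration in
    seconds and the peak memory of child processes in megabytes."""
    start = time.monotonic()
    subprocess.run(command, check=True)
    elapsed = time.monotonic() - start
    # On Linux, ru_maxrss for child processes is reported in kilobytes.
    peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return elapsed, peak_kb / 1024

# Hypothetical usage: the study would time something like an
# `ansible-playbook` run; here a harmless placeholder command is used.
seconds, megabytes = timed_install(["true"])
print(f"deployment took {seconds:.2f} s, peak child memory {megabytes:.1f} MB")
```

Repeating such a measurement for each tool under identical conditions is, in essence, what the experiment below does, only with real package deployments to multiple hosts.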
By measuring not only the time it takes each tool to install a set number of packages, but also the amount of resources the tools utilize during the installations, the three can be compared more closely than in previous research, such as that by Benson et al. (2016). The tools are tested in an experiment where each server, installed with Ansible, Chef or SaltStack, is accompanied by three hosts, and all are given the same resources to operate on so that each tool is measured in an identical environment. The results show that Chef, whose server component is written in Erlang, is almost twice as fast as both Ansible and SaltStack when deploying applications, but also uses far more resources: Chef's memory usage, for example, averages 1641 megabytes, a large difference compared with SaltStack's 643 megabytes and Ansible's 132 megabytes. System administrators and other IT professionals who manage large numbers of clients and servers may use this research as a basis for more informed decisions when deciding which tool to implement in their infrastructures. With cloud computing an ever-growing market, software configuration management tools have become essential for managing large numbers of clients and servers correctly.

Table of Contents

1 Introduction
2 Background
  2.1 Cloud computing
    2.1.1 Infrastructure as a Service
  2.2 Software configuration management
  2.3 Ansible
  2.4 Chef
  2.5 SaltStack
  2.6 Related research
    2.6.1 Related benchmark research
3 Problem definition
  3.1 Aim
  3.2 Research question
  3.3 Hypotheses
  3.4 Motivation
  3.5 Objectives
  3.6 Limitations
4 Methodology
  4.1 Gathering data
    4.1.1 Experiment
  4.2 Experimental design
    4.2.1 Experimental environment
    4.2.2 Experiment topology
  4.3 Validity
    4.3.1 Validity threats
  4.4 Ethics
5 Results
  5.1 Server Comparison
    5.1.1 Server deployment time comparison
    5.1.2 Server resource utilization comparison
  5.2 Host comparison