Proc. of 2016 3rd Int. Conf. on Information Tech., Computer, and Electrical Engineering (ICITACEE), Oct 19-21st, 2016, Semarang, Indonesia

Commodity Cluster Using Single System Image Based on Linux/Kerrighed for High-Performance Computing

Iwan Setiawan and Eko Murdyantoro
Department of Electrical Engineering, Universitas Jenderal Soedirman, Purwokerto, Indonesia
{stwn, eko.murdyantoro}@unsoed.ac.id

Abstract—Commodity Information Technology (IT) infrastructure, including hardware, software, and networking, is commonly used in computing facilities such as computer laboratories. Unfortunately, the infrastructure in such a facility needs to be updated and upgraded regularly to cater to the demands of users and applications for more and more computing resources, especially in scientific computation. The existing infrastructure can be utilized to tackle this issue by forming it into a computing cluster. We report our exploration results in constructing Vincster, a low-cost and scalable commodity cluster for high-performance computing (HPC) that utilizes the existing IT infrastructure in a computer laboratory. The cluster uses one additional computer to serve as a head node containing a Single System Image (SSI) operating system (OS) based on Linux, with Kerrighed as the kernel-level SSI support, along with the middleware and application software that build up the SSI approach. Vincster has been constructed and implemented with 7 SSI key services/features, utilizing 17 compute nodes from 20 available computers. We evaluated the computational performance of the cluster by running a parallel matrix multiplication program.

Keywords—commodity cluster; high-performance computing; single system image; linux; kerrighed; cots; computer laboratory

I. INTRODUCTION

Nowadays, the need for computing resources is increasingly demanding because of the large amount of data to be handled and the additional CPU cycles to be used. Barroso, Dean, and Hölzle [1] stated that "on average, a single query on Google reads hundreds of megabytes of data and consumes tens of billions of CPU cycles." This trend, coupled with the high cost and low accessibility of traditional supercomputers, along with the current advancement of high-performance microprocessors, high-speed networks, and standard tools for HPC, makes the commodity cluster, i.e., cluster computing that uses fully commodity components or commodity-off-the-shelf (COTS) parts [2] such as the Personal Computer (PC), an appealing solution for cost-effective parallel computing [3]. The increasing need for computing resources applies not only to commercial applications but also to computational science applications [3].

Beowulf is an example of a commodity cluster [2], first built by Sterling et al. [4] for Earth and space science applications with 16 PC-based homogeneous nodes. It has been inspiring academics, researchers, and practitioners to build their own clusters for HPC with commodity infrastructure, including software that is commonly Free/Libre/Open Source Software (FLOSS), for instance, Linux for the OS and MPICH, one of the implementations of the Message Passing Interface (MPI), for the parallel library.

Many organizations, specifically universities and research institutes, employ commodity IT infrastructure in their computing facilities, e.g., computer laboratories. One of the issues in a computer laboratory is how to keep up with the increasing requests for more computing resources from users and their applications, especially in scientific computation. Hence, the infrastructure at that site needs to be updated and upgraded regularly to meet the demands. The existing infrastructure can be utilized to overcome this issue by forming it into a scalable commodity cluster for HPC at minimal cost.

However, there is a challenge in building cluster computing with the traditional method, i.e., administering the cluster manually or with tools, including installation and maintenance, since the compute nodes and their computing resources are distributed. It is not easy to make the nodes coherent and identical to one another, and a disk replication technique applied across nodes is not flexible. Thus, users and applications still see the cluster as a system with distributed resources, not as a single and unified system.

Single System Image (SSI), "the property of a system that hides the distributed and heterogeneous nature of the available resources and presents them to users and applications as a single unified computing resource," as described by Buyya, Cortes, and Jin [5], is an approach to tackle the issue stated above. By abstracting the distributed resources, the cluster becomes easier to use, administer, and maintain. SSI can be implemented at different levels of abstraction: hardware, OS/kernel, middleware, and application [3] [5] [6]. Each implementation level has its advantages and disadvantages. The lower the implementation level, the more difficult the implementation is to construct, but the more transparency it provides to the higher levels of the system layer. Cooperation between levels is needed to build a good SSI [5].

There is a study of three implementations of SSI OSs based on the Linux kernel, namely openMosix, OpenSSI, and Kerrighed, conducted by Lottiaux, Gallard, Vallée, Morin, and Boissinot [7]. According to the study, Kerrighed offers the best performance, notably regarding its Inter-Process Communication (IPC) and filesystem. Kerrighed is mainly targeted at kernel-level SSI, providing the most transparent SSI compared to other patchset-based implementations [6]. It also has interesting features, e.g., cluster-wide process management, support for cluster-wide shared memory, a cluster filesystem, transparent process checkpointing, high availability for user applications, and customizable SSI features. It supports several hardware architectures: i386/x86-32/IA32 for versions <= 2.3.0, and x86-64 for versions >= 2.4.0 [8].

Armay, Zulfikar, and Simaremare [9] have built a commodity cluster with 9 PC-based homogeneous nodes in a computer laboratory using the traditional method. There is no single and unified view of the cluster, since they do not use the SSI approach. Moreover, they demonstrate a test using MPICH1 to prove that parallel processing can be run on the cluster, but no computational performance is presented in the study.

Sandhya and Raju [10] built a commodity cluster using the SSI approach based on Linux/Kerrighed with four PC-based nodes. These nodes have two different specifications in processor and memory. They report evaluation results of computational performance for 10 to 100 processes computing the value of pi on the cluster, but they do not explain how those processes run on each of the compute nodes. In the study, the relation between the performance of the cluster, the number of compute nodes and their cores/processors, and various problem sizes is not described.

We attempt to explore constructing a cluster that addresses the issues mentioned above. To summarize, our contributions are as follows.

1. We revisit the Beowulf/commodity cluster for HPC combined with the SSI approach to address the issues by designing and constructing Vincster, a low-cost and scalable cluster based on a Linux OS with Kerrighed as the kernel-level SSI support, together with the middleware and application software that build up the SSI approach. As a case study, we implement the design and deploy the commodity cluster over existing IT infrastructure in a computer laboratory that consists of 20 PCs with several types of specifications.

2. We verify and validate the SSI cluster system, test the SSI services/features, and compare them to the key services that have been described in [5]. Moreover, we evaluate the computational performance of the cluster by running a parallel matrix multiplication program using MPI, along with scripts that compile and run the program. The scripts include detection of the number of joined nodes and available cores/processors in the cluster. Further, the detection results are used as MPI parameters (numbers of machines and processors) for the parallel program.

II. DESIGN CONSIDERATIONS

In building the commodity cluster for HPC using the SSI approach, we have some design considerations. The first is that the cluster should include the important or key SSI services/features in [5]. We attempt to incorporate any accessible methods to include these features in the cluster. The second is that the cluster needs to be scalable in terms of flexibility for nodes to join or leave the cluster (a loosely coupled system). The third is that the cluster should be low-cost, by minimizing the effort of buying, changing, and reconfiguring the existing commodity IT infrastructure.

Furthermore, there are environment characteristics of a computer laboratory that should be addressed, related to the construction of the cluster. These characteristics include the number of compute nodes and the heterogeneity of their specifications, including the types and number of cores/processors on each node. As a case study, we conducted this research in a computer laboratory of the Department of Engineering (now Faculty of Engineering), Universitas Jenderal Soedirman (Unsoed). We started this research by surveying the existing commodity infrastructure in that laboratory to formulate the design and develop the system architecture of the cluster.

Lastly, we use "Vincster" as the name of the cluster, which stands for "vintage commodity cluster." It represents our hope for this cluster's goal, that is, bringing outdated computing resources back to life to harvest more of their value.

III. SYSTEM DESCRIPTION

In this section, we describe the system architecture of Vincster, and also the cost needed for its construction. It consists of a head node and a number of compute nodes. The head node provides network services for the compute nodes, including network booting via the Preboot Execution Environment (PXE), the Dynamic Host Configuration Protocol (DHCP) for assigning IP addresses dynamically, the Trivial File Transfer Protocol (TFTP) for transferring the kernel/OS image to the compute nodes, and the Network File System (NFS) for provisioning the root filesystem and sharing the storage needed to run a full-fledged SSI OS on the compute nodes.

We use network booting and a centralized storage technique in Vincster. All PCs in the laboratory are booted by loading PXE first, from ROM/BIOS or from a flash drive. The PXE image typically includes a network driver that should correspond to the Network Interface Card (NIC) of each PC. After PXE is loaded and the NIC is detected, each PC requests an IP address from the head node via DHCP, fetches the SSI kernel image using TFTP, and mounts the root filesystem through NFS. All PCs boot in a similar manner, and they share the same cluster identifier while obtaining unique node identifiers automatically. They become compute nodes when the administrator starts the cluster, or they can be initialized on boot automatically. However, the automatic cluster initialization could not be used in Vincster, since the PCs in the laboratory did not boot at the same time and were not ready to be started as compute nodes.
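The boot chain just described (PXE, then DHCP, then TFTP, then NFS) is driven entirely from the head node. The following shell sketch illustrates how such services might be wired together on a Debian-based head node using the packages named later in this section (dhcp3-server, atftpd, nfs-kernel-server, syslinux); the subnet, addresses, and directory paths are our own illustrative assumptions, not values reported in the paper.

    #!/bin/sh
    # Hedged sketch of the head-node boot services. Addresses and paths are
    # illustrative assumptions; only the service roles come from the paper.

    # 1. DHCP: hand out addresses and point clients at the TFTP server/PXE loader.
    cat >> /etc/dhcp3/dhcpd.conf <<'EOF'
    subnet 192.168.10.0 netmask 255.255.255.0 {
        range 192.168.10.101 192.168.10.140;   # compute nodes
        next-server 192.168.10.1;              # TFTP server (head node)
        filename "pxelinux.0";                 # PXE boot loader from syslinux
    }
    EOF

    # 2. TFTP: place the PXE loader (and later the kernel image) in the TFTP root.
    mkdir -p /srv/tftp/pxelinux.cfg
    cp /usr/lib/syslinux/pxelinux.0 /srv/tftp/

    # 3. NFS: export the compute nodes' root filesystem (nfsroot).
    echo '/srv/nfsroot 192.168.10.0/24(rw,no_root_squash,async,no_subtree_check)' >> /etc/exports
    exportfs -ra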

1. Hardware

The laboratory has 20 PCs with several kinds of processors, main memories (RAM), and network cards or NICs. We grouped them into 5 types according to their specifications. The specifications of the PCs in the laboratory are presented in TABLE I.

There were Parallel ATA hard drives installed in each PC, but we did not use them, since we designed each PC to rely on network storage shared by the head node. The table shows that the specifications of the PCs used in the laboratory are relatively heterogeneous and also lag behind today's recommended computer specifications.

We added a newer PC without a monitor (headless), built from commodity components/COTS, to the laboratory as the head node from which each PC in the laboratory was network-booted, initialized, and joined to the cluster. The specifications of the head node are presented in TABLE II.

TABLE I. SPECIFICATIONS OF PCS IN THE LABORATORY

Type | Processor                  | RAM      | NIC (driver)            | Total
1    | Pentium D 2.66 GHz         | 512 MiB  | Intel 82801EB/ER (e100) | 14
2    | Pentium Dual Core 2.00 GHz | 1024 MiB | RTL8101E/8102E (r8169)  | 3
3    | Pentium Dual Core 1.8 GHz  | 512 MiB  | VIA Rhine (rhine)       | 1
4    | Pentium 4 2.4 GHz          | 512 MiB  | SiS 900 (sis900)        | 1
5    | Pentium 4 1.7 GHz          | 512 MiB  | SiS 900 (sis900)        | 1

TABLE II. HEAD NODE SPECIFICATIONS

Processor              | RAM      | NIC (driver)          | HDD
AMD Athlon II X2 3 GHz | 2048 MiB | RTL8111/8168B (r8168) | SATA 500 GB

2. Software

The head node and the compute nodes in Vincster have specific software on them, especially for building up the commodity SSI cluster according to the design considerations.

All software on both kinds of nodes was set up and configured on the head node, because we implemented network booting and centralized storage management. No action is needed on the compute nodes, as they depend on the head node for all their operations. This technique alleviates administration and maintenance tasks, e.g., updating, upgrading, and troubleshooting the system. Generally, the administrator of the cluster only needs to update and upgrade the head node as needed.

The software we use on the head node for building up the Vincster system consists of:

1. a complete OS as the base of the head node system, which uses Kuliax 7.0, based on Debian GNU/Linux. After installing the distribution, we updated the package index list and upgraded all packages to the current release from a nearby Debian repository site. We added configuration for the NIC of the head node by setting it up with a static IP address;

2. network services installed and configured on the head node, including dhcp3-server for DHCP, atftpd for TFTP, and nfs-kernel-server for the NFS server, as well as the syslinux package, which includes PXELINUX for the PXE service.

The software that we used on the compute nodes is as follows. This software, except for the PXE service client, was installed and configured in the root filesystem stored on the head node and mounted and shared to the compute nodes through NFS (nfsroot).

1. The SSI kernel, using Linux kernel version 2.6.20 and Kerrighed Subversion trunk revision 4977, a development variant of version 2.3.0, as the kernel-level SSI software.

2. A complete OS environment for the compute nodes, including client packages related to the network services provided by the head node. The environment was based on GNU/Linux, and its basic system was created using debootstrap. Along with the complete OS and network service clients, it includes the Kerrighed library and utilities, as well as application programs and libraries for computational needs such as cpuburn, mpich, and their dependencies. This environment is exported to the compute nodes by the NFS server on the head node.

3. The PXE service client, using either a PXE-enabled NIC via the PC's BIOS or gPXE. We used flash drives containing gPXE images, each of which had to match the NIC of its type of PC.

As stated above, the root filesystem used by the compute nodes includes the SSI kernel. This kernel is mainly the Linux kernel patched with Kerrighed. We configured and compiled its source code on the head node. Later, we saved the results in the root filesystem so the compute nodes could access and boot the kernel image. Before doing that, we added development packages from the Debian repository to provide the tools and libraries for the compilation step.

We configured some network drivers to be built into the kernel. These drivers should correspond to the types of NICs of the PCs in the laboratory; in our case, they were e100, r8169, rhine, and sis900. The rest of the NIC drivers were set as modules, not built into the kernel. After the compilation process finished, we installed the results. For the sake of convenience, we packaged them into a Debian package, which gives us an easy way to install and update the package in the future.
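The kernel build step just described can be summarized in the following hedged sketch. The paper does not give the exact commands, so the source directory name, the use of make-kpkg from the kernel-package tool of that era, the package revision, and the resulting .deb file name are all assumptions for a Debian-based head node.

    #!/bin/sh
    # Hedged sketch of configuring, building, and packaging the Kerrighed-patched
    # kernel; names and paths are illustrative assumptions.
    set -e
    cd /usr/src/linux-2.6.20-krg        # assumed: tree already patched with Kerrighed

    # Build the NIC drivers for the lab PCs into the kernel (y), not as modules:
    #   CONFIG_E100=y  CONFIG_R8169=y  CONFIG_VIA_RHINE=y  CONFIG_SIS900=y
    make menuconfig

    # Package the result as a .deb so it is easy to install into the nfsroot
    # and to update later.
    fakeroot make-kpkg --initrd --revision=krg.1 kernel_image

    # Install the package into the compute nodes' root filesystem (paths and
    # file name assumed; they depend on the kernel version string and revision).
    cp ../linux-image-2.6.20-krg_krg.1_i386.deb /srv/nfsroot/tmp/
    chroot /srv/nfsroot dpkg -i /tmp/linux-image-2.6.20-krg_krg.1_i386.deb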
After installing the SSI kernel package in the root filesystem, we added configurations that enable Kerrighed and its legacy scheduler, including configfs as the pseudo filesystem for the global scheduler. We also configured the cluster identifier, the number of nodes, and the boot parameters for the compute nodes, such as the kernel parameters "session_id" and "autonodeid". The file /etc/hosts was edited so that the NFS service can find the head node and the compute nodes by name. Further, it is also used by the parallel program using MPI to initialize the processing nodes and their IP addresses.

The NFS service is one of the key middleware supports for the SSI approach in the cluster, providing the single-file-hierarchy service/feature, which is not provided by Kerrighed. Even though there is a module called KerFS for this service, it is only available in the previous Kerrighed version 1.02 with Linux kernel 2.4. There is also the kernel/kerrighed Distributed File System (kDFS) in a newer version, Kerrighed 2.4.x, but unfortunately this module could not reach a stable version, and its development stopped in 2011 [11].

We used passwordless SSH remote login when utilizing MPI for running the parallel matrix multiplication in the performance evaluation, instead of RSH as used in [9]. This technique was applied by copying the public key of the cluster user to the authorized_keys file in the .ssh directory of the user's home directory.
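A hedged sketch of the boot parameters and the SSH key technique described above follows. Only the parameter names (session_id, autonodeid, nfsroot) and the authorized_keys idea come from the paper; the PXELINUX label, paths, address, and identifier values are illustrative assumptions.

    #!/bin/sh
    # Hedged sketch: PXELINUX entry booting the Kerrighed kernel over NFS, plus
    # passwordless SSH for the cluster user. Values are assumptions.

    cat > /srv/tftp/pxelinux.cfg/default <<'EOF'
    DEFAULT vincster
    LABEL vincster
        KERNEL vmlinuz-2.6.20-krg
        APPEND root=/dev/nfs nfsroot=192.168.10.1:/srv/nfsroot ip=dhcp session_id=1 autonodeid=1 rw
    EOF

    # Passwordless SSH: because the user's home directory lives in the shared
    # nfsroot, one key pair and one authorized_keys entry cover all nodes.
    mkdir -p "$HOME/.ssh"
    ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa"
    cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
    chmod 600 "$HOME/.ssh/authorized_keys"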

3. Networking

The existing networking infrastructure of the laboratory at the time of the research used a star topology, Category 5e UTP cables, and a commodity Ethernet interconnect through a 24-port Fast Ethernet switch with a rate of 10/100 Mbit/s. The existing infrastructure was not changed during the deployment of the SSI cluster, in terms of topology, cabling, and placement of the network devices. We only added a single UTP cable to interconnect the head node to the network. The network topology in the laboratory with the head node is illustrated in Fig. 1.

Typically, each PC in the network should have a NIC capable of network booting via PXE in its BIOS, but most of the PCs in the laboratory do not have that capability. We prepared some flash drives flashed with gPXE images, each of which contained the driver corresponding to the NIC of each PC in the network. There was PXE support in a few PCs, so we enabled and used that feature to help network booting for them.

Fig. 1. Network topology in the laboratory with the head node.

4. Cost

The cost of constructing the commodity SSI cluster was minimal. The cost of the PC for the head node, without a monitor, was 2,348,000 rupiah (approximately US$178). If we add a single 4-meter Category 5e UTP cable plus several RJ45 connectors for interconnecting the head node to the existing network switch, the cost adds 35,000 rupiah to the expense. Accordingly, the total cost is 2,383,000 rupiah (approximately US$181).

The network booting and centralized resource management that we used in Vincster are common in the deployment of diskless or thin-client networks. This technique gives us more opportunities to reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX), among other things such as supporting green technology.

IV. EVALUATIONS AND RESULTS

After Vincster had been built, we evaluated it by testing the SSI cluster system, the SSI key services, and the computational performance of a single node compared to clusters with two, four, and eight compute nodes.

1. SSI Cluster System Testing

We tested the SSI cluster system on the head node by verifying/validating its boot, the SSI system, and the network services used by the cluster, namely PXE, DHCP, TFTP, and NFS. For the compute nodes, we verified/validated them by doing a network boot of each node. We also ensured that all network services were accessed successfully and that the head node served all compute nodes from the available PCs in the laboratory. The results of the system testing are presented in TABLE III.

TABLE III. RESULTS OF THE SSI CLUSTER SYSTEM TESTING

Node         | Boot | DHCP | NFS  | SSI  | Notes
Head node    | yes  | yes  | yes  | yes  | SSI cluster OS was built, installed, and booted correctly. Network boot services (PXE, DHCP, TFTP, NFS) were installed, run, and served the compute nodes well.
Compute node | yes  | yes* | yes* | yes* | PCs in the laboratory that could be booted and brought up numbered 17. Two PCs had an issue with NIC support, and one PC could not be switched on (broken).

2. SSI Services Testing

There are 10 SSI key services [5] that need to be tested in Vincster. The summary results of the SSI services testing are as follows.

1. Single user interface: Vincster has this service with a command-line interface (CLI). Buyya, Cortes, and Jin [5] stated that it should be a graphical user interface (GUI). We argue that this should not be a requirement, since a GUI consumes more resources, needs additional software and configuration, and is inflexible, especially when accessed remotely. In that case, a GUI should be taken as a recommendation rather than an essential requirement.

2. Single process space: processes that run on a compute node in Vincster can be seen and monitored from other compute nodes, and they can also be terminated/killed by the user or a privileged user. Each process identification (ID) is unique across the cluster. If a user knows the process ID, he/she can manage that process from every compute node in the cluster. The output of the ps and top commands shows this service.

3. Single memory space: the system provides the illusion of a big centralized memory consisting of the distributed local memories in the compute nodes. The free and top commands show this service. We could not test Dynamic Shared Memory (DSM) in Vincster.

4. Single-file hierarchy: each compute node in Vincster has the same filesystem hierarchy, from the root (/) filesystem down to the user's directory. The location of the files or directories stored in the filesystem is the same.

5. Single job management system: there is a configurable global scheduler in the system, but it needs to be set up first. Users' jobs can be submitted from and to any node in the cluster.

6. Single control point and management: the entire cluster and each compute node can be configured, monitored, tested, and controlled from a user interface. The available interface in Vincster is the CLI. The krgadm command can be used to start and stop the cluster, show the status of the cluster, and also reboot or power off each compute node.

7. Checkpointing and process migration: the cluster has a process migration service/feature to dynamically balance the load among nodes. We can activate this capability by running the krgcapset command with the -d +CAN_MIGRATE parameter. Moreover, we can also migrate a process manually to another node by using the migrate command along with the process identification and the target node. We ran a limited test of this service, only by running multiple burnMMX processes that overloaded the processors. The scheduler migrates the processes to other nodes to balance the load, using a MOSIX-like load balancing algorithm with round-robin task placement for target node selection [12]. A brief usage sketch of these commands is given after this list.

Vincster does not have the single entry point, single I/O space, and single virtual networking services/features.
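The management interface named in items 6 and 7 can be exercised from any node's shell. A minimal, hedged sketch follows; only the command names (krgadm, krgcapset, migrate) and the +CAN_MIGRATE capability come from the paper, while the exact subcommand spellings, the PID, and the node identifier are illustrative assumptions.

    #!/bin/sh
    # Hedged sketch of cluster and process management on Vincster.

    krgadm cluster start          # start the SSI cluster
    krgadm cluster status         # show whether the cluster is up

    # Allow the current shell and its children to be migrated by the scheduler.
    krgcapset -d +CAN_MIGRATE

    # Manually move one process (PID 1234, assumed) to compute node 3 (assumed).
    migrate 1234 3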

3. Performance Evaluation

The computational performance of the SSI cluster was evaluated by running a parallel matrix multiplication program using asynchronous message passing. This program is a C program using the MPICH1 library.

When the program runs, it first generates random numbers for the elements of two square matrices. The numbers of rows and columns in each matrix are equal, and the size is incremented from 100 to 2000. The program was run on a single node, then on clusters of two, four, and eight nodes, using PCs of Type 1 for the evaluation.

We used shell scripts to run the computational performance evaluation. One of the scripts detects the number of active compute nodes and the aggregate number of cores/microprocessors included in the cluster. The number of processes involved in the computation corresponds to the detection results, with one process as the manager. For instance, if the script detects four nodes, it checks the aggregate total of cores/microprocessors of the four nodes. If each node has 2 cores, the script then runs 8 processes for the computation using MPI. The script also sets the P4_RSHCOMMAND variable to "ssh" instead of "rsh". In each evaluation of the computation, all compute nodes were rebooted to clean the system environment, e.g., against cache effects.
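The launcher scripts described above can be made concrete with the following hedged sketch. The paper does not reproduce its scripts, so the node-counting command output, the host naming, the machine-file path, and the program name (matmul) are our assumptions; only the node/core detection idea, P4_RSHCOMMAND=ssh, and the use of MPICH are taken from the text.

    #!/bin/sh
    # Hedged sketch of the evaluation launcher. Output formats, host names,
    # and the program name are assumptions.
    CORES_PER_NODE=${1:-2}                 # Type 1 PCs are dual core; override if needed

    # Count compute nodes currently present in the cluster (output format assumed).
    NODES=$(krgadm nodes | grep -c present)
    NPROCS=$((NODES * CORES_PER_NODE))     # one of these acts as the manager process

    # MPICH1 (ch_p4) machine file: one host per line, core count as multiplicity.
    MACHINEFILE=/tmp/machines
    : > "$MACHINEFILE"
    for i in $(seq 1 "$NODES"); do
        echo "node$i:$CORES_PER_NODE" >> "$MACHINEFILE"   # names as listed in /etc/hosts (assumed)
    done

    # Use SSH instead of RSH for process startup, then compile and run.
    export P4_RSHCOMMAND=ssh
    mpicc -O2 -o matmul matmul.c            # program source name assumed
    mpirun -np "$NPROCS" -machinefile "$MACHINEFILE" ./matmul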

The evaluation compares the computational performance of a single node and clusters of nodes in terms of execution time, or latency. The results of the computational performance evaluation of Vincster are presented in Fig. 2 for matrices from 100x100 to 600x600, and the performance trends for matrices from 100x100 to 2000x2000 in Fig. 3. It is shown that Vincster improves the performance of the computation. However, the performance did not always increase when we added more nodes to Vincster. The number of nodes should correspond to the problem size, i.e., the number of rows and columns. Besides, a sudden latency rise occurred in the evaluation when Vincster was processing the 1600x1600 matrix. This issue was presumably because the accumulated traffic needed for maintaining the single-file hierarchy and the communications among nodes slowed down the computation operations.

We attempted to run the parallel program with the process migration service activated, and found that it was difficult for processes to know the location of the other processes, since each process involved in the computation migrated to another node, and the process that had previously communicated with it lost the communication.

Fig. 2. Performance of a single node, two nodes, four nodes, and eight nodes of Vincster running the parallel matrix multiplication program with various sizes of matrices from 100x100 to 600x600.

Fig. 3. Performance trends of a single node, two nodes, four nodes, and eight nodes of Vincster running the parallel matrix multiplication program with various sizes of matrices from 100x100 to 2000x2000.

After the computational performance evaluation data were collected, we calculated the speedup of each configuration and compared the results to each other. The speedup was calculated based on Amdahl's law as described by Hennessy and Patterson [2], that is, "execution time for entire task without using the enhancement divided by execution time for entire task using the enhancement when possible". Speedup itself is a latency ratio of two different systems, one with the enhancement and the other without it. The enhancement in this case is the cluster with a number of nodes. The speedups of clusters of two, four, and eight nodes with matrices from 100x100 to 600x600 are illustrated in Fig. 4, and the speedup trends of the same numbers of nodes with matrices from 100x100 to 2000x2000 are shown in Fig. 5. We gained super-linear speedups in the two-node cluster computing matrices of 500x500 and above, but we did not see this phenomenon in the four- or eight-node clusters within the range of the available evaluation datasets, except for the anomalous rise at the 1600x1600 matrix.
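Written out, the speedup reported here is simply this latency ratio, shown below in our own notation (the symbols are not from the paper): T_1 is the single-node execution time and T_N the execution time on a cluster of N nodes. For reference, the general form of Amdahl's law from [2] is also given, where f is the fraction of execution time that can use the enhancement.

    \[
    S(N) \;=\; \frac{T_{1}}{T_{N}},
    \qquad
    S_{\text{overall}} \;=\; \frac{1}{(1-f) + \dfrac{f}{S_{\text{enhanced}}}}
    \]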

Fig. 4. Speedups of two nodes, four nodes, and eight nodes of Vincster running the parallel matrix multiplication program with various sizes of matrices from 100x100 to 600x600.

Fig. 5. Speedup trends of two nodes, four nodes, and eight nodes of Vincster running the parallel matrix multiplication program with various sizes of matrices from 100x100 to 2000x2000.

V. CONCLUSIONS

We have described our exploration results in constructing Vincster, a low-cost and scalable commodity cluster for HPC using the SSI approach based on Linux/Kerrighed. It has been implemented and deployed with the addition of one headless PC as the head node of the cluster, utilizing 17 compute nodes from the 20 available PCs in a laboratory. Vincster can be used as a solution for utilizing existing outdated commodity IT infrastructure at minimal cost. In addition, due to its cluster scalability, it is easy to add more compute nodes to Vincster in the future.

The system description has been presented, including hardware, software, networking, as well as cost. We also have demonstrated its evaluation results by testing the SSI cluster system, the SSI key services, and the computational performance of parallel matrix multiplication with various sizes of matrices. The SSI approach has given a single and unified view of Vincster with 7 SSI key services. The performance evaluation results show that Vincster improves computational performance with favorable speedup, but consideration should be given to the number of nodes, which should correspond to the problem size of the computation.

There are several issues in Vincster. First, all compute nodes depend on the head node for their booting, systems, applications, and storage. This adds a single point of failure to the system. Second, the Linux kernel used in Vincster is relatively old, and it needs to be upgraded to a newer version to attain the benefits of hardware support and security fixes. Unfortunately, this depends on the Kerrighed version, which only applies to a specific kernel version, in this case 2.6.20 for 32-bit machines. Third, the existing cluster interconnect, which uses Fast Ethernet, is limited by its maximum bit rate for the communications that maintain the cluster system and for the operations when computations run on the cluster.

ACKNOWLEDGMENT

We wish to thank the Department of Electrical Engineering and the Faculty of Engineering, Unsoed, for letting us conduct this research in one of their laboratories. This work was supported by DIPA Unsoed/LPPM Unsoed.

REFERENCES

[1] L.A. Barroso, J. Dean, and U. Hölzle, "Web search for a planet: the Google cluster architecture", IEEE Micro, Vol. 23, No. 2, 2003.
[2] J.L. Hennessy and D.A. Patterson, "Computer Architecture: A Quantitative Approach", Fifth Edition, Morgan Kaufmann, 2012.
[3] C.S. Yeo, et al., "Cluster computing: high-performance, high-availability, and high-throughput processing on a network of computers", Handbook of Nature-Inspired and Innovative Computing, 2006.
[4] T. Sterling, et al., "Beowulf: a parallel workstation for scientific computation", International Conference on Parallel Processing, 1995.
[5] R. Buyya, T. Cortes, and H. Jin, "Single System Image (SSI)", The International Journal of High Performance Computing Applications, Vol. 15, No. 2, 2001, pp. 124-135.
[6] P. Healy, T. Lynn, E. Barrett, and J.P. Morrison, "Single system image: a survey", Journal of Parallel and Distributed Computing, Vol. 90-91, 2016, pp. 35-51.
[7] R. Lottiaux, P. Gallard, G. Vallée, C. Morin, and B. Boissinot, "OpenMosix, OpenSSI, and Kerrighed: a comparative study", IEEE International Symposium on Cluster Computing and the Grid, 2005.
[8] Kerrighed. (2010). What is Kerrighed? [Online]. Available: http://kerrighed.org/wiki/index.php/Main_Page.
[9] E.F. Armay, A. Zulfikar, and H. Simaremare, "Building a cluster of PC in computer laboratory of Electrical Engineering of UIN Suska Riau", International Conference on Distributed Frameworks for Multimedia Applications, 2010.
[10] K.V. Sandhya and G. Raju, "Single System Image clustering using Kerrighed", Third International Conference on Advanced Computing, 2011.
[11] Kerrighed. (2011). kernel/kerrighed Distributed File System [Online]. Available: http://kerrighed.org/wiki/index.php/KernelDevelKdFS.
[12] Kerrighed. (2011). Configurable scheduler framework [Online]. Available: http://kerrighed.org/wiki/index.php/SchedConfig.
