Clusterproc: Linux Kernel Support for Clusterwide Process Management
Bruce J. Walker, Hewlett-Packard, [email protected]
Laura Ramirez, Hewlett-Packard, [email protected]
John L. Byrne, Hewlett-Packard, [email protected]

Abstract

There are several kernel-based clusterwide process management implementations available today, providing different semantics and capabilities (OpenSSI, openMosix, BProc, Kerrighed, etc.). We present a set of hooks to allow various installable kernel module implementations, with a high degree of flexibility and virtually no performance impact. Optional capabilities that can be implemented via the hooks include: clusterwide unique pids, single init, heterogeneity, transparent visibility and access to any process from any node, the ability to distribute processes at exec or fork or through migration, file inheritance and full controlling terminal semantics, node failure cleanup, clusterwide /proc/<pid>, checkpoint/restart, and scaling to thousands of nodes. In addition, we describe an OpenSSI-inspired implementation that uses the hooks and provides all the features described above.

1 Background

Kernel-based cluster process management (CPM) has been around for more than 20 years, with versions on Unix by Locus[1] and Mosix[2]. The Locus system was a general-purpose Single System Image (SSI) cluster, with a single root filesystem and a single namespace for processes, files, networking and interprocess communication objects. It provided high availability as well as a simple management paradigm and load balancing of processes. Mosix focused on process load balancing. The concepts of Locus have moved to Linux via the OpenSSI[3] open source project. Mosix has moved to Linux via the openMosix[4] project.

OpenSSI and Mosix were not initially targeted at large-scale parallel programming clusters (e.g., those using MPI). The BProc[5] CPM project has targeted that environment to speed up job launch and simplify process management and cluster management. More recent efforts by Kerrighed[6] and USI[7] (now Cassat[8]) were also targeted at HPC environments, although Cassat is now interested in commercial computing.

These five CPM implementations have somewhat different cluster models (different forms of SSI) and thus fairly different implementations, in part driven by the environment they were originally developed for. The "Introduction to SSI" paper[10] details some of the differences. Here we outline some of the characteristics relevant to CPM. Mosix started as a workstation technology that allowed a user on one workstation to utilize cpu and memory from another workstation by moving running processes (process migration) to the other workstation. The migrated processes had to see the OS view of the original workstation (home node) since there was no enforced common view of resources such as processes, filesystem, ipc objects, binaries, etc. To accomplish the home-node view, most system calls had to be executed back on the home node; the process was effectively split, with the kernel part on the home node and the application part elsewhere. What this means for process ids is that home nodes generate ids which are not clusterwide unique. Migrated processes retain their home-node pid in a private data structure but are assigned a local pid by the current host node (to avoid pid conflicts). The BProc model is similar, except that there is a single home node (master node) on which all processes are created. These ids are thus clusterwide unique and a local id is not needed on the host node.

The model in OpenSSI, Kerrighed and Cassat is different. Processes can be created on any node and are given a single clusterwide pid when they are created. They retain that pid no matter where they execute. Also, the node the process was created on does not retain part of the process. What this means to the CPM implementation is that actions to be done against processes are done where the process is currently executing and not on the creation or home node.
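To make the home-node bookkeeping concrete, the following is a minimal sketch of the kind of per-process mapping such a model implies; the structure and field names are our own illustration, not taken from any of the implementations above.

/*
 * Illustrative only: the names below are hypothetical, not from
 * Mosix, BProc, or OpenSSI source.
 */
#include <sys/types.h>

/* Home-node model (Mosix/BProc style): the creating node's pid is kept
 * privately and the current host node assigns its own local pid, so the
 * same process is known by two different ids and most operations must
 * be shipped back to home_node. */
struct homenode_pid_map {
	int   home_node;   /* node that created the process */
	pid_t home_pid;    /* pid as known on the home node (not clusterwide unique) */
	pid_t host_pid;    /* pid assigned by the node the process migrated to */
};

/* Clusterwide model (OpenSSI/Kerrighed/Cassat style): the process carries
 * a single pid that is valid on every node, so no such mapping is kept and
 * operations are performed wherever the process currently runs. */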
There are many other differences among the CPM implementations. For example, OpenSSI has a single, highly available init process while most or all other implementations do not. Additionally, BProc does not retain a controlling terminal (or any other open files) when processes move, while other implementations do. Some implementations, like OpenSSI, support clusterwide ptrace, while others do not.

With some of these differences in mind, we next look at the goals for a set of CPM hooks that would satisfy most of the CPM implementations.

2 Goals and Requirements for the Clusterproc Hooks

The general goals for the hooks are to enable a variety of CPM implementations while being non-invasive enough to be accepted in the base kernel. First we look at the base kernel requirements and then some of the functional requirements.

Changes to the base kernel should retain the architectural cleanliness and should not affect performance. Base locking should be used and copies of base routines should be avoided. The clusterproc implementations should be installable modules. It should be possible to build the kernel with the hooks disabled, and that version should have no impact on performance. If the hooks are enabled, the module should be optional. Without the module loaded, the performance impact should be negligible. With the module loaded, one would have a one-node cluster and performance will depend on the CPM implementation.

The hooks should enable at least the following functionality:

• optionally have a per-process data structure maintained by the CPM module;

• allow the CPM module to allocate clusterwide process ids;

• support for distributed process relationships including parent/child, process group and session; optional support for distributed thread groups and ptrace parent;

• optional ability to move running processes from one node to another, either at exec/fork time or at somewhat arbitrary points in their execution;

• optional ability to transparently checkpoint/restart processes, process groups and thread groups;

• optional ability to have processes continue to execute even if the node they were created on leaves the cluster;

• optional ability to retain relationships of remaining processes, no matter which nodes may have crashed;

• optional ability to have full controlling terminal semantics for processes running remotely from their controlling terminal device;

• full, but optional, /proc/<pid> capability for all processes from all nodes;

• capability to support either an "init" process per node or a single init for the entire cluster;

• capability to function within a shared root environment or in an environment with a root filesystem per node;

• capability to be an installable module that can be installed either from the ramdisk/initramfs or shortly thereafter;

• support for clusters of up to 64,000 nodes, with optional code to support larger.

In the next section we detail a set of hooks designed to meet the above set of goals and requirements. Following that is the design of OpenSSI 3.0, as adapted to the proposed hooks.

3 Proposed Hook Architecture, Hook Categories and Hooks

To enable the optional inclusion of clusterwide process management (also referred to as "clusterproc" or CPM) capability, very small data structure additions and a set of entry points are proposed. The data structure additions are a pointer in the task structure (CPM implementations could then allocate a per-process structure that this pointer points to) and two flag bits. The infrastructure for the hooks is patterned after the security hooks, although not exactly the same. If CONFIG_CLUSTERPROC is not set, the hooks are turned into inline functions that are either empty or return the default value. With CONFIG_CLUSTERPROC defined, the hook functions call clusterproc ops if they are defined, otherwise returning the default value. The ops can be replaced, and the clusterproc installable module will replace the ops with routines that provide the particular CPM implementation. The clusterproc module would be loaded early in boot. All the code to support the clusterwide process model would be under GPL. To enable the module, some additional symbols will have to be exported to GPL modules.
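As a rough illustration of this plumbing, the sketch below shows how one hook could be wired up under the two configurations; the struct layout, variable names, and the default behavior chosen for pid_alloc are our assumptions, not the actual patch.

/*
 * A minimal sketch, assuming an ops table patterned after the security
 * hooks.  The names (clusterproc_ops, clusterproc_pid_alloc) are
 * illustrative, not the proposed interface itself.
 */
#ifdef CONFIG_CLUSTERPROC

struct clusterproc_ops {
	int  (*pid_alloc)(int pid);     /* one function pointer per hook */
	void (*single_init)(void);
	/* ... remaining hooks omitted ... */
};

/* Set by the installable CPM module early in boot; NULL until then. */
extern struct clusterproc_ops *clusterproc_ops;

/* Hook wrapper: call the module's op if one is registered, otherwise
 * fall back to the default value (here, the locally allocated pid). */
static inline int clusterproc_pid_alloc(int pid)
{
	if (clusterproc_ops && clusterproc_ops->pid_alloc)
		return clusterproc_ops->pid_alloc(pid);
	return pid;
}

#else /* !CONFIG_CLUSTERPROC */

/* Hooks disabled at build time: the wrapper collapses to an empty
 * inline returning the default, so there is no runtime cost. */
static inline int clusterproc_pid_alloc(int pid)
{
	return pid;
}

#endif /* CONFIG_CLUSTERPROC */

A CPM module would then point clusterproc_ops at its own table when it loads, turning the otherwise pass-through wrappers into calls into the module.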
The proposed hooks are grouped into categories below. Each CPM implementation can provide op functions for all or some of the hooks in each category. For each category we list the relevant hook functions in pseudo-C. The names would actually be clusterproc_xxx, but to fit here we leave out the clusterproc_ part. The parameters are abbreviated. For each category, we describe the general purpose of the hooks in that category and how the hooks could be used in different CPM implementations. The categories are: Init and Reaper; Allocation/Free; Update Parent; Process lock/unlock; Exit/Wait/Reap; Signalling; Priority and Capability; Setpgid/Setsid; Ptrace; Controlling Terminal; and Process movement.

3.1 Init and Reaper

void single_init();
void child_reaper(*pid);

One of the goals was to allow the cluster to run with a single init process for the cluster. The single_init hook in init/main.c, init() can be used in a couple of ways. First, if there is to be a single init, this routine can spawn a "reaper" process that will locally reap the orphan processes that init normally reaps.

3.2 Allocation/Free

The hooks in this category are only used in those implementations presenting a clusterwide single pid space. The pid_alloc hook is called in alloc_pidmap() in pid.c. It takes a locally unique pid and returns a clusterwide unique pid, possibly by encoding a node number in some of the high-order bits. The local_pid and strip_pid hooks are in free_pidmap(), also in pid.c. The local_pid hook returns 1 if this pid was generated locally.
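To illustrate, here is one way a CPM module might implement these three ops, folding the creating node's number into the high-order bits of the pid; the bit layout, the cpm_* names, and the way the node id is obtained are assumptions for the sketch, not the paper's code.

/*
 * Hypothetical Allocation/Free ops for a clusterwide pid space.
 * Fifteen low bits hold the default 32768-entry local pid range, which
 * leaves 16 bits for the node number (enough for the 64,000-node goal)
 * while keeping the result a positive 32-bit pid.
 */
#define CPM_NODE_SHIFT     15
#define CPM_LOCAL_PID_MASK ((1 << CPM_NODE_SHIFT) - 1)

static int cpm_this_node;  /* assumed to be set when the CPM module loads */

/* pid_alloc op: called from alloc_pidmap(); turn the locally unique pid
 * into a clusterwide unique one by encoding the node number. */
static int cpm_pid_alloc(int pid)
{
	return (cpm_this_node << CPM_NODE_SHIFT) | (pid & CPM_LOCAL_PID_MASK);
}

/* local_pid op: called from free_pidmap(); return 1 if this pid was
 * generated on this node, i.e. the local bitmap entry is ours to free. */
static int cpm_local_pid(int pid)
{
	return (pid >> CPM_NODE_SHIFT) == cpm_this_node;
}

/* strip_pid op: recover the locally unique part of the pid so the
 * correct slot in the local pid bitmap can be released. */
static int cpm_strip_pid(int pid)
{
	return pid & CPM_LOCAL_PID_MASK;
}

Functions like these would be installed through the module's ops table (as in the earlier sketch), so a kernel running without the module simply keeps its locally allocated pids.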