Clusterproc: Linux Kernel Support for Clusterwide Process Management

Bruce J. Walker, Laura Ramirez, John L. Byrne
Hewlett-Packard

Abstract

There are several kernel-based clusterwide process management implementations available today, providing different semantics and capabilities (OpenSSI, openMosix, bproc, Kerrighed, etc.). We present a set of hooks to allow various installable kernel module implementations, with a high degree of flexibility and virtually no performance impact. Optional capabilities that can be implemented via the hooks include: clusterwide unique pids, single init, heterogeneity, transparent visibility and access to any process from any node, ability to distribute processes at exec or fork or through migration, file inheritance and full controlling terminal semantics, node failure cleanup, clusterwide /proc/<pid>, checkpoint/restart, and scaling to thousands of nodes. In addition, we describe an OpenSSI-inspired implementation using the hooks and providing all the features described above.

1 Background

Kernel based cluster process management (CPM) has been around for more than 20 years, with versions on Unix by Locus[1] and Mosix[2]. The Locus system was a general purpose Single System Image (SSI) cluster, with a single root filesystem and a single namespace for processes, files, networking and interprocess communication objects. It provided high availability as well as a simple management paradigm and load balancing of processes. Mosix focused on process load balancing. The concepts of Locus have moved to Linux via the OpenSSI[3] open source project. Mosix has moved to Linux via the openMosix[4] project.

OpenSSI and Mosix were not initially targeted at large scale parallel programming clusters (e.g. those using MPI). The BProc[5] CPM project has targeted that environment to speed up job launch and simplify process management and cluster management. More recent efforts by Kerrighed[6] and USI[7] (now Cassat[8]) were also targeted at HPC environments, although Cassat is now interested in commercial computing.

These five CPM implementations have somewhat different cluster models (different forms of SSI) and thus fairly different implementations, in part driven by the environment they were originally developed for. The "Introduction to SSI" paper[10] details some of the differences. Here we outline some of the characteristics relevant to CPM.

Mosix started as a workstation technology that allowed a user on one workstation to utilize cpu and memory from another workstation by moving running processes (process migration) to the other workstation. The migrated processes had to see the OS view of the original workstation (home node), since there was no enforced common view of resources such as processes, filesystem, ipc objects, binaries, etc. To accomplish the home-node view, most system calls had to be executed back on the home node; the process was effectively split, with the kernel part on the home node and the application part elsewhere. What this means for process ids is that home nodes generate ids which are not clusterwide unique. Migrated processes retain their home node pid in a private data structure but are assigned a local pid by the current host node (to avoid pid conflicts). The BProc model is similar, except there is a single home node (master node) on which all processes are created. These ids are thus clusterwide unique, and a local id is not needed on the host node.

The model in OpenSSI, Kerrighed and Cassat is different. Processes can be created on any node and are given a single clusterwide pid when they are created. They retain that pid no matter where they execute. Also, the node the process was created on does not retain part of the process. What this means for the CPM implementation is that actions against a process are performed where the process is currently executing, not on the creation or home node.
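To make the contrast concrete, the two id models can be sketched roughly as follows; the struct and field names are invented for this illustration and are not taken from Mosix, BProc, or OpenSSI source.

    /* Toy sketch only: invented names, for illustration. */
    struct home_node_ids {          /* Mosix/BProc style                                   */
            int home_pid;           /* pid allocated on the home (or master) node          */
            int local_pid;          /* pid assigned by the current host node to avoid      */
                                    /* collisions; unnecessary when a single master node   */
                                    /* hands out all pids, as in BProc                     */
    };

    struct clusterwide_id {         /* OpenSSI/Kerrighed/Cassat style                      */
            int cluster_pid;        /* one clusterwide-unique pid, kept across migration   */
    };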
There are many other differences among the CPM implementations. For example, OpenSSI has a single, highly available init process while most/all other implementations do not. Additionally, BProc does not retain a controlling terminal (or any other open files) when processes move, while other implementations do. Some implementations, like OpenSSI, support clusterwide ptrace, while others do not.

With some of these differences in mind, we next look at the goals for a set of CPM hooks that would satisfy most of the CPM implementations.

2 Goals and Requirements for the Clusterproc Hooks

The general goals for the hooks are to enable a variety of CPM implementations while being non-invasive enough to be accepted in the base kernel. First we look at the base kernel requirements and then some of the functional requirements.

Changes to the base kernel should retain its architectural cleanliness and not affect performance. Base locking should be used and copies of base routines should be avoided. The clusterproc implementations should be installable modules. It should be possible to build the kernel with the hooks disabled, and that version should have no impact on performance. If the hooks are enabled, the module should be optional. Without the module loaded, the performance impact should be negligible. With the module loaded, one would have a one-node cluster, and performance will depend on the CPM implementation.

The hooks should enable at least the following functionality:

• optionally have a per process data structure maintained by the CPM module;
• allow for the CPM module to allocate clusterwide process ids;
• support for distributed process relationships including parent/child, process group and session; optional support for distributed thread groups and ptrace parent;
• optional ability to move running processes from one node to another, either at exec/fork time or at somewhat arbitrary points in their execution;
• optional ability to transparently checkpoint/restart processes, process groups and thread groups;
• optional ability to have processes continue to execute even if the node they were created on leaves the cluster;
• optional ability to retain relationships of remaining processes, no matter which nodes may have crashed;
• optional ability to have full controlling terminal semantics for processes running remotely from their controlling terminal device;
• full, but optional, /proc/<pid> capability for all processes from all nodes;
• capability to support either an "init" process per node or a single init for the entire cluster;
• capability to function within a shared root environment or in an environment with a root filesystem per node;
• capability to be an installable module that can be installed either from the ramdisk/initramfs or shortly thereafter;
• support for clusters of up to 64000 nodes, with optional code to support larger.

In the next section we detail a set of hooks designed to meet the above set of goals and requirements. Following that is the design of OpenSSI 3.0, as adapted to the proposed hooks.

3 Proposed Hook Architecture, Hook Categories and Hooks

To enable the optional inclusion of clusterwide process management (also referred to as "clusterproc" or CPM) capability, very small data structure additions and a set of entry points are proposed. The data structure additions are a pointer in the task structure (CPM implementations could then allocate a per process structure that this pointer points to) and two flag bits. The infrastructure for the hooks is patterned after the security hooks, although it is not exactly the same. If CONFIG_CLUSTERPROC is not set, the hooks are turned into inline functions that are either empty or return the default value. With CONFIG_CLUSTERPROC defined, the hook functions call clusterproc ops if they are defined, otherwise returning the default value. The ops can be replaced, and the clusterproc installable module will replace the ops with routines that provide the particular CPM implementation. The clusterproc module would be loaded early in boot. All the code to support the clusterwide process model would be under GPL. To enable the module, some additional symbols will have to be exported to GPL modules.
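A minimal sketch of how such a hook wrapper might look is given below. The wrapper name, the ops structure layout, and the signatures are assumptions made for illustration, not the proposed patch itself; the point is only the pattern: an empty inline when the config option is off, and an indirect call through a replaceable ops table when it is on.

    #ifdef CONFIG_CLUSTERPROC

    struct clusterproc_ops {
            int  (*pid_alloc)(int pid);      /* assumed op; see the Allocation/Free hooks */
            void (*child_reaper)(int *pid);  /* assumed op */
    };

    /* Set by the clusterproc module at load time; NULL on a plain kernel. */
    extern struct clusterproc_ops *clusterproc_ops;

    static inline int clusterproc_pid_alloc(int pid)
    {
            if (clusterproc_ops && clusterproc_ops->pid_alloc)
                    return clusterproc_ops->pid_alloc(pid);
            return pid;              /* default: keep the locally allocated pid */
    }

    #else  /* !CONFIG_CLUSTERPROC */

    static inline int clusterproc_pid_alloc(int pid)
    {
            return pid;              /* hook compiles away to a trivial inline */
    }

    #endif

With the config option off there is no extra code at the call site at all, which matches the stated requirement that a kernel built without the hooks have no performance impact.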
The proposed hooks are grouped into categories below. Each CPM implementation can provide op functions for all or some of the hooks in each category. For each category we list the relevant hook functions in pseudo-C. The names would actually be clusterproc_xxx, but to fit here we leave out the clusterproc_ prefix. The parameters are abbreviated. For each category, we describe the general purpose of the hooks in that category and how the hooks could be used in different CPM implementations. The categories are: Init and Reaper; Allocation/Free; Update Parent; Process lock/unlock; Exit/Wait/Reap; Signalling; Priority and Capability; Setpgid/Setsid; Ptrace; Controlling Terminal; and Process movement.

3.1 Init and Reaper

    void single_init();
    void child_reaper(*pid);

One of the goals was to allow the cluster to run with a single init process for the cluster. The single_init hook, in init() in init/main.c, can be used in a couple of ways. First, if there is to be a single init, this routine can spawn a "reaper" process that will locally reap the orphan processes that init normally reaps.

3.2 Allocation/Free

The pid allocation and free hooks are only used in those implementations presenting a clusterwide single pid space. The pid_alloc hook is called in alloc_pidmap() in pid.c. It takes a locally unique pid and returns a clusterwide unique pid, possibly by encoding a node number in some of the high order bits. The local_pid and strip_pid hooks are in free_pidmap(), also in pid.c. The local_pid hook returns 1 if this pid was gen-
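As a rough illustration of that node-number encoding, a CPM module might pack the node id into the high-order bits of the pid as sketched below. The 16/16 split is an assumption made for this sketch (it covers the stated goal of 64000 nodes) and it ignores the kernel's configured pid range; it is not the layout used by OpenSSI or any other implementation.

    #define CP_NODE_SHIFT 16u
    #define CP_LOCAL_MASK 0xFFFFu

    /* Illustration only: combine a node number and a locally unique pid. */
    static inline unsigned int cp_make_cluster_pid(unsigned int node,
                                                   unsigned int local_pid)
    {
            return (node << CP_NODE_SHIFT) | (local_pid & CP_LOCAL_MASK);
    }

    static inline unsigned int cp_pid_node(unsigned int cpid)
    {
            return cpid >> CP_NODE_SHIFT;
    }

    static inline unsigned int cp_pid_local(unsigned int cpid)
    {
            return cpid & CP_LOCAL_MASK;
    }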