Processes and Threads

Distributed Computer Systems: Processes and Threads
Dr. Jack Lange, Computer Science Department, University of Pittsburgh, Fall 2015

Outline
- Heavyweight processes
- Threads and thread implementation
  - User-level threads
  - Kernel-level threads
  - Lightweight processes
  - Scheduler activation
- Client/server implementation
  - Multithreaded servers
  - X Window server
  - General design issues

PROCESS CONCEPT

Process Concept
- A primary task of an operating system is to execute and manage processes. So what is a program, and what is a process?
- A program is a specification: data type definitions, rules for how the data can be accessed, and a set of instructions that, when executed, accomplish useful "work".
- A sequential process is the activity resulting from the execution of a program with its data by a sequential processor.
- A program is passive, while a process is active: a process is an instance of a program in execution.

Traditional Process Structure
- Address space
  - Code: the program binary
  - Static and dynamic data
  - Execution stack: subroutine parameters, return addresses, and temporary variables
- Process CPU state
  - Process Status Word (PSW): execution mode, last operation outcome, interrupt level, ...
  - Instruction Register (IR): the instruction currently being executed
  - Program Counter (PC): the next instruction to be executed
  - Stack Pointer (SP)
  - General-purpose registers

PROCESS ADDRESS SPACE

Memory Layout
- 32-bit Linux kernel/user split: user-mode space occupies the lower 3 GB (up to 0xc0000000) and kernel space the upper 1 GB (0xc0000000-0xffffffff).
- 32-bit Windows default split: user-mode space occupies the lower 2 GB (up to 0x80000000) and kernel space the upper 2 GB.

Process Address Space
- Code segment (text): the program's executable statements.
- Data segment, with two categories that occupy a single contiguous area:
  1. Initialized data: static or global data initialized with non-zero values.
  2. Zero-initialized, or Block Started by Symbol (BSS), data: uninitialized static or global data.
- Heap: dynamic memory allocation.
- Stack segment: local variables of the current scope and function parameters.
- From the top of the address space down: the stack (growing downward), the heap, data, and the text segment at address 0. A small example program appears after the process-table discussion below.

PROCESS STATE

Process States
- A process is in one of five states: Created, Ready, Running, Blocked (waiting), or Exit.
- Transitions between the states:
  1. The process enters the ready queue.
  2. The scheduler picks this process (Ready to Running).
  3. The scheduler picks a different process (Running back to Ready).
  4. The process waits for an event such as I/O (Running to Blocked).
  5. The event occurs (Blocked to Ready).
  6. The process exits.
  7. The process is ended by another process.

Process Representation
- To users and to other processes, a process is identified by its unique Process ID (PID); PID <= 30,000 in classic Unix.
- Inside the OS, processes are represented by entries in a Process Table (PT); the PID is an index to (or a pointer to) a PT entry.
- A PT entry is a Process Control Block (PCB): a large data structure that contains, or points to, all information about the process. In Linux it is defined as task_struct, with about 70 fields; in Windows NT it is EPROCESS, with about 60 fields.

What's In A Process Table Entry?
- Process management: registers, program counter, CPU status word, and stack pointer (these may be stored on the stack); process state; priority and scheduling parameters; process ID and parent process ID; signals; process start time; total CPU usage.
- Memory management: pointers to the text, data, and stack segments, or a pointer to the page table.
- File management: root directory, working (current) directory, file descriptors, user ID, group ID.
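Everything in the table above ends up as fields of a kernel data structure. The following is a minimal, hypothetical sketch of such a PCB in C; the real Linux task_struct and Windows EPROCESS are far larger, and every field name here is illustrative rather than taken from either:

```c
struct file;                        /* stand-in for the kernel's open-file object */

/* Hypothetical, heavily simplified process control block. */
struct pcb {
    int            pid;             /* process ID                          */
    int            ppid;            /* parent process ID                   */
    int            state;           /* created/ready/running/blocked/exit  */
    int            priority;        /* scheduling parameter                */

    /* Saved CPU state, filled in on a context switch. */
    unsigned long  pc;              /* program counter                     */
    unsigned long  sp;              /* stack pointer                       */
    unsigned long  psw;             /* processor status word               */
    unsigned long  regs[16];        /* general-purpose registers           */

    /* Memory management. */
    void          *page_table;      /* or pointers to text/data/stack      */

    /* File management. */
    int            uid, gid;        /* user and group IDs                  */
    struct file   *fd_table[64];    /* open file descriptors               */

    /* Accounting. */
    long           start_time;
    long           cpu_time_used;
};
```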
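Returning to the address-space layout: the small example program promised above places one symbol in each segment and prints its address (a sketch only; exact addresses vary by platform, and modern systems randomize them with ASLR):

```c
#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;   /* data segment (initialized data) */
int uninitialized_global;      /* BSS (zero-initialized data)     */

int main(void)
{
    int local = 0;                           /* stack segment */
    int *dynamic = malloc(sizeof *dynamic);  /* heap          */

    printf("text (code): %p\n", (void *)main);
    printf("data       : %p\n", (void *)&initialized_global);
    printf("bss        : %p\n", (void *)&uninitialized_global);
    printf("heap       : %p\n", (void *)dynamic);
    printf("stack      : %p\n", (void *)&local);

    free(dynamic);
    return 0;
}
```

On a 32-bit Linux build, all five addresses fall below the 0xc0000000 user/kernel split, with the text address lowest and the stack address highest.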
Process Information
- Kept per process: the process ID; UID, GID, EUID, and EGID; current working directory (CWD); memory map; signal dispatch table and signal mask; priority; stack; file descriptors; CPU state.

Process CPU State
- The CPU state is defined by the registers' contents:
  - Process Status Word (PSW): execution mode, last operation outcome, interrupt level
  - Instruction Register (IR): the instruction currently being executed
  - Program Counter (PC)
  - Stack Pointer (SP)
  - General-purpose registers

Process Control Block (PCB)
[Figure: the PCB, spanning kernel and user concerns, ties together the saved CPU state (PSW, IR, PC, SP, general-purpose registers), the memory regions (text, data, heap, stack, user storage), open files, accounting information, and priority.]

Operating System Control Structure
[Figure: the OS maintains a memory table, a process table, a device table, and a file table; each process-table entry points to that process's PCB, which describes the memory, devices, and files the process uses.]

CONTEXT SWITCHING

Processes in the OS
- There are two "layers" for processes: the lowest layer of a process-structured OS handles interrupts and scheduling, and above that layer sit the sequential processes.
- Processes are tracked in the process table; each process has a process-table entry. [Figure: processes 0 through N-1 running above the scheduler layer.]

Process Management
- In multiprogramming and time-sharing environments, process management is required: a program may require multiple processes, many processes may be instances of the same program, and many processes from different programs can run simultaneously.
- This raises three questions: How does the OS manage multiple processes running concurrently? What kind of information must be kept? What functions must the OS perform to run processes correctly?

Switching Between Processes
- An important task of the OS is to manage CPU allocation among concurrent processes. When the OS takes the CPU away from a running process, it must perform a switch between the processes.
- This is referred to as a context switch; it is also described as a "state save" followed by a "state restore".
- What is a context? What operations must take place to achieve this switch? How can the OS guarantee that the interrupted process can resume correctly?

Process Context
- The process context is the information about the current process's operational state that must be stored so that the process can later resume from a position identical to the one at which it was interrupted.
- Context information is system dependent and typically includes the user-level, register-level, and system-level context: the address space, stack space, Program Counter (PC), Stack Pointer (SP), Instruction Register (IR), Program Status Word (PSW) and other general processor registers, profiling or accounting information, associated kernel data structures, and the current state of the process.

[Figure: CPU switch from process to process]

Process Context Switch
A context switch proceeds through the following steps (a code sketch follows the list):
1. Save the processor context, including the program counter and other registers.
2. Update the process control block with the new state and any accounting information.
3. Move the process control block to the appropriate queue (ready or blocked).
4. Select another process for execution.
5. Update the process control block of the selected process.
6. Update the memory-management data structures.
7. Restore the context of the selected process and set the PC to that process's code; this is typically implicit in the interrupt-return instruction.
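Since C cannot portably save and restore registers, a real context switch bottoms out in architecture-specific assembly. The sketch below shows only the control flow of the seven steps; every helper name is hypothetical and stands in for kernel code:

```c
enum pstate { CREATED, READY, RUNNING, BLOCKED, EXITED };

struct pcb {                     /* minimal view of the PCB sketched earlier */
    enum pstate  state;
    void        *page_table;
    /* ... saved PC, SP, PSW, registers, accounting information ... */
};

void save_cpu_context(struct pcb *p);     /* asm: dump registers into the PCB  */
void restore_cpu_context(struct pcb *p);  /* asm: reload registers; the return */
                                          /* from interrupt then sets the PC   */
void enqueue(struct pcb *p);              /* place on the ready/blocked queue  */
struct pcb *scheduler_pick(void);         /* scheduling policy decides         */
void switch_page_table(void *pt);         /* MMU update                        */

void context_switch(struct pcb *current, enum pstate new_state)
{
    save_cpu_context(current);            /* step 1: save PC and registers     */
    current->state = new_state;           /* step 2: update the PCB            */
    enqueue(current);                     /* step 3: ready or blocked queue    */

    struct pcb *next = scheduler_pick();  /* step 4: select another process    */
    next->state = RUNNING;                /* step 5: update its PCB            */
    switch_page_table(next->page_table);  /* step 6: memory-management update  */
    restore_cpu_context(next);            /* step 7: restore its context       */
}
```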
Context Switch Design Issues
- Context switching can be costly and must be minimized: it is pure system overhead, since no "useful" work is accomplished while switching.
- The actual cost depends on the OS and on the support provided by the hardware; the more complex the OS and the PCB, the longer the context switch.
- Some hardware provides multiple sets of registers per CPU, allowing multiple contexts to be loaded at once.
- A "full" process switch may require executing a significant number of instructions.

What Happens On A Trap/Interrupt?
1. Hardware saves the program counter (on the stack or in a special register).
2. Hardware loads a new PC and identifies the interrupt.
3. An assembly-language routine saves the registers.
4. The assembly-language routine sets up a new stack.
5. Assembly language calls C to run the service routine.
6. The service routine calls the scheduler.
7. The scheduler selects the process to run next (which might be the one that was interrupted).
8. The assembly-language routine loads the PC and registers for the selected process.

PROCESS CONCEPT REVISITED: HEAVY AND LIGHTWEIGHT PROCESS MODELS

Process Characteristics
- In modern operating systems, the two main characteristics of a process are treated separately:
  - The unit of resource ownership is usually referred to as a process or task.
  - The unit of execution is usually referred to as a thread, or a "lightweight process".

Heavyweight Processes
- A heavyweight process has a virtual address space which holds the process image, plus protected access to processors, files, and other I/O resources.
- Heavyweight processes use message passing or shared memory to communicate with other processes.
- [Figure: a single-threaded process: one thread of control, with its own registers and stack, inside an address space holding code, data, and files.]

The "Heavyweight" Process Model
- A simple, uni-threaded model.
- Security is provided by address-space boundaries.
- High cost for a context switch.
- Coarse granularity limits the degree of concurrency.

THREADS: USER AND KERNEL LEVEL THREADS

Threads: "Processes" Sharing Memory
- Process == address space; thread == program counter, a stream of instructions.
- Two contrasting examples: three processes, each with one thread, versus one process with three threads. In the first case each thread runs in its own user-space address space above the kernel; in the second, all three threads share a single process's address space.

Process and Thread Information
- Per-process items: address space, open files, child processes, signals and handlers, accounting information, global variables.
- Per-thread items (one set for each thread): program counter, registers, stack, and state.
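This sharing is easy to observe with POSIX threads. In the sketch below, one process starts three threads: all of them increment the same global counter, which lives in the shared data segment, while each keeps a local counter on its own private stack:

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;    /* per-process item: one copy, visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    (void)arg;
    int local = 0;         /* per-thread item: lives on this thread's own stack */
    for (int i = 0; i < 1000; i++) {
        local++;
        pthread_mutex_lock(&lock);   /* shared memory, so updates need locking */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("worker done: local = %d\n", local);
    return NULL;
}

int main(void)
{
    pthread_t tid[3];
    for (int t = 0; t < 3; t++)
        pthread_create(&tid[t], NULL, worker, NULL);
    for (int t = 0; t < 3; t++)
        pthread_join(tid[t], NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}
```

Compiled with cc -pthread, each thread reports local = 1000 from its private stack, while shared_counter ends at 3000 because all three threads execute inside one address space.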