PARALLEL COMPUTATION: SYNCHRONIZATION, SCHEDULING, AND SCHEMES

by

JEFFREY MARTIN JAFFE

Submitted to the Department of Electrical Engineering and Computer Science on August 10, 1979 in partial fulfillment of the requirements for the Degree of Doctor of Philosophy

ABSTRACT

There are two primary means of resource allocation in computer systems. One is the powerful mechanism of a centralized resource manager that allocates the resources. An apparently weaker mechanism is for the asynchronous processes of the system to allocate resources among themselves using some form of message passing. This thesis provides a unifying treatment of these two methods. It is shown that a managed system may be simulated by the processes. As a corollary, a wide variety of synchronization algorithms may be accomplished without a manager. The simulation works correctly even in an environment with unreliabilities: processes may die in an undetectable manner, and the memory of the system may be faulty. Thus our general simulation provides the first known algorithm for synchronizing dying processes in a faulty memory environment.

Scheduling jobs on processors of different capabilities is studied. Algorithms are presented for two machine environments that have better performance than previously studied algorithms. These environments are processors of uniformly different speeds with partially ordered tasks, and unrelated processors with independent tasks. In addition, the class of all preemptive schedules for uniform processor systems is studied. These schedules are shown to be more effective than previous analyses had indicated.

Scheduling jobs on functionally dedicated processors is introduced. Algorithms are presented for scheduling jobs on such processors, both in the case that the processors are equally fast and in the case that they are of different speeds.

The expressive power of the data flow schemes of Dennis is evaluated.
It is shown that data flow schemes have the power to express an arbitrary determinate functional. The proof involves a demonstration that "restricted data flow schemes" can simulate Turing Machines. This provides a new, simple basis for computability.

THESIS SUPERVISOR: Albert R. Meyer
TITLE: Professor of Electrical Engineering and Computer Science

Keywords

Chapter 2: distributed control, fair mutual exclusion, fault tolerant computing, mutual exclusion, process systems, resource allocation, resource manager, simulation, stable state, synchronization, unreliable processes

Chapter 3: independent tasks, list schedules, maximal usage schedules, nonpreemptive scheduling, partially ordered tasks, preemptive scheduling, scheduling, task systems, typed task systems, uniform processor system, unrelated processors, worst case performance bounds

Chapter 4: computability, data flow schemes, effective functionals, r.e. program schemes, Turing machines

Acknowledgements

During my three years as a graduate student the primary source of direction has been provided by my thesis supervisor, Albert Meyer. He has suggested exciting research directions to pursue, encouraged me to simplify and clarify my ideas, and provided inspiring technical contributions. Most importantly, he has unselfishly devoted an enormous amount of time to the development of the concepts in this thesis and to the presentation of the results.

My two thesis readers are to be credited for sparking my interest in the general topics covered in this thesis. Ron Rivest deserves primary credit for training me in the techniques of algorithm design and analysis and giving me much feedback on the presentation of the thesis. The data flow machine project of Jack Dennis helped develop my interest in parallel computation.

There are many others who have contributed to my research by providing new insights or working together with me on some of the technical results.
While I cannot begin to thank them all for the various levels of help they have provided, I would like to single out those who have made the most outstanding contributions: A. Baratz, E. Davis, I. Greif, D. Harel, E. Jaffe, D. Kessler, A. LaPaugh, E. Lloyd, M. Loui, C. Papadimitriou, V. Pratt, M. Rabin, A. Shamir, and G. Stark.

Since no research can be done in a vacuum, I must thank all of those friends and family whose kinship and support are necessary ingredients of a research effort. Most importantly, I must express my eternal gratitude to my wife, Esther, for all of her help. It is a rare blessing to be close to someone who provides not only the necessary love and moral support at home, but also a vast reservoir of technical assistance in my work.

This thesis was prepared with the support of a National Science Foundation graduate fellowship, and National Science Foundation grant no. MCS77-19754.

Table of Contents

Abstract
Keywords
Acknowledgements
Table of Contents
Index of Theorems and informal description of content
Index of Lemmas
Index of Equations
Index of Figures and Tables

1. Introduction
1.1 Major goals and accomplishments of thesis
1.2 Reasons for parallel computation
1.3 Some problems associated with parallel computation

2. Synchronization
2.1 Introduction
2.1.1 Background and motivation
2.1.2 Unreliability assumptions and their motivation
2.2 Basic definitions
2.2.1 Process systems and the definition of simulation
2.2.2 Properties of simulation
2.2.3 Process systems with unreliable memory
2.2.4 Managed system of processes and the main results of Chapter 2
2.3 An example
2.4 Cooperating system of n simulators
2.4.1 Overview of the system
2.4.2 Memory of the cooperating system
2.4.3 State transitions of the first n processes
2.4.4 Simulation of the RM (state transitions of the simulators)
2.5 Proof of Theorem 2.1
2.6 Discussion of unreliability properties
2.7 Open problems and further work

3. Scheduling Theory
3.1 Introduction
3.2 Scheduling tasks on processors of uniformly different speeds
3.2.1 Basic definitions and models
3.2.2 Nonpreemptive scheduling of tasks on uniform processors
3.2.3 Preemptive scheduling of tasks on uniform processors
3.2.4 Scheduling with limited information
3.3 Nonpreemptive scheduling of independent tasks on unrelated processors
3.4 Scheduling tasks on processors of different types
3.4.1 Basic definitions and models
3.4.2 Nonpreemptive scheduling of tasks on equally fast processors of different types
3.4.3 Nonpreemptive scheduling of tasks on processors of different types and different speeds
3.4.4 Maximal usage preemptive scheduling of tasks on processors of different types and different speeds
3.5 Open problems and further work

4. Schemes
4.1 Introduction
4.2 Syntax and semantics of schemes
4.3 Programming techniques
4.4 Simulating Turing Machine computations with restricted data flow schemes
4.5 Simulating arbitrary r.e. program schemes with data flow schemes
4.6 Conclusion and future work

References
Biographical note

Index of Theorems and informal description of content

Theorem 2.1 Cooperating systems simulate managed systems
Theorem 2.2 Every history for a cooperating system matches some history for the corresponding managed system
Theorem 2.3 Managed systems may be partially simulated by systems with n processes
Theorem 2.4 Fair mutual exclusion may be accomplished even with dying processes and unreliable memory
Theorem 2.5 Cooperating systems without errors simulate managed systems
Theorem 3.1 List schedules on the fastest i processors are at most 1 + 2√m times worse than optimal, where m is the number of processors
Theorem 3.2 List schedules on the fastest i processors are at most √m + O(m^(1/4)) times worse than optimal
Theorem 3.3 Any preemptive schedule may be transformed into a maximal usage preemptive schedule in polynomial time
Theorem 3.4 A bound on maximal usage preemptive schedules related to Theorem 3.2
Theorem 3.5 Maximal usage preemptive schedules are at most √m + (1/2) times worse than optimal
Theorem 3.6 An algorithm for independent tasks on unrelated processors is at most 2.5√m times worse than optimal
Theorem 3.7 An algorithm which is similar to the one analyzed in Theorem
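As an illustrative aside on the list-scheduling results indexed above: a list schedule processes tasks in a fixed order, greedily assigning each to a processor. The sketch below is not taken from the thesis; it shows one common greedy variant (assign each task to the uniform-speed processor that would complete it earliest), with all names and the example data chosen for illustration only.

```python
def list_schedule(task_lengths, speeds):
    """Greedy list scheduling of independent tasks on uniform processors.

    task_lengths[k] is the work required by task k; speeds[j] is the
    work per unit time of processor j. Each task, in list order, goes
    to the processor that would finish it earliest.
    Returns (assignment, makespan).
    """
    finish = [0.0] * len(speeds)  # current finishing time of each processor
    assignment = []
    for w in task_lengths:
        # Processor minimizing its finishing time after taking this task.
        j = min(range(len(speeds)), key=lambda p: finish[p] + w / speeds[p])
        finish[j] += w / speeds[j]
        assignment.append(j)
    return assignment, max(finish)

# Small example: one fast processor (speed 2) and two slow ones (speed 1).
tasks = [4, 3, 3, 2, 2, 1]
speeds = [2.0, 1.0, 1.0]
assign, makespan = list_schedule(tasks, speeds)
print(assign, makespan)  # -> [0, 1, 2, 0, 0, 1] 4.0
```

The bounds indexed as Theorems 3.1 and 3.2 concern how far such greedy schedules (restricted to the fastest i processors) can fall from optimal in the worst case; this sketch only computes a schedule, not the bound.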