UvA-DARE (Digital Academic Repository)

On the construction of operating systems for the Microgrid many-core architecture
van Tol, M.W.

Publication date: 2013

Citation for published version (APA):
van Tol, M. W. (2013). On the construction of operating systems for the Microgrid many-core architecture.

UvA-DARE is a service provided by the library of the University of Amsterdam (https://dare.uva.nl)

CHAPTER 2

Operating Systems

2.1 Introduction

This chapter covers the background and state of the art that lay the foundation for the rest of this thesis. The background information is divided into three parts. We start by looking at the history of operating systems, in order to understand what an operating system is and what it is designed to do as these systems developed over time. Then, we identify the fundamental low-level abstractions that an operating system is required to provide, which we will use throughout this thesis as a target for what we try to achieve. We conclude our historical perspective with an overview of the main types of operating system (kernel) designs.
Having formed our idea of the minimal set of functions we have to implement, we move to the hardware side in the second part of this chapter to identify the kinds of hardware architectures to which these ideas have to be applied. We show why we are moving towards more and more cores on a single chip, and try to give an outlook on what such systems might look like, taking recent many-core research into account. We observe that future many-core systems are likely to have a more distributed memory organization, can be heterogeneous, and that there is a case to be made for investigating hardware-supported concurrency management.

This outlook on future hardware platforms gives us the context in which we want to look at operating systems, and this is what we do in the third part of this chapter. As we have observed a move towards, and a requirement for, more distributed organizations, we review several pioneering parallel and distributed operating systems. Then, we discuss several recent designs in the field of multi- and many-core operating systems. We conclude the third part with a discussion of contemporary systems and the problems they face in scaling to many-core architectures. We also show how they have been used to scale up cluster architectures, and what kind of programming systems have been proposed to utilize these.

2.2 What is an operating system: A historical perspective

2.2.1 Early history

At the dawn of the computer age in the 1940s, computers were expensive and consisted of very large room-sized installations. When programmers wanted to run a program on such a machine, they would get a time slot allocated, go to the computer room at the given time, and have the whole machine to themselves. They would have full control over the entire machine, input their program, and receive the results as soon as the machine produced them.
This situation started to change in the 1950s, at the same time that the first high-level programming languages such as FORTRAN, Lisp and COBOL started to emerge, accompanied by their respective compilers and interpreters. Libraries were developed for frequently used functions, but also for controlling external devices, so that programmers would not have to go through the cumbersome process of writing complicated low-level device control code for each of their programs. Programming became easier and more people started to compete for computer time. In order to use the expensive computer time as efficiently as possible, the computer would be controlled by a dedicated person, the operator, who would accept input jobs (usually on punched card decks or paper tape) from the programmers, run them on the computer, and deliver back the output. The first pieces of software that could be called an operating system were built to automate, and therefore speed up, the process performed by the operator: to load and run a job, store or print the output, and prepare the machine again for the next job. These first operating systems, or batch processing systems, emerged in the mid 1950s; the director tape [128, 127] was developed at MIT to automatically control their Whirlwind system, and around the same time a job batching system was developed at General Motors Research Laboratories for the IBM 701 and 704, which was then integrated with the first FORTRAN compiler from IBM [150].

2.2.2 1960s: Multiprogramming and time-sharing systems

These batch processing systems were further expanded in the 1960s into multiprogramming systems, which could run multiple batch jobs at the same time. The main reason for this was that a computer, and therefore its CPU time, was expensive.
CPU time could be used much more efficiently by running multiple jobs and switching between them when one had to wait on a slow external unit, for example by using the (otherwise idle) cycles of the processor for a different task while it was waiting on an I/O operation. This was the first time concurrency and parallelism¹ were introduced in systems, posing new challenges; "The tendency towards increased parallelism in computers is noted. Exploitation of this parallelism presents a number of new problems in machine design and in programming systems" was stated in the abstract of a 1959 paper [38] on a multiprogramming operating system for the revolutionary IBM STRETCH computer (the predecessor of the successful System/360 mainframe series).

¹ We separate the concepts of concurrency and parallelism as follows: concurrency is the relaxation of scheduling constraints in programs, and parallelism is the simultaneous execution of operations in time, which is made possible by the existence of concurrency.

The downside of the batch and multiprogramming systems was the long turnaround time of running an individual job. Programmers could no longer interactively debug their programs as they could years before, when they had the whole machine to themselves; instead, they would have to wait for hours or days before the output of their job was returned to them. With the invention of a new memory technology (core memory), memories became cheaper and therefore larger, allowing multiple programs to be loaded into the system memory at the same time. To increase the productivity and efficiency of the programmers, interactive or time-sharing systems were designed [40], where multiple users would be interactively connected to the system, executing commands, compiling and running jobs, and getting feedback from the system immediately.
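The job switching described above can be sketched as a toy cooperative scheduler. This is our own illustration, not any historical system: each job hands back the CPU whenever it would block on I/O, so the processor runs another job instead of sitting idle. Job names and step counts are invented for the example.

```python
# Toy illustration of multiprogramming: jobs yield the CPU when they
# would block on I/O, and a scheduler switches to the next runnable job.

def job(name, steps):
    """Alternate compute steps with (simulated) blocking I/O requests."""
    for i in range(steps):
        yield (name, "compute", i)  # uses the CPU
        yield (name, "io-wait", i)  # would otherwise leave the CPU idle

def run(jobs):
    """Round-robin scheduler: on every 'block', switch to the next job."""
    trace = []
    while jobs:
        for j in list(jobs):
            try:
                trace.append(next(j))
            except StopIteration:
                jobs.remove(j)  # job finished, drop it from the mix
    return trace

trace = run([job("A", 2), job("B", 2)])
# The trace interleaves the two jobs: while A waits on I/O, B computes.
```

The generators stand in for jobs voluntarily releasing the processor; a real multiprogramming system would instead switch on the hardware's I/O interrupt.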
This required additional techniques to safely share the resources of the system, for example memory protection. CPU sharing was achieved by time-slicing the execution of programs, switching between them multiple times per second, giving users the illusion of simultaneous execution. The development of these systems also led to improved hardware features in the mid 1960s to support them, such as virtual memory and complete machine virtualization [42].

One of the early time-sharing systems was Multics [41], which led to the development of Unix [124]; together, they laid the foundations for the principles and design of many operating systems in use today. Multics was envisioned as a utility computing facility, similar to telephone and electricity services. It was therefore designed to run continuously and reliably, and to serve many users. In order to support this, it employed many techniques that are still ubiquitous in systems today, such as virtual memory, paging, dynamic linking and processes [49, 45], multiprocessor support, security, hot-swapping, and much more.

2.2.3 Operating system abstractions

These operating systems laid the foundations for the most important concepts and abstractions used in the operating systems of today. Many new developments have been made in operating systems since then, for example networking and integration with Internet technology, as well as graphical user interfaces. However, even though some of these new developments have been deeply embedded into several modern operating systems, the core functionality of these systems has not changed. These developments can be seen as systems that build on top of the original foundations, and that might make use of specific I/O devices such as network adapters or graphics cards.
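One of the mechanisms mentioned above, paged virtual memory, can be made concrete with a toy address translation. This sketch is ours, with an invented page size and mapping, and does not model any real machine's page-table format: each process has a table mapping its virtual pages to physical frames, and touching an unmapped page raises a fault that the operating system must handle.

```python
# Toy paged address translation; page size and mappings are invented.
PAGE_SIZE = 256  # bytes per page in this example

def translate(page_table, vaddr):
    """Translate a virtual address via a per-process page table.

    An unmapped page raises a fault, which a real OS would service by
    loading the page from backing store or signalling the process.
    """
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: virtual page {page} not mapped")
    return page_table[page] * PAGE_SIZE + offset

pt = {0: 5, 1: 2}          # virtual page -> physical frame
phys = translate(pt, 260)  # virtual page 1, offset 4 -> frame 2, offset 4
```

Because every process gets its own table, two programs can use the same virtual addresses without touching each other's memory, which is exactly the protection that safe sharing requires.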
Taking the history of operating systems into consideration, we identify the basic definition and core purpose of an operating system as follows:

• An operating system gives a uniform abstraction of the resources in a machine to programs running on top of it, using well-defined interfaces. This allows for portability: a program can run on another platform without having to be aware of the differences in the underlying hardware.
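This uniform-abstraction idea can be illustrated with a small sketch of our own (not the thesis author's): a routine written against a generic byte-stream interface works unchanged whether the system hands it a disk file, a pipe, or a socket, because all of them expose the same read operation. Here a Python in-memory buffer stands in for any such device.

```python
import io

def count_bytes(stream, chunk_size=4096):
    """Count bytes from any object exposing the uniform .read() interface.

    The caller never touches the underlying device: the OS (mimicked here
    by Python file-like objects) hides disk, pipe, and network differences
    behind one well-defined interface.
    """
    total = 0
    while True:
        data = stream.read(chunk_size)
        if not data:  # empty read signals end of stream
            return total
        total += len(data)

# An in-memory buffer stands in for whatever device the OS could expose.
n = count_bytes(io.BytesIO(b"hello, operating system"))
```

The routine contains no device-specific code, which is precisely what makes it portable across platforms that provide the same interface.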