COMP3231/9201/3891/9283 (Extended) Operating Systems
Kevin Elphinstone

Course Outline
• Prerequisites
  – COMP2011 Data Organisation
    • Stacks, queues, hash tables, lists, trees, heaps, ...
  – COMP2021 Digital Systems Structure
    • Assembly programming
    • Mapping of a high-level procedural language to assembly language
  – or the postgraduate equivalent
  – You are expected to be competent programmers!!!!
• We will be using the C programming language
  – The dominant language for OS implementation.
  – You need to understand pointers, pointer arithmetic and explicit memory allocation.

Why does this fail?

    void func(int *x, int *y)
    {
        *x = 1;
        *y = 2;
    }

    void main()
    {
        int *a, *b;
        func(a, b);
    }

Lectures
• Common for all courses (3231/3891/9201/9283)
• Wednesday, 2-4pm
• Thursday, 5-6pm
• All lectures are here (EE LG03)
• The lecture notes will be available on the course web site
  – Available prior to lectures, when possible.
• The lecture notes and textbook are NOT a substitute for attending lectures.

Tutorials
• Start in week 2
• A tutorial participation mark will contribute to your final assessment.
  – Participation means participation, NOT attendance.
  – COMP9201/3891/9283 students are excluded.
• You will only get participation marks in your enrolled tutorial.

Assignments
• Assignments form a substantial component of your assessment.
• They are challenging!!!!
  – Because operating systems are challenging.
• We will be using OS/161,
  – an educational operating system
  – developed by the Systems Group at Harvard
  – containing roughly 20,000 lines of code and comments.

Assignments
• Assignments are done in pairs.
  – Info on how to pair up will be available soon.
• We usually offer advanced versions of the assignments.
  – The available bonus marks are small compared to the amount of effort required.
  – Students should do them for the challenge, not the marks.
  – Attempting the advanced component is not a valid excuse for failing to complete the normal component of the assignment.
• Extended OS students (COMP3891/9283) are expected to attempt the advanced assignments.

Assignments
• Don't underestimate the time needed to do the assignments.
  – ProfQuotes: [About the midterm] "We can't keep you working on it all night, it's not OS." Ragde, CS341
• If you start a couple of days before they are due, you will be late.
• To encourage you to start early:
  – Bonus 10% of the assignment's maximum mark for finishing a week early.
  – To iron out any potential problems with the spec, a 5% bonus for finishing within 48 hours of assignment release.
  – See the course handout for exact details. Read the fine print!!!!

Assignments
• Four assignments
  – Due roughly in weeks 3, 6, 9 and 13.
• The first one is trivial.
  – It is a warm-up to familiarise you with the environment, and easy marks.
  – Do not use it as a gauge for judging the difficulty of the following assignments.

Assignments
• Late penalty: 4% of the total assignment value per day.
  – Example: the assignment is worth 20%, you get 18, and you are 2 days late. Final mark = 18 - (20 x 0.04 x 2) = 16.4
• Assignments are only accepted up to one week late; 8 or more days late = 0.
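A minimal sketch of the late-penalty arithmetic above, for reference only (the helper name and the exact rounding behaviour are assumptions, not course policy):

    #include <stdio.h>

    /* Late penalty as described above: 4% of the total assignment value
     * per day late, and no marks at all once 8 or more days late. */
    static double late_mark(double raw_mark, double assignment_value, int days_late)
    {
        if (days_late >= 8)
            return 0.0;
        return raw_mark - assignment_value * 0.04 * days_late;
    }

    int main(void)
    {
        /* The worked example from the slide: worth 20%, raw mark 18, 2 days late. */
        printf("%.1f\n", late_mark(18.0, 20.0, 2));   /* prints 16.4 */
        return 0;
    }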
Assignments
• To help you with the assignments,
  – we dedicate one tutorial per assignment to discussing issues related to that assignment.
  – Prepare for them!!!!!

Plagiarism
• We take cheating seriously!!!
• Penalties include:
  – Copying of code: 0 FL
  – Help with coding: negative half of the assignment's maximum marks
  – Originator of a plagiarised solution: 0 for the particular assignment
  – Team work outside your group: 0 for the particular assignment

Cheating Statistics
  Session:            1998/S1  1999/S1  2000/S1  2001/S1  2001/S2  2002/S1  2002/S2  2003/S1  2003/S2
  Enrolment:          178      410      320      300      107      298      156      333      133
  Suspected cheaters: 10 (6%)  26 (6%)  22 (7%)  26 (9%)  20 (19%) 15 (5%)  ??? (?%) 13 (4%)  ??? (?%)
  Full penalties*:    *  *  *  *  2  6  9  14  109521
  Reduced penalties:  7157754229
  Cheaters failed:    4  10  16  16  10  12  5  4  ?
  Cheaters suspended: 0  0  1  0  0  1  0  0  0
  *Note: the full penalty (0 FL) was not applied prior to 2001/S1.

Exams
• There is NO mid-session exam.
• The final written exam is 2 hours.
• Supplementary exams are oral.
  – Supplementaries are available according to school policy, not as a second chance.

Assessment
• Exam Mark Component
  – Maximum mark of 100
  – Based solely on the final exam
• Class Mark Component
  – Maximum mark of 100
  – 10% tutorial participation
  – 90% assignments

3891/9201/9283
• No tutorial participation component
• Assignment marks scaled to 100

Undergrad Assessment
• The final mark is the maximum of a 50/50 weighted harmonic mean and a 20/80 weighted harmonic mean of the exam mark E and the class mark C.
  – You can weight the final mark heavily towards the exam if you can't commit the time to the assignments.
  – You are rewarded for seriously attempting the assignments.
• If E >= 40:

      M = max( 2EC / (E + C), 5EC / (E + 4C) )

Postgrads (9201/9283)
• The final assessment is the harmonic mean of the exam and class components.
• If E >= 40:

      M = 2EC / (E + C)

[Chart: Harmonic Mean (Class Mark = 100 - Exam Mark): final mark versus exam mark for the Harm 50/50, Arith 50/50, Arith 20/80 and Harm 20/80 weightings.]

Assessment
• If E < 40:

      M = min( 44, 2EC / (E + C) )

[Chart: Final Mark = 50: the exam mark required to pass versus class mark, for the Harm 50/50 and Harm 20/80 weightings.]

Assessment
• You need to perform reasonably consistently in both the exam and class components.
• The harmonic mean only has a significant effect when there is significant variation.
• We reserve the right to scale, and to scale courses individually if required.
  – Warning: we have not scaled in the past.
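A small sketch of the marking rules above, to make the formulas concrete (this is a reading of the reconstructed formulas only, not an official marking script; the function names and example marks are made up):

    #include <stdio.h>

    /* Weighted harmonic means of exam mark e and class mark c:
     * 2ec/(e+c) is the 50/50 weighting, 5ec/(e+4c) the exam-heavy 20/80 one. */
    static double harm_50_50(double e, double c) { return 2.0 * e * c / (e + c); }
    static double harm_20_80(double e, double c) { return 5.0 * e * c / (e + 4.0 * c); }

    /* Undergrad final mark per the rules above. */
    static double undergrad_mark(double e, double c)
    {
        double m5050 = harm_50_50(e, c);
        if (e < 40.0)
            return m5050 < 44.0 ? m5050 : 44.0;   /* capped at 44 if exam < 40 */
        double m2080 = harm_20_80(e, c);
        return m5050 > m2080 ? m5050 : m2080;     /* best of the two weightings */
    }

    int main(void)
    {
        printf("E=70, C=50 -> %.1f\n", undergrad_mark(70.0, 50.0));
        printf("E=35, C=90 -> %.1f\n", undergrad_mark(35.0, 90.0));
        return 0;
    }

Note how the exam-heavy 20/80 weighting rescues a strong exam performance, while a failed exam (E < 40) caps the final mark at 44 regardless of the class mark.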
Textbook
• Andrew Tanenbaum, Modern Operating Systems, 2nd Edition, Prentice Hall.

References
• A. Silberschatz and P.B. Galvin, Operating System Concepts, 6th ed., Addison Wesley, 2001.
• William Stallings, Operating Systems: Internals and Design Principles, 4th edition, Prentice Hall, 2001.
• A. Tanenbaum and A. Woodhull, Operating Systems: Design and Implementation, ??? Edition, Prentice Hall.
• John O'Gorman, Operating Systems, Macmillan, 2000.
• Uresh Vahalia, UNIX Internals: The New Frontiers, Prentice Hall, 1996.
• McKusick et al., The Design and Implementation of the 4.4BSD Operating System, Addison Wesley, 1996.

Consultations/Questions
• Questions should be directed to the forum.
• Admin-related queries: Charles Gray, [email protected]
• Personal queries can be directed to me: [email protected]
• We reserve the right to ignore email sent directly to us (including tutors) if it should have been directed to the forum.
• Consultation times
  – TBA

Course Outline
• Processes and threads
• Concurrency control
• Memory management
• File systems
• I/O and devices
• Security
• Scheduling

What is an Operating System?

Introduction to Operating Systems (Chapter 1 - 1.3)

Viewing the Operating System as an Abstract Machine
• Extends the basic hardware with added functionality.
• Provides high-level abstractions
  – More programmer friendly
  – Common core for all applications
• It hides the details of the hardware.
  – Makes application code portable.

Viewing the Operating System as a Resource Manager
• Responsible for allocating resources (CPU, memory, disk, network bandwidth) to users and processes.
• Must ensure
  – no starvation,
  – progress,
  – that allocation follows some desired policy
    • first-come first-served; fair share; weighted fair share; limits (quotas); etc.
  – and, overall, that the system is used efficiently.

Dated View: the Operating System as the Privileged Component
[Diagram: applications run in user mode and issue requests (system calls) to the operating system, which runs in privileged mode.]

The Operating System is Privileged
• Applications should not be able to interfere with or bypass the operating system.
  – The OS can enforce the "extended machine".
  – The OS can enforce its resource allocation policies.
  – It prevents applications from interfering with each other.
• Note: some embedded OSs have no privileged component, e.g. PalmOS.
  – They can implement OS functionality, but cannot enforce it.
• Note: some operating systems implement significant OS functionality in user mode, e.g. User-mode Linux.

Why Study Operating Systems?
• There are many interesting problems in operating systems.
• For a complete, top-to-bottom view of a system.
• To understand the performance implications of application behaviour.
• Understanding and programming large, complex software systems is a good skill to acquire.

(A Brief) Operating System History
• Largely parallels hardware development.
• First generation machines
  – Vacuum tubes
  – Plug boards
    • Programming via wiring
  – Users were simultaneously designers, engineers, and programmers.
  – "Single user"
  – Difficult to debug (hardware).
  – No operating system.

Second Generation Machines: Batch Systems
• IBM 7094
  – 0.35 MIPS, 32K x 36-bit memory
  – 3.5 million dollars
• Batching was used to make more efficient use of the hardware.
  – Share the machine amongst many users
  – One at a time
  – Debugging was a pain.
• No resource allocation issues
  – "One user"

Batch System Operating Systems
• Sometimes called a "resident job monitor".
• Managed the hardware.
• Simple Job Control Language (JCL)
  – Load compiler
  – Compile job
  – Run job
  – End job
• Drink coffee until the jobs finished.

Problem: Keeping Batch Systems Busy
• Reading tapes or punch cards was time consuming.
• The expensive CPU was idle while waiting for input.

Third Generation Systems: Multiprogramming
• Memory is divided among several loaded jobs (e.g. Job 1, Job 2, Job 3, with the OS also resident in memory).
• While one job is loading, the CPU works on another.
• With enough jobs, the CPU is 100% busy.
• Needs special hardware to isolate the memory partitions from each other.
  – This hardware was notably absent on early second-generation batch systems.

Multiprogramming Example

Job Turn-around
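The claim above that the CPU approaches 100% utilisation with enough jobs is commonly illustrated with a simple probabilistic model: if each job spends a fraction p of its time waiting on I/O, then with n jobs in memory the CPU is idle only when all n are waiting at once, giving utilisation of roughly 1 - p^n. The slides' own multiprogramming example is cut off here, so the following is only an illustrative sketch of that standard model, with arbitrary numbers:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed fraction of time a single job spends waiting on I/O. */
        double p = 0.8;

        /* The CPU is idle only when all n jobs wait on I/O simultaneously,
         * so utilisation is approximately 1 - p^n. */
        for (int n = 1; n <= 10; n++)
            printf("jobs = %2d  CPU utilisation ~ %5.1f%%\n",
                   n, 100.0 * (1.0 - pow(p, n)));

        return 0;
    }

With p = 0.8, one job keeps the CPU only 20% busy, while ten jobs push utilisation to roughly 89%, which is the point the multiprogramming slide is making.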