Osprey: Operating System for Predictable Clouds


Jan Sacha, Jeff Napper, Sape Mullender
Bell Laboratories, Alcatel-Lucent
Antwerp, Belgium

Jim McKie
Bell Laboratories, Alcatel-Lucent
Murray Hill, NJ

Abstract—Cloud computing is currently based on hardware virtualization wherein a host operating system provides a virtual machine interface nearly identical to that of physical hardware to guest operating systems. Full transparency allows backward compatibility with legacy software but introduces unpredictability at the guest operating system (OS) level. The time perceived by the guest OS is non-linear. As a consequence, it is difficult to run real-time or latency-sensitive applications in the cloud. In this paper we describe an alternative approach to cloud computing where we run all user applications on top of a single cloud operating system called Osprey. Osprey allows dependable, predictable, and real-time computing by consistently managing all system resources and exporting relevant information to the applications. Osprey ensures compatibility with legacy software through OS emulation provided by libraries and by porting runtime environments. Osprey's resource containers fully specify constraints between applications to enforce full application isolation for real-time execution guarantees. Osprey pushes much of the state out of the kernel into user applications for several benefits: full application accounting, mobility support, and efficient networking. Using a kernel-based packet filter, Osprey dispatches incoming packets to the user application as quickly as possible, eliminating the kernel from the critical path. A real-time scheduler then decides on the priority and order in which applications process their incoming packets while maintaining the limits set forth in the resource container. We have implemented a mostly complete Osprey prototype for the x86 architecture and we plan to port it to ARM and PowerPC and to develop a Linux library OS.

Keywords-cloud; virtualization; predictability; real-time;

I. INTRODUCTION

A growing number of network services today are built on cloud computing infrastructure. Cloud computing is typically based on hardware virtualization wherein a host operating system (virtual machine monitor, or hypervisor) provides a virtual machine interface nearly identical to that of physical hardware. Services with strict latency or timing requirements, such as telecommunications systems, are difficult to run in current clouds: hardware virtualization introduces unpredictability at the guest operating system (OS) level that violates key assumptions the guest system makes about the hardware it is running on. The time perceived by the guest system in a virtual machine is non-linear when sharing hardware, making it difficult to provide reliable timing guarantees. Real-time scheduling is only possible if the guest system cooperates with the host operating system, which violates full transparency.

Although hardware virtualization allows backward compatibility with legacy software, full transparency comes at a high cost. In addition to real-time scheduling problems, virtualization also hides the real properties of redundant system components, such as the storage devices, compute nodes, and network links of different systems, up to the level of the datacenter.

In paravirtualization [1], [2], [3], the virtual machine interface is close but not identical to the physical hardware, and the guest operating system is aware of virtualization. However, paravirtualization was introduced to simplify the hypervisor and improve application performance rather than to provide predictable performance and dependability. For instance, the hypervisor might provide real-time information to the guest OS, but it does not cooperate in scheduling, and thus the guest OS is not able to provide real-time guarantees to the applications.

In this paper we describe an alternative approach to using hardware virtualization for cloud computing. We abandon the split between host and guest operating systems and run all user applications on top of a single cloud operating system called Osprey. We trade full virtualization transparency for predictable, real-time application execution and fault-tolerance.

Osprey provides a virtual machine interface at the system call level, as opposed to the instruction set (ISA) level of hardware virtualization. This approach, known as OS virtualization [4], [5], incurs lower overhead than hardware virtualization, while both approaches bring similar benefits to cloud computing, including application sandboxing, server consolidation, checkpoint/restart, and migration. Hardware virtualization introduces unnecessary overhead because an application runs on top of two operating systems (host and guest) which have separate sets of drivers, data buffers, resource management policies, etc. that often conflict.

In this paper we claim that OS virtualization allows dependable, predictable, and real-time computing, since the operating system can consistently manage all system resources while exporting relevant information to the application. In Osprey, we combine OS virtualization with library operating systems [6], [7], [8], which implement legacy OS personalities in user space, providing compatibility with legacy software. We implement library OSes on top of the Osprey kernel to enable high performance, real-time guarantees, and scalability on modern, massively multicore hardware. In the following sections we describe the key features that allow Osprey to meet these goals.

II. PREDICTABLE PERFORMANCE

We expect that the CPU core count in future hardware will gradually increase and that memory access will become less and less uniform. Consequently, an operating system for future clouds should be able to execute real-time processes efficiently on multi-core NUMA hardware. Similar to the multikernel model [9], we partition most of the kernel state and physical memory in Osprey between CPU cores in order to reduce synchronization cost, increase parallelism, maximize the efficiency of CPU memory caches, and reduce the load on CPU interconnects. In particular, we pin user processes to cores and we run an independent scheduler on each core. A load balancer occasionally migrates processes between cores. Given a large number of cores in the system and a smart process placement heuristic, we expect cross-core process migrations to be relatively infrequent. Each core is assigned its own memory allocation pool to reduce cross-core memory sharing. Parts of the kernel address space are shared between the cores, but there is also a region that is private to each core.

Osprey supports several types of locks and semaphores for synchronization. However, since most kernel data structures are partitioned between cores, access to these data structures can be protected using cheap locks which disable interrupts and do not require busy waiting. We use global system locks infrequently and keep critical code sections to a necessary minimum, both to avoid wasting CPU cycles on lock contention and to eliminate the unknown wait times that would break real-time guarantees.

Osprey generally attempts to minimize the number of traps, interrupts, and context switches, since they have an adverse effect on system performance and real-time guarantees. Clock and inter-core interrupts are only used if a [...]

Inter-process communication, as well as user-kernel communication, is performed using shared-memory asynchronous message queues. Traditional trap-based system calls are replaced in Osprey by these message queues, which allow user processes to batch multiple kernel requests to reduce context switches. Processes need to trap to the kernel only when a process fills up its queue or waits for an incoming message. Message-based system calls also allow the kernel to handle requests on a different core while the user process runs in parallel, maximizing throughput.
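To make the queue mechanism concrete, the sketch below shows one shape such a user-kernel message queue could take: a single-producer/single-consumer ring in shared memory, where the process batches requests and traps only when the ring fills or it must wait for a reply. This is an illustrative C sketch, not Osprey's actual interface; all names (sysring, sysmsg, sys_enqueue) are invented.

    /* Hypothetical sketch of a shared-memory system-call ring in the
     * spirit of Osprey's message-based system calls; the layout and
     * names are illustrative, not Osprey's real ABI. */
    #include <stdatomic.h>
    #include <stdint.h>

    enum { RING_SLOTS = 256 };

    struct sysmsg {                     /* one batched kernel request */
        uint32_t op;                    /* request type, e.g. a read */
        uint64_t arg[4];                /* request arguments */
        int64_t  result;                /* filled in by the kernel */
    };

    struct sysring {                    /* mapped into user and kernel */
        _Atomic uint32_t head;          /* advanced by kernel (consumer) */
        _Atomic uint32_t tail;          /* advanced by process (producer) */
        struct sysmsg    slot[RING_SLOTS];
    };

    /* Enqueue a request without trapping.  Returns 0 if the ring is
     * full, in which case the caller traps so the kernel drains it. */
    static int sys_enqueue(struct sysring *r, const struct sysmsg *m)
    {
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
        if (tail - head == RING_SLOTS)
            return 0;                   /* full: fall back to a trap */
        r->slot[tail % RING_SLOTS] = *m;
        /* Publish the slot contents before moving the tail. */
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return 1;
    }

Under a scheme like this, a process can issue many requests back to back and hand them to the kernel in one batch, which is what eliminates most per-call traps and context switches.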
Since writing event-driven applications is considered difficult, Osprey provides a lightweight, cooperative user-level threading library on top of asynchronous system calls. Osprey threads require almost no locking. When a thread yields or blocks waiting for a message, the library schedules another runnable thread, allowing the application to perform multiple system calls in parallel without ever trapping or blocking the process.
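A minimal cooperative scheduler of the kind the paragraph describes might look as follows. The sketch uses POSIX ucontext purely for illustration; it is not Osprey's threading library, and every name in it is hypothetical. Because control transfers only at explicit yield or block points, threads can touch shared state between those points without locks.

    /* Illustrative cooperative user-level threads on POSIX ucontext. */
    #include <stddef.h>
    #include <ucontext.h>

    enum { MAXTHREADS = 64, STACKSZ = 16384 };

    struct uthread {
        ucontext_t ctx;
        int        runnable;
    };

    static struct uthread thr[MAXTHREADS];
    static char stacks[MAXTHREADS][STACKSZ];
    static int nthreads, cur;

    int uthread_create(void (*fn)(void))
    {
        struct uthread *t = &thr[nthreads];
        getcontext(&t->ctx);
        t->ctx.uc_stack.ss_sp = stacks[nthreads];
        t->ctx.uc_stack.ss_size = STACKSZ;
        t->ctx.uc_link = NULL;
        makecontext(&t->ctx, fn, 0);
        t->runnable = 1;
        return nthreads++;
    }

    /* Switch to the next runnable thread.  In a full system, if no
     * thread were runnable the library would trap to the kernel and
     * sleep until a message arrives; this sketch just keeps looking. */
    void uthread_yield(void)
    {
        int prev = cur;
        do
            cur = (cur + 1) % nthreads;
        while (!thr[cur].runnable);
        if (cur != prev)
            swapcontext(&thr[prev].ctx, &thr[cur].ctx);
    }

    /* Block the caller until its reply arrives; the process as a
     * whole keeps running other threads instead of trapping. */
    void uthread_block(void)
    {
        thr[cur].runnable = 0;      /* marked runnable again on wakeup */
        uthread_yield();
    }

A thread that issues an asynchronous system call simply enqueues the request and blocks; when the kernel's reply shows up in the shared queue, the library marks the thread runnable again.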
III. APPLICATION ISOLATION

Cloud applications often share hardware in order to run in a cost-effective manner, amortizing the costs of execution over multiple applications. It is crucial for a cloud operating system to isolate applications from each other in order to provide them with enough resources to meet their Service-Level Agreements (SLAs). Current SLAs offered by cloud providers specify only an annual uptime, because fine-scale performance is very hard to guarantee with virtual machines sharing hardware resources.

Each application in Osprey runs in a resource container that controls resource consumption by the application. We distinguish between two types of resources: preemptive and non-preemptive. Preemptive resources, such as CPU time or network interface bandwidth, can be freely given to the application and taken back by the kernel when needed. Resource containers thus specify the minimum amount of preemptive resources guaranteed for the application. Non-preemptive resources, such as memory, cannot be easily taken back once granted (note that we abandon swapping due to its high cost), and thus resource [...]
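The preemptive/non-preemptive distinction suggests a container layout along the lines of the sketch below: guaranteed minimums for resources the kernel can reclaim at will, and a hard ceiling for grants it cannot cheaply revoke. The structure and field names are invented for illustration; the excerpt does not specify Osprey's actual accounting interface.

    /* Hypothetical resource container: minimum guarantees for
     * preemptive resources, hard ceilings for non-preemptive ones. */
    #include <stdint.h>

    struct rcontainer {
        /* Preemptive: the kernel guarantees these minimums and may
         * lend spare capacity elsewhere, taking it back when needed. */
        uint32_t cpu_min_ppm;   /* guaranteed CPU share, parts per million */
        uint64_t net_min_bps;   /* guaranteed NIC bandwidth, bits/s */

        /* Non-preemptive: once granted, memory cannot be cheaply
         * revoked (there is no swapping), so grants are capped. */
        uint64_t mem_limit;     /* maximum memory grant, bytes */
        uint64_t mem_granted;   /* outstanding grants, <= mem_limit */
    };

    /* Grant memory only while the container stays under its ceiling. */
    static int rc_grant_mem(struct rcontainer *rc, uint64_t bytes)
    {
        if (bytes > rc->mem_limit - rc->mem_granted)
            return 0;           /* refuse: cannot be taken back later */
        rc->mem_granted += bytes;
        return 1;
    }

Preemptive resources such as the CPU share, by contrast, need no such admission check: the scheduler can simply deschedule the application whenever its guaranteed minimum has been satisfied.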