Operating Systems – the Code We Love to Hate

Total Pages: 16

File Type: PDF, Size: 1020 KB

Operating Systems – the Code We Love to Hate
(Operating and Runtime Systems: Status, Challenges and Futures)
Fred Johnson, Senior Technical Manager for Computer Science
Office of Advanced Scientific Computing Research, DOE Office of Science
Salishan – April 2005

The Office of Science
• Supports basic research that underpins DOE missions.
• Constructs and operates large scientific facilities for the U.S. scientific community: accelerators, synchrotron light sources, neutron sources, etc.
• Six offices: Basic Energy Sciences; Biological and Environmental Research; Fusion Energy Sciences; High Energy Physics; Nuclear Physics; Advanced Scientific Computing Research.

Simulation Capability Needs – FY2005 Timeframe
• Climate Science – calculate chemical balances in the atmosphere, including clouds, rivers, and vegetation; sustained capability needed: > 50 Tflops. Significance: provides U.S. policymakers with leadership data to support policy decisions; properly represent and predict extreme weather conditions in a changing climate.
• Magnetic Fusion Energy – optimize the balance between self-heating of the plasma and heat leakage caused by electromagnetic turbulence; > 50 Tflops. Significance: underpins U.S. decisions about future international fusion collaborations; integrated simulations of burning plasma are crucial for quantifying the prospects for commercial fusion.
• Combustion Science – understand interactions between combustion and turbulent fluctuations in the burning fluid; > 50 Tflops. Significance: understand detonation dynamics (e.g., engine knock) in combustion systems; solve the "soot" problem in diesel engines.
• Environmental Molecular Science – reliably predict chemical and physical properties of radioactive substances; > 100 Tflops. Significance: develop innovative technologies to remediate contaminated soils and groundwater.
• Astrophysics – realistically simulate the explosion of a supernova for the first time; >> 100 Tflops. Significance: measure the size, age, and expansion rate of the Universe; gain insight into inertial fusion processes.

CS research program
• Operating systems: FAST-OS, Scalable System Software ISIC, Science Appliance
• Programming models: MPI/ROMIO/PVFS II
• Runtime libraries: MPI, Global Arrays (ARMCI), UPC (GASNet)
• Performance tools and analysis
• Program development
• Data management and visualization

Outline
• Motivation – so what?
• OSes and architectures
• Current state of affairs
• Research directions
• Final points

OS Noise/Jitter
Fabrizio Petrini, Darren J. Kerbyson, Scott Pakin, "The Case of the Missing Supercomputer Performance: Achieving Optimal Performance on the 8,192 Processors of ASCI Q", in Proc. SuperComputing, Phoenix, November 2003.
[Figure: SAGE on QB, 1-rail (timing.input) – cycle time in seconds vs. number of PEs (1 to 10,000; lower is better), comparing the model against measurements taken 21-Sep and 25-Nov. There is a difference – why?]
The latest two sets of measurements are consistent with each other, but roughly 70% longer than the model predicts.
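The effect shown in the figure can be exposed with a very small experiment. The sketch below is only an illustration of the usual measurement idea – it is not the SAGE benchmark or the instrumentation used in the cited study: it times a fixed work quantum repeatedly on one core, and on a "noisy" node the maximum quantum time is far larger than the minimum.

```c
/* Sketch of a fixed-work-quantum noise probe (illustrative only; not the
 * benchmark from the cited study).  Each iteration performs the same
 * amount of work, so large outliers in its duration indicate that the OS
 * (daemons, interrupts, scheduler) preempted the computation. */
#include <stdio.h>
#include <time.h>

#define ITERS 10000
#define WORK  20000

static double now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(void)
{
    volatile double x = 1.0;
    double min = 1e30, max = 0.0, sum = 0.0;

    for (int i = 0; i < ITERS; i++) {
        double t0 = now_us();
        for (int j = 0; j < WORK; j++)
            x = x * 1.0000001 + 1e-7;          /* fixed work quantum */
        double dt = now_us() - t0;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
        sum += dt;
    }
    /* A large max/min ratio on an otherwise idle node is the signature
     * of OS noise/jitter. */
    printf("quantum: min %.1f us  mean %.1f us  max %.1f us\n",
           min, sum / ITERS, max);
    return 0;
}
```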
OS/Runtime Thread Placement
G. Jost et al., "What Multilevel Parallel Programs Do When You Are Not Watching: A Performance Analysis Case Study Comparing MPI/OpenMP, MLP, and Nested OpenMP", WOMPAT 2004. http://www.tlc2.uh.edu/wompat2004/Presentations/jost.pdf
• Multi-zone NAS Parallel Benchmarks: coarse-grain parallelism between zones, fine-grain loop-level parallelism in the solver routines.
• MLP is 30%–50% faster in some cases. Why?

MPI/MLP Differences
• MPI: initially not designed for NUMA architectures or for mixing threads and processes; the API does not provide support for memory/thread placement; vendor-specific APIs are needed to control thread and memory placement.
• MLP: designed for NUMA architectures and hybrid programming; the API includes a system call to bind threads to CPUs; binding threads to CPUs (pinning) improves the performance of hybrid codes.

Results
• Thread binding and data placement:
  – Bad: the initial process assignment to CPUs is sequential, so subsequent thread placement separates threads from their data (P1, P2).
  – Good: the initial process assignment to CPUs leaves room for the multiple threads and data of a logical process to be "close to each other" (P1, P2).
• "The use of detailed analysis techniques helped to determine that initially observed performance differences were not due to the programming models themselves but rather to other factors."
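The kind of explicit binding MLP relies on can be done by hand on Linux. The sketch below is a generic illustration, not MLP's implementation or the configuration used in the study above: each OpenMP thread of a (possibly hybrid MPI/OpenMP) process pins itself to a core, using a deliberately naive thread-i-to-core-i policy as a stand-in for a real placement scheme.

```c
/* Sketch: bind each OpenMP thread to its own core so threads and the
 * data they touch stay together.  Illustrative only; the thread->core
 * mapping here is naive.  Build with: cc -fopenmp pin.c */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(tid, &mask);                    /* thread i -> core i */
        if (sched_setaffinity(0, sizeof mask, &mask) != 0)
            perror("sched_setaffinity");

        #pragma omp critical
        printf("thread %d bound to core %d\n", tid, tid);
    }
    return 0;
}
```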
NWChem Architecture
• Object-oriented design: abstraction, data hiding, handles, APIs.
• Parallel programming model: non-uniform memory access, Global Arrays, MPI.
• Infrastructure: GA, Parallel I/O, RTDB, MA, ...
• Program modules: communication only through the run-time database; persistence for easy restart.
[Figure: layered architecture – generic tasks (energy, structure, …); molecular calculation modules (SCF energy/gradient, DFT energy/gradient, MD, NMR, solvation, optimize, dynamics, …); the run-time database; the molecular modeling toolkit (basis set object, geometry object, integral API, PeIGS, …); and the molecular software development toolkit (Parallel I/O, Memory Allocator, Global Arrays).]

Outline (revisited): Motivation – so what?; OSes and architectures; Current state of affairs; Research directions; Final points.

The 100,000 foot view
• Full-featured OS (Unix, Linux): full set of shared services; complexity is a barrier to understanding new hardware and software; subject to "rogue/unexpected" effects.
• Lightweight OS: small set of shared services; Puma/Cougar/Catamount (ASCI Red, Red Storm), CNK (BlueGene/L).
• Open source vs. proprietary.
• Interaction of system architecture and OS: clusters, ccNUMA, MPP, distributed/shared memory, bandwidth, …
• The taxonomy is still unsettled: "High-Performance Computing: Clusters, Constellations, MPPs, and Future Directions", Dongarra, Sterling, Simon, Strohmaier, CISE, March/April 2005.

Bird's-eye view of a typical software stack on a compute node
[Figure: HPC system software elements. User space: scientific applications, software development/compilers, parallel resource management/scheduler, runtime support, performance tools, debuggers, programming-model runtime, checkpoint/restart, OS bypass. OS kernel: CPU scheduler, virtual memory manager, IPC, I/O, file systems, sockets, TCP, UDP, checkpoint/restart. Below these, an optional hypervisor and the node hardware: CPU, cache, memory, interconnect adaptor, ….]
Kernel characteristics:
• Monolithic.
• Privileged operations for protection.
• Software tunable.
• General-purpose algorithms.
• Parallel knowledge lives in resource management and the runtime.
• OS bypass for performance.
• Single system image?

A High-End Cluster
[Figure: a compute switch fabric connects SMP compute servers totaling O(10–10^3) TF; a secure control switch fabric connects metadata servers, global name space storage (SGPFS), and visualization servers; user access and archival agents (NFS, DFS, HPSS) sit on their own switch fabrics; HPSS archival storage uses RAITs; disk/memory servers hold O(1–10^2) PB of user data, scratch storage, and out-of-core memory on disks and RAIDs, with locks.]

Clustering software
• Classic: lots of local disks with full root file systems and a full OS on every node; all the nodes are really, really independent (and visible).
• Clustermatic: a clean, reliable BIOS that boots in seconds (LinuxBIOS); boot directly into Linux so the HPC network can be used to boot; see the entire cluster's process state from one node (bproc); fast, low-overhead monitoring (Supermon); no NFS root – root is a local ramdisk.

LANL Science Appliance – scalable cluster computing using Clustermatic
[Figure: compilers, debuggers, and applications run over BJS, Supermon, Beoboot, MPI, and BProc; every node (one master, many slaves) runs v9fs, mon, and parallel I/O clients on Linux booted by LinuxBIOS, joined by high-speed interconnect(s); a v9fs server runs on the environment fileserver and a parallel I/O server on the high-performance parallel fileserver.]
Thanks to Ron Minnich.

LWK approach
• Separate policy decision from policy enforcement.
• Move resource management as close to the application as possible.
• Protect applications from each other.
• Get out of the way.

History of Sandia/UNM Lightweight Kernels
Thanks to Barney Maccabe and Ron Brightwell.

LWK Structure
[Figure: applications 1–3 and the PCT, each linked against libc.a plus libmpi.a, libnx.a, or libvertex.a, run on top of the Q-Kernel, which provides message passing and memory protection.]

LWK Ingredients
• Quintessential Kernel (QK): policy enforcer; initializes hardware; handles interrupts and exceptions; maintains hardware virtual addressing.
• Process Control Thread (PCT): runs in user space with more privileges than user applications; policy maker.
Recommended publications
  • Java Implementation of the Graphical User Interface of the Octopus Distributed Operating System
Java implementation of the graphical user interface of the Octopus distributed operating system. Submitted by Aram Sadogidis; advisor: Prof. Spyros Lalis. University of Thessaly, Volos, Greece, October 2012.
Acknowledgements: I am sincerely grateful for all the people that supported me during my University studies. Special thanks to professor Spyros Lalis, my mentor, who had decisive influence in shaping my character as an engineer. Also many thanks to my family and friends, whose support all these years encouraged me to keep moving forward.
Contents: 1 Introduction; 2 System software technologies (2.1 Inferno OS, 2.2 JavaSE and Android framework, 2.3 Synthetic file systems: 2.3.1 Styx, 2.3.2 Op); 3 Octopus OS (3.1 UpperWare architecture, 3.2 Omero, a filesystem-based window system, 3.3 Olive, the Omero viewer, 3.4 Ox, the Octopus shell); 4 Java Octopus Terminal (4.1 JOlive, 4.2 Desktop version: 4.2.1 Omero package, 4.2.2 ui package, 4.3 Android version: 4.3.1 com.jolive.Omero, 4.3.2 com.jolive.ui, 4.3.3 Pull application's UI); 5 Future perspective (5.1 GPS resources, 5.2 JOp, 5.3 Authentication device, 5.4 Remote Voice commander, 5.5 Conclusion); 6 Thesis preview in Greek.
From the list of figures: "An application operates on a synthetic file as if it is a disk file, effectively communicating with a synthetic file system server."
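The sentence quoted from the list of figures is the core idea behind Styx/Op-style synthetic file systems: a client uses ordinary file I/O and cannot tell that the bytes are generated by a server rather than stored on disk. A minimal illustration, using Linux's /proc as a familiar stand-in for a synthetic file (Octopus/Omero of course serve different names):

```c
/* Reading a synthetic file with the same calls used for a disk file.
 * /proc/uptime is generated on demand by the kernel; no bytes exist on
 * disk.  The path is just a familiar Linux example. */
#include <stdio.h>

int main(void)
{
    char buf[128];
    FILE *f = fopen("/proc/uptime", "r");
    if (!f) { perror("fopen"); return 1; }
    if (fgets(buf, sizeof buf, f))
        printf("read from a synthetic file: %s", buf);
    fclose(f);
    return 0;
}
```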
  • Persistent 9P Sessions for Plan 9
Persistent 9P Sessions for Plan 9
Gorka Guardiola, [email protected]; Russ Cox, [email protected]; Eric Van Hensbergen, [email protected]
ABSTRACT: Traditionally, Plan 9 [5] runs mainly on local networks, where lost connections are rare. As a result, most programs, including the kernel, do not bother to plan for their file server connections to fail. These programs must be restarted when a connection does fail. If the kernel's connection to the root file server fails, the machine must be rebooted. This approach suffices only because lost connections are rare. Across long distance networks, where connection failures are more common, it becomes woefully inadequate. To address this problem, we wrote a program called recover, which proxies a 9P session on behalf of a client and takes care of redialing the remote server and reestablishing connection state as necessary, hiding network failures from the client. This paper presents the design and implementation of recover, along with performance benchmarks on Plan 9 and on Linux.
1. Introduction: Plan 9 is a distributed system developed at Bell Labs [5]. Resources in Plan 9 are presented as synthetic file systems served to clients via 9P, a simple file protocol. Unlike file protocols such as NFS, 9P is stateful: per-connection state such as which files are opened by which clients is maintained by servers. Maintaining per-connection state allows 9P to be used for resources with sophisticated access control policies, such as exclusive-use lock files and chat session multiplexers. It also makes servers easier to implement, since they can forget about file ids once a connection is lost.
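recover's design is described in the paper itself; the sketch below only shows the general shape of the idea of hiding connection failures from the client: retry the operation, and when the connection has died, back off, redial, and try again. The plain-TCP dial helper and the use of port 564 (the registered 9P file-service port) are illustrative assumptions; the real recover proxies an entire 9P session and replays per-connection state (version, attach, walks, opens) after reconnecting.

```c
/* Sketch of the redial-and-retry idea behind recover (not its actual
 * implementation): a read that transparently redials the server when
 * the connection drops, so the caller never sees the failure. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static int dial(const char *ip, int port)        /* minimal TCP "dial" */
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port   = htons(port) };
    inet_pton(AF_INET, ip, &sa.sin_addr);
    if (fd >= 0 && connect(fd, (struct sockaddr *)&sa, sizeof sa) == 0)
        return fd;
    if (fd >= 0)
        close(fd);
    return -1;
}

static ssize_t resilient_read(const char *ip, int port, int *fd,
                              void *buf, size_t n)
{
    for (int attempt = 0; attempt < 5; attempt++) {
        if (*fd >= 0) {
            ssize_t r = read(*fd, buf, n);
            if (r >= 0)
                return r;                /* success (or clean EOF) */
            close(*fd);                  /* connection presumed dead */
            *fd = -1;
        }
        sleep(1u << attempt);            /* back off, then redial */
        *fd = dial(ip, port);
    }
    return -1;                           /* give up; caller sees an error */
}

int main(void)
{
    char buf[128];
    int fd = -1;
    ssize_t n = resilient_read("127.0.0.1", 564, &fd, buf, sizeof buf);
    if (n < 0)
        printf("gave up after repeated failures\n");
    else
        printf("read %zd bytes\n", n);
    if (fd >= 0)
        close(fd);
    return 0;
}
```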
  • Bell Labs, the Innovation Engine of Lucent
INTERNSHIPS FOR RESEARCH ENGINEERS/COMPUTER SCIENTISTS – BELL LABS DUBLIN (IRELAND) – BELL LABS STUTTGART (GERMANY)
Background: Bell Laboratories is a leading end-to-end systems and solutions research lab working in the areas of cloud and distributed computing, platforms for real-time big-data streaming and analytics, wireless networks, semantic data access, services-centric operations and thermal management. The lab is embedded in the great traditions of the Bell Labs systems research, internationally renowned as the birthplace of modern information theory, UNIX, the C/C++ programming languages, Awk, Plan 9 from Bell Labs, Inferno, the Spin formal verification tool, and many other systems, languages, and tools. We offer summer internships in our Cloud Computing team, spread across the facilities in Dublin (Ireland) and Stuttgart (Germany), focusing on research enabling future cloud …
Candidates are expected to have some experience related to the following topics:
• Operating systems, virtualisation and computer architectures.
• Software development skills (C/C++, Python).
• Computer networks and distributed systems.
• Optimisation techniques and algorithms.
• Probabilistic modelling of systems performance.
The following skills or interests are also desirable:
• Experience with OpenStack or OpenNebula or other cloud infrastructure management software.
• Kernel-level programming (drivers, scheduler, ...).
• Xen, KVM and Linux scripting/administration.
• Parallel and distributed systems programming (MPI, OpenMP, CUDA, MapReduce, StarSs, …).
  • Plan 9 from Bell Labs
Plan 9 from Bell Labs – "UNIX++ Anyone?" Anant Narayanan, Malaviya National Institute of Technology, FOSS.IN 2007.
What is it? Advanced technology transferred via mind-control from aliens in outer space. Humans are not expected to understand it. (Due apologies to lisperati.com.)
Yeah Right
• More realistically, a distributed operating system
• Designed by the creators of C, UNIX, AWK, UTF-8, TROFF etc.
• Widely acknowledged as UNIX's true successor
• Distributed under the terms of the Lucent Public License, which appears on the OSI's list of approved licenses, and is also considered free software by the FSF
What For? "Not only is UNIX dead, it's starting to smell really bad." – Rob Pike (circa 1991)
• UNIX was a fantastic idea...
• ...in its time – the 1970s
• Designed primarily as a "time-sharing" system, before the PC era
A closer look at Unix today: it works! But that doesn't mean we don't develop superior alternates.
GNU/Linux
• GNU's not UNIX, but it is!
• Linux was inspired by Minix, which was in turn inspired by UNIX
• GNU/Linux (mostly) conforms to ANSI and POSIX requirements
• GNU/Linux, on the desktop, is playing "catch-up" with Windows or Mac OS X, offering little in terms of technological innovation
Ok, and...
• Most of the "modern ideas" we use today were "bolted" onto an ancient underlying system
• Don't believe me? A "modern" UNIX terminal
Where did it go wrong?
• Early UNIX: "everything is a file" – brilliant!
• Only until people started adding "features" to the system...
Why you shouldn't be working with GNU/Linux
• The Socket API
• POSIX
• X11
• The bindings "rat-race"
• 300 system calls and counting...
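To make the "everything is a file" point concrete: on Plan 9 an outgoing TCP connection is made by reading and writing files under /net – which is what the dial() library routine wraps – rather than through a socket API. The sketch below is simplified (little error handling, POSIX-style calls for familiarity; native Plan 9 code uses its own C library), and the address is only an example.

```c
/* Plan 9 style networking: a TCP connection is set up through the file
 * system, not a socket API.  Simplified sketch of what dial() does. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char dir[64], path[128];

    /* 1. Clone a new connection; reading ctl yields its number, e.g. "4". */
    int ctl = open("/net/tcp/clone", O_RDWR);
    if (ctl < 0) { perror("open clone"); return 1; }
    int n = read(ctl, dir, sizeof dir - 1);
    if (n <= 0) { perror("read clone"); return 1; }
    while (n > 0 && (dir[n-1] == '\n' || dir[n-1] == ' '))
        n--;
    dir[n] = '\0';

    /* 2. Ask the connection to dial out by writing to its ctl file. */
    const char *req = "connect 192.0.2.1!80";    /* example address!port */
    if (write(ctl, req, strlen(req)) < 0) { perror("connect"); return 1; }

    /* 3. The data file is now an ordinary byte stream to the remote end. */
    snprintf(path, sizeof path, "/net/tcp/%s/data", dir);
    int data = open(path, O_RDWR);
    if (data >= 0) {
        const char *msg = "GET / HTTP/1.0\r\n\r\n";
        write(data, msg, strlen(msg));
        /* read(data, ...) works exactly as with any other file */
        close(data);
    }
    close(ctl);
    return 0;
}
```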
  • Security of OS-Level Virtualization Technologies: Technical Report
Security of OS-level virtualization technologies. Elena Reshetova (Intel OTC, Finland), Janne Karhunen (Ericsson, Finland), Thomas Nyman (University of Helsinki, Finland), N. Asokan (Aalto University and University of Helsinki, Finland).
Abstract. The need for flexible, low-overhead virtualization is evident on many fronts ranging from high-density cloud servers to mobile devices. During the past decade OS-level virtualization has emerged as a new, efficient approach for virtualization, with implementations in multiple different Unix-based systems. Despite its popularity, there has been no systematic study of OS-level virtualization from the point of view of security. In this report, we conduct a comparative study of several OS-level virtualization systems, discuss their security and identify some gaps in current solutions.
1 Introduction. During the past couple of decades the use of different virtualization technologies has been on a steady rise. Since IBM CP-40 [19], the first virtual machine prototype in 1966, many different types of virtualization and their uses have been actively explored both by the research community and by the industry. A relatively recent approach, which is becoming increasingly popular due to its light-weight nature, is Operating System-Level Virtualization, where a number of distinct user space instances, often referred to as containers, are run on top of a shared operating system kernel. A fundamental difference between OS-level virtualization and more established competitors, such as Xen hypervisor [24], VMWare [48] and Linux Kernel Virtual Machine [29] (KVM), is that in OS-level virtualization, the virtualized artifacts are global kernel resources, as opposed to hardware.
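The kernel primitive that the container systems compared in the report build on can be seen directly on Linux: clone() with namespace flags starts a child in its own PID, UTS, and mount namespaces. The sketch below is only the isolation mechanism in miniature; real container runtimes add cgroups, pivot_root into a private root file system, capability dropping, seccomp filters, and so on – exactly the layers where such security studies look for gaps.

```c
/* Minimal sketch of the OS-level virtualization primitive on Linux:
 * run a shell in new PID, UTS and mount namespaces.  Needs privileges
 * (or user namespaces); real container runtimes add much more. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg)
{
    (void)arg;
    sethostname("container", 9);          /* visible only in this UTS ns */
    printf("inside: pid=%d\n", getpid()); /* PID 1 in the new PID ns */
    execlp("/bin/sh", "sh", (char *)NULL);
    perror("execlp");
    return 1;
}

int main(void)
{
    pid_t pid = clone(child, child_stack + sizeof child_stack,
                      CLONE_NEWPID | CLONE_NEWUTS | CLONE_NEWNS | SIGCHLD,
                      NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}
```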
  • ;login: Spring 2016, Vol. 41, No. 1 (Filebench: A Flexible Framework for File System Benchmarking, Pivot Tracing, and more)
;login: Spring 2016, Vol. 41, No. 1
• Filebench: A Flexible Framework for File System Benchmarking – Vasily Tarasov, Erez Zadok, and Spencer Shepler
• Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems – Jonathan Mace, Ryan Roelke, and Rodrigo Fonseca
• Streaming Systems and Architectures: Kafka, Spark, Storm, and Flink – Jayant Shekhar and Amandeep Khurana
• BeyondCorp: Design to Deployment at Google – Barclay Osborn, Justin McWilliams, Betsy Beyer, and Max Saltonstall
Columns
• Red/Blue Functions: How Python 3.5's Async IO Creates a Division Among Functions – David Beazley
• Using RPCs in Go – Kelsey Hightower
• Defining Interfaces with Swagger – David N. Blank-Edelman
• Getting Beyond the Hero Sysadmin and Monitoring Silos – Dave Josephsen
• Betting on Growth vs Magnitude – Dan Geer
• Supporting RFCs and Pondering New Protocols – Robert G. Ferrell
Upcoming events
• NSDI '16: 13th USENIX Symposium on Networked Systems Design and Implementation, March 16–18, 2016, Santa Clara, CA, USA – www.usenix.org/nsdi16
• CoolDC '16: USENIX Workshop on Cool Topics on Sustainable Data Centers (co-located with NSDI '16), March 19, 2016 – www.usenix.org/cooldc16
• SREcon16, April 7–8, 2016, Santa Clara, CA, USA – www.usenix.org/srecon16
• USENIX Security '16: 25th USENIX Security Symposium, August 10–12, 2016, Austin, TX, USA – www.usenix.org/sec16
• WOOT '16: 10th USENIX Workshop on Offensive Technologies (co-located with USENIX Security '16), August 8–9, 2016, submissions due May 17, 2016 – www.usenix.org/woot16
• CSET '16: 9th Workshop on Cyber Security Experimentation and Test, August 8, 2016 – Submissions …
  • Comparative Analysis of the Software Used in Supercomputer Technologies
Problems of Information Technology, 2018, No. 1, 92–97. Kamran E. Jafarzade, DOI: 10.25045/jpit.v09.i1.10. Institute of Information Technology of ANAS, Baku, Azerbaijan. [email protected]
COMPARATIVE ANALYSIS OF THE SOFTWARE USED IN SUPERCOMPUTER TECHNOLOGIES
The article considers the classification of the types of supercomputer architectures, such as MPP, SMP and cluster, including software and application programming interfaces: MPI and PVM. It also offers a comparative analysis of software in the study of the dynamics of the distribution of operating systems (OS) over the last year of use in supercomputer technologies. In addition, the effectiveness of the use of CentOS software on the scientific network "AzScienceNet" is analyzed. Keywords: supercomputer, operating system, software, cluster, SMP architecture, MPP architecture, MPI, PVM, CentOS.
Introduction. A supercomputer is a computer with high computing performance compared to a regular computer. Supercomputers are often used for scientific and engineering applications that need to process very large databases or perform a large number of calculations. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of millions of instructions per second (MIPS). Since 2015, supercomputers performing up to a quadrillion FLOPS have started to be developed. Modern supercomputers consist of a large number of high-performance server computers, interconnected via a local high-speed backbone to achieve the highest performance [1]. Supercomputers were originally introduced in the 1960s and, over the following decades, bore the names or monograms of designers and companies such as Seymour Cray of Control Data Corporation (CDC) and Cray Research. By the end of the 20th century, massively parallel supercomputers with tens of thousands of available processors started to be manufactured.
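For readers who have not used the interfaces named above, the smallest useful MPI program looks like the sketch below (illustrative only; PVM exposes a different API). Each rank reports itself and rank 0 collects a reduced value; it would typically be built with an MPI compiler wrapper such as mpicc and launched with mpirun, though the exact commands vary between MPI implementations.

```c
/* Minimal MPI example: identify each process and reduce a value to
 * rank 0.  Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, local = 1, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("process %d of %d\n", rank, size);

    /* Every rank contributes `local`; rank 0 receives the sum. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}
```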
  • Distributed Operating Systems (overview slides)
Distributed Operating Systems – Overview. Ye olde operating systems: OpenMOSIX, OpenSSI, Kerrighed.
Distributed operating systems vs. grid computing: in a grid (Xgrid, SGE, Condor, Distcc, Boinc, GpuGrid) every node keeps its own user space on its own OS; in a distributed OS (Amoeba, Plan9, OpenMosix, OpenSSI, Kerrighed) the nodes present a single user space running over per-node kernels.
Problems with the grid: programs must utilize that library system, usually requiring separate programming; OS updates take place N times. Problems with a distributed OS: security issues – no SSL; considered more complicated to set up.
Important note: each node, even with distributed operating systems, boots a kernel. This kernel can vary depending on the role of the node and the overall architecture of the system.
Amoeba: Andrew S. Tanenbaum; earliest documentation 1986. What modern language was originally developed for use in Amoeba? Anyone heard of Orca? Supported platforms: Sun4c, Sun4m, 386/486, 68030, Sun 3/50, Sun 3/60.
Plan9: started development in the 1980's; released in 1992 (universities) and 1995 (general public). All devices are part of the filesystem. Runs on x86, MIPS, DEC Alpha, SPARC, PowerPC, ARM. Union directories, the basis of UnionFS. /proc first implemented here. Rio, the Plan9 window manager, showing faces(1), stats(8), acme(1) and many more things. Author: Ken Thompson.
Plan9 splits nodes into 3 distinct groupings: terminals, file servers, computational servers. It uses the "9P" protocol – a low-level byte protocol, not a block protocol – for everything from filesystems to printer communication.
Plan9 / Amoeba: both Plan9 and Amoeba make user-space groupings of nodes into specific categories.
  • A Multiserver User-Space Unikernel for a Distributed Virtualization System
A Multiserver User-space Unikernel for a Distributed Virtualization System. Pablo Pessolani, Departamento de Ingeniería en Sistemas de Información, Facultad Regional Santa Fe, UTN, Santa Fe, Argentina. [email protected]
Abstract – Nowadays, most Cloud applications are developed using Service Oriented Architecture (SOA) or MicroService Architecture (MSA). The scalability and performance of them is achieved by executing multiple instances of its components in different nodes of a virtualization cluster. Initially, they were deployed in Virtual Machines (VMs) but, they required enough computational, memory, network and storage resources to hold an Operating System (OS), a set of utilities, libraries, and the application component. By deploying hundreds of these application components, the resource requirements increase a lot. To minimize them, usually small OSs with small memory footprint are used. Another way to reduce the resource requirements is integrating the application components in a Unikernel. This article proposes a Unikernel called MUK, based on a multiserver OS, to be used as a tool to integrate Cloud application components.
…application components began to be deployed in Containers. Containers (and similar OS abstractions such as Jails [7] and Zones [8]) are isolated execution environments or domains in user-space to execute groups of processes. Although Containers share the same OS, they provide enough security, performance and failure isolation. As Containers demand fewer resources than VMs [9], they are a good choice to deploy swarms. Another option to reduce resource requirements is to use the application component embedded in a Unikernel [10, 11]. A Unikernel is defined as "specialized, single-address-space machine image constructed by using library operating system" [12]. A Unikernel is a technology which integrates monolithically network, storage, and file systems services …
  • Popcorn Linux: Enabling Efficient Inter-Core Communication in a Linux-Based Multikernel Operating System
Popcorn Linux: enabling efficient inter-core communication in a Linux-based multikernel operating system. Benjamin H. Shelton. Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Science in Computer Engineering. Binoy Ravindran, Christopher Jules White, Paul E. Plassman. May 2, 2013, Blacksburg, Virginia. Keywords: operating systems, multikernel, high-performance computing, heterogeneous computing, multicore, scalability, message passing. Copyright 2013, Benjamin H. Shelton.
(ABSTRACT) As manufacturers introduce new machines with more cores, more NUMA-like architectures, and more tightly integrated heterogeneous processors, the traditional abstraction of a monolithic OS running on a SMP system is encountering new challenges. One proposed path forward is the multikernel operating system. Previous efforts have shown promising results both in scalability and in support for heterogeneity. However, one effort's source code is not freely available (FOS), and the other effort is not self-hosting and does not support a majority of existing applications (Barrelfish). In this thesis, we present Popcorn, a Linux-based multikernel operating system. While Popcorn was a group effort, the boot layer code and the memory partitioning code are the author's work, and we present them in detail here. To our knowledge, we are the first to support multiple instances of the Linux kernel on a 64-bit x86 machine and to support more than 4 kernels running simultaneously. We demonstrate that existing subsystems within Linux can be leveraged to meet the design goals of a multikernel OS.
  • Facilitating Both Adoption of Linux Undergraduate Operating Systems Laboratories and Students’ Immersion in Kernel Code
SOFTICE: Facilitating both Adoption of Linux Undergraduate Operating Systems Laboratories and Students' Immersion in Kernel Code. Alessio Gaspar, Sarah Langevin, Joe Stanaback, Clark Godwin. University of South Florida, 3334 Winter Lake Road, 33803 Lakeland, FL, USA. [alessio | Sarah | joe | clark]@softice.lakeland.usf.edu, http://softice.lakeland.usf.edu/
Abstract: This paper discusses how Linux clustering and virtual machine technologies can improve undergraduate students' hands-on experience in operating systems laboratories. Like similar projects, SOFTICE relies on User Mode Linux (UML) to provide students with privileged access to a Linux system without creating security breaches on the hosting network. We extend such approaches in two aspects. First, we propose to facilitate adoption of Linux-based laboratories by using a load-balancing cluster made of recycled classroom PCs to remotely serve access to virtual machines. Secondly, we propose a new approach for students to interact with the kernel code.
…courses, frequent re-installations… Because of this, virtual machines, such as the one provided by the User Mode Linux (UML) project [4][5][6], are becoming more common as a means to enable students to safely and conveniently tinker with kernel internals. Security concerns are therefore addressed and re-installation processes considerably simplified. However, one potential problem still remains; we have been assuming all along the availability of Linux workstations in the classrooms. Let's consider an instructor is interested in a Linux-based laboratory at an institution using exclusively Windows workstations. Chances are that the classroom setup will be assigned to the instructor or that new Linux-qualified personnel will be hired. In both cases, the adoption of such laboratories …
  • Elasticity Thanks to Kerrighed SSI and XtreemFS: Elastic Computing and Storage Infrastructure Using Kerrighed SSI and XtreemFS
ELASTICITY THANKS TO KERRIGHED SSI AND XTREEMFS – Elastic Computing and Storage Infrastructure using Kerrighed SSI and XtreemFS. Alexandre Lissy (Laboratoire d'Informatique, Université de Tours, 64 avenue Jean Portalis, Tours, France), Stéphane Laurière (Mandriva, 8 rue de la Michodière, Paris, France), Julien Hadba (Laboratoire d'Informatique, Université de Tours, 64 avenue Jean Portalis, Tours, France).
Keywords: Kerrighed, Cloud, Infrastructure, Elastic, XtreemFS, Storage.
Abstract: One of the major features of Cloud Computing is its elasticity, thus allowing one to have a moving infrastructure at a lower cost. Achieving this elasticity is the job of the cloud provider, whether it is IaaS, PaaS or SaaS. On the other hand, Single System Image has a hotplug capability. It allows to "transparently" dispatch, from a user perspective, the workload on the whole cluster. In this paper, we study whether and how we could build a Cloud infrastructure, leveraging tools from the SSI field. Our goal is to provide, thanks to Kerrighed for the computing power aggregation and unified system view, some IaaS and to exploit XtreemFS capabilities to provide performant and resilient storage for the associated Cloud infrastructure.
1 INTRODUCTION: When talking about Cloud, John McCarthy's (Garfinkel, 1999) quotation at the MIT's 100th year celebration is often used: "computation may someday be organized as a public utility".
…SSIs are distributed systems running on top of a network of computers, that appear as one big unique computer to the user. Several projects aimed at providing such a system. Among them, we can cite MOSIX (Barak and Shiloh, 1977), openMosix (Bar, …