Cluster Storage for Commodity Computation


UCAM-CL-TR-690
Technical Report
ISSN 1476-2986
Number 690

University of Cambridge Computer Laboratory

Russell Glen Ross
June 2007

15 JJ Thomson Avenue
Cambridge CB3 0FD
United Kingdom
phone +44 1223 763500
http://www.cl.cam.ac.uk/

© 2007 Russell Glen Ross

This technical report is based on a dissertation submitted December 2006 by the author for the degree of Doctor of Philosophy to the University of Cambridge, Wolfson College.

Technical reports published by the University of Cambridge Computer Laboratory are freely available via the Internet: http://www.cl.cam.ac.uk/techreports/

Summary

Standards in the computer industry have made basic components and entire architectures into commodities, and commodity hardware is increasingly being used for the heavy lifting formerly reserved for specialised platforms. Now software and services are following. Modern updates to virtualization technology make it practical to subdivide commodity servers and manage groups of heterogeneous services using commodity operating systems and tools, so services can be packaged and managed independent of the hardware on which they run. Computation as a commodity is soon to follow, moving beyond the specialised applications typical of today's utility computing.

In this dissertation, I argue for the adoption of service clusters—clusters of commodity machines under central control, but running services in virtual machines for arbitrary, untrusted clients—as the basic building block for an economy of flexible commodity computation. I outline the requirements this platform imposes on its storage system and argue that they are necessary for service clusters to be practical, but are not found in existing systems. Next I introduce Envoy, a distributed file system for service clusters.
In addition to meeting the needs of a new environment, Envoy introduces a novel file distribution scheme that organises metadata and cache management according to runtime demand. In effect, the file system is partitioned and control of each part given to the client that uses it the most; that client in turn acts as a server with caching for other clients that require concurrent access. Scalability is limited only by runtime contention, and clients share a perfectly consistent cache distributed across the cluster. As usage patterns change, the partition boundaries are updated dynamically, with urgent changes made quickly and more minor optimisations made over a longer period of time.

Experiments with the Envoy prototype demonstrate that service clusters can support cheap and rapid deployment of services, from isolated instances to groups of cooperating components with shared storage demands.

Acknowledgements

My advisor, Ian Pratt, helped in countless ways to set the course of my research and see it through, demanding rigour and offering many helpful suggestions. Major inflection points in my work always seemed to follow meetings with him. Steve Hand also helped me throughout my time at Cambridge, offering sound advice when needed and sharing his mastery of the research game. Jon Crowcroft read drafts of my dissertation, even taking time from his Christmas holiday to send me feedback.

I was fortunate to be in the Systems Research Group during a time when it was full of clever and interesting people doing good research. Andy Warfield, Alex Ho, Eva Kalyvianaki, Christian Kreibich, and Anil Madhavapeddy have been particularly rich sources of conversation and camaraderie.

I might not have survived the whole experience without Nancy, whose love and constant support provided a foundation for everything else.

Table of Contents

1 Introduction
  1.1 Contributions
  1.2 Outline
2 Background
  2.1 Commodity computing
    2.1.1 Commodity big iron
    2.1.2 Computation for hire
  2.2 Virtualization
    2.2.1 Virtual machine monitors
    2.2.2 XenoServers
  2.3 Storage systems
    2.3.1 Local file systems
    2.3.2 Client-server file systems
    2.3.3 Serverless file systems
    2.3.4 Wide-area file systems
    2.3.5 Cluster storage systems
    2.3.6 Virtual machines
  2.4 Summary
3 Service Clusters
  3.1 Commodity computation
    3.1.1 Flexible commodity computation
  3.2 Service containers
    3.2.1 Decoupling hardware and software
    3.2.2 Decoupling unrelated services
    3.2.3 Supporting commodity tools
  3.3 Service clusters
    3.3.1 Economies of scale
    3.3.2 Heterogeneous workloads
    3.3.3 Central management
    3.3.4 Supporting a software ecosystem
  3.4 Storage for service clusters
    3.4.1 Running in a cluster
    3.4.2 Supporting heterogeneous clients
    3.4.3 Local impact
  3.5 Summary
4 The Envoy File System
  4.1 Assumptions
  4.2 Architecture
    4.2.1 Distribution
    4.2.2 Storage layer
    4.2.3 Envoy layer
    4.2.4 Caching
    4.2.5 Data paths for typical requests
  4.3 File system images
    4.3.1 Security
    4.3.2 Forks and snapshots
    4.3.3 Deleting snapshots
  4.4 Territory management
    4.4.1 Design principles
    4.4.2 Special cases
    4.4.3 Dynamic boundary changes
  4.5 Recovery
    4.5.1 Prerequisites
    4.5.2 Recovery process
    4.5.3 Special cases
  4.6 Summary
5 The Envoy Prototype
  5.1 Scope and design coverage
    5.1.1 Implementation shortcuts
  5.2 The 9p protocol
    5.2.1 Alternatives
    5.2.2 Mapping Envoy to 9p
  5.3 Storage service
    5.3.1 Protocol
    5.3.2 Implementation
    5.3.3 Limitations
  5.4 Envoy service
    5.4.1 Protocol
    5.4.2 Data structures
    5.4.3 Freezing and thawing
    5.4.4 Read operations
    5.4.5 Write operations
    5.4.6 Modifying territory ownership
    5.4.7 Limitations
  5.5 Summary
6 Evaluation
  6.1 Methodology
    6.1.1 Assumptions and goals
    6.1.2 Test environment
  6.2 Independent clients
    6.2.1 Performance
    6.2.2 Architecture
    6.2.3 Cache
  6.3 Shared images
    6.3.1 Dynamic behaviour
  6.4 Image operations
    6.4.1 Forks
    6.4.2 Snapshots
    6.4.3 Service deployment
  6.5 Summary
7 Conclusion
  7.1 Future Work
    7.1.1 Caching
    7.1.2 Territory partitioning
    7.1.3 Read-only territory grants
    7.1.4 Storage-informed load balancing
  7.2 Summary
References

Chapter 1

Introduction

This dissertation presents the design and implementation of a distributed file system. The design is motivated by the requirements of a new environment that emerges at the intersection of three trends: the renaissance of machine virtualization as a tool for hardware management, the increasing use of commodity hardware and software for server applications, and the emergence of computation as a commodity service.

Providing computing services for hire is nothing new, but previous offerings have always been on a limited scale, restricted to a specific type of application or tied to the tool stack of a single vendor. Web and email hosting services are commonplace, and many applications like tax preparation software and payroll management are available as hosted services. Numerous frameworks for hosted computing have been proposed [Ami98, Vah98, Tul98] and commercially implemented [Ama06, Sun06, Kal04], but these require using a specific distributed framework or middleware that introduces a "semantic bottleneck" between the application and the hosting environment [Ros00a]. The only way to make hosted computing a commodity is to convince vendors and customers to agree on a common set of interfaces so that applications can be easily moved from one host to the next and an open marketplace can emerge.

Seeking universal agreement on a new application environment is a daunting task, but one that can be circumvented by using an existing platform that already enjoys