
Proceedings of the Linux Symposium
July 14–16th, 2014
Ottawa, Ontario, Canada

Contents

Btr-Diff: An Innovative Approach to Differentiate BtrFs Snapshots
  N. Mandliwala, S. Pimpale, N. P. Singh, G. Phatangare

Leveraging MPST in Linux with Application Guidance to Achieve Power and Performance Goals
  M.R. Jantz, K.A. Doshi, P.A. Kulkarni, H. Yun

CPU Time Jitter Based Non-Physical True Random Number Generator
  S. Müller

Policy-extendable LMK filter framework for embedded system
  K. Baik, J. Kim, D. Kim

Scalable Tools for Non-Intrusive Performance Debugging of Parallel Linux Workloads
  R. Schöne, J. Schuchart, T. Ilsche, D. Hackenberg

Veloces: An Efficient I/O Scheduler for Solid State Devices
  V.R. Damle, A.N. Palnitkar, S.D. Rangnekar, O.D. Pawar, S.A. Pimpale, N.O. Mandliwala

Dmdedup: Device Mapper Target for Data Deduplication
  V. Tarasov, D. Jain, G. Kuenning, S. Mandal, K. Palanisami, P. Shilane, S. Trehan, E. Zadok

SkyPat: C++ Performance Analysis and Testing Framework
  P.H. Chang, K.H. Kuo, D.Y. Tsai, K. Chen, Luba W.L. Tang

Computationally Efficient Multiplexing of Events on Hardware Counters
  R.V. Lim, D. Carrillo-Cisneros, W. Alkowaileet, I.D. Scherson

The maxwell(8) random number generator
  S. Harris

Popcorn: a replicated-kernel OS based on Linux
  A. Barbalace, B. Ravindran, D. Katz

Conference Organizers
  Andrew J. Hutton, Linux Symposium
  Emilie Moreau, Linux Symposium

Proceedings Committee
  Ralph Siemsen

With thanks to
  John W. Lockhart, Red Hat
  Robyn Bergeron

Authors retain copyright to all submitted papers, but have granted unlimited redistribution rights to all as a condition of submission.


Btr-Diff: An Innovative Approach to Differentiate BtrFs Snapshots

Nafisa Mandliwala, Pune Institute of Computer Technology, [email protected]
Swapnil Pimpale, PICT LUG, [email protected]
Narendra Pal Singh, Pune Institute of Computer Technology, [email protected]
Ganesh Phatangare, PICT LUG, [email protected]

Abstract

Efficient storage and fast retrieval of data has always been of utmost importance. The BtrFs file system is a copy-on-write (COW) based B-tree file system that has built-in support for snapshots and is considered a potential replacement for the EXT4 file system. It is designed specifically to address the need to scale, manage and administer large storage configurations of Linux systems. Snapshots are useful as local online “copies” of the file system that can be referred back to, used to implement a form of deduplication, or used to take a full backup of the file system. The ability to compare these snapshots becomes crucial for system administrators as well as end users.

The existing snapshot management tools perform directory-based comparison at the block level in user space. This approach is generic and is not suitable for B-tree based file systems that are designed to cater to large storage. Simply traversing the directory structure is slow and only gets slower as the file system grows. With the BtrFs send/receive mechanism, the filesystem can be instructed to calculate the set of changes made between two snapshots and serialize them to a file.

Our objective is to leverage the send part in the kernel to implement a new mechanism to list all the files that have been added, removed, changed or had their metadata changed in some way. The proposed solution takes advantage of the BtrFs B-tree structure and its powerful snapshot capabilities to speed up the tree traversal and detect changes in snapshots based on inode values. In addition, our tool can also detect changes between a snapshot and an explicitly specified parent. This lends itself to daily incremental backups of the file system, and can easily be integrated with existing snapshot management tools.

1 Introduction

In current times, where data is critical to every organization, its appropriate storage and management is what brings in value. Our approach aims at taking advantage of the BtrFs architectural features that are made available by design. This facilitates speedy tree traversal to detect changes in snapshots. The send ioctl runs the tree comparison algorithm in kernel space using the on-disk metadata format (rather than the abstract stat format exported to user space), which includes the ability to recognize when entire sub-trees can be skipped for comparison. Since the whole comparison algorithm runs in kernel space, it is clearly superior to existing user-space snapshot management tools such as Snapper.¹

Snapper uses the diff algorithm with a few more optimizations to avoid comparing files that have not changed. This approach requires all of the metadata for the two trees being compared to be read. The most I/O-intensive part is not comparing the files but generating the list of changed files: it needs to list all the files in the tree and stat them to see whether they have changed between the snapshots. The performance of such an algorithm degrades drastically as changes to the file system grow. This is mainly because Snapper deploys an algorithm that is not specifically designed to run on COW-based file systems.

The rest of the paper is organized as follows: Section 2 explains the BtrFs internal architecture, associated data structures, and features. The working of the send-receive code and the diff commands used is discussed in Section 3. Section 4 covers the design of the proposed solution, the algorithm used and its implementation. This is followed by benchmarking and its analysis in Section 5. Section 6 states applications of the send-receive code and diff generation. Section 7 lists possible feature additions to Btr-diff. Finally, Section 8 summarizes the conclusions of the paper and is followed by references.

¹ The description takes into consideration the older version of Snapper. The newer version, however, does use the send-receive code for diff generation.
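To make the contrast drawn above concrete, the sketch below compares the kind of user-space traversal a Snapper-like tool effectively performs with delegating the change calculation to the kernel via btrfs send. The snapshot paths and the output file name are placeholders chosen for illustration, not values taken from the paper.

  # User-space approach: walk both snapshot trees and compare every file;
  # the cost grows with the total number of files, not the number of changes.
  diff -qr /mnt/data/.snapshots/snap-old /mnt/data/.snapshots/snap-new

  # Kernel-space approach: let the send ioctl compute the delta between two
  # read-only snapshots and serialize it to a stream.
  btrfs send -p /mnt/data/.snapshots/snap-old \
      /mnt/data/.snapshots/snap-new > /tmp/snap-delta.stream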
2 BtrFs File System

The BtrFs file system is scalable to a maximum file/file-system size of up to 16 exabytes. Although it exposes a plethora of features w.r.t. scalability, data integrity, data compression, SSD optimizations etc., this paper focuses only on the ones relevant to snapshot storage, comparison and retrieval.

The file system uses the copy-on-write B-tree as its generic data structure. B-trees are used as they provide logarithmic time for common operations like insertion, deletion, sequential access and search. This COW-friendly B-tree, in which the leaf node linkages are absent, was originally proposed by Ohad Rodeh. In such trees, writes are never made in place; instead, the modified data is written to a different location and the corresponding metadata is updated. BtrFs thus has built-in support for snapshots, which are point-in-time copies of entire subvolumes, and for rollbacks.

2.1 Data Structures

Figure 1: The Terminal Node Structure (src: BtrFs Design Wiki)
Figure 2: BtrFs Data Structures (src: BtrFs Design Wiki)

As seen in Figure 1, the BtrFs architecture internally consists of three data structures, namely blockheader, key and item. The blockheader contains checksums, the file-system-specific uuid, the level at which the block is present in the tree, etc. The key, which defines the order in the tree, has the fields objectid, type and offset. Each subvolume has its own set of object ids. The type field indicates the kind of item it is; the prominent ones are inode_item, inode_ref, xattr_item, orphan_item, dir_log_item, dir_item, dir_index, extent_data, root_item, root_ref, extent_item, extent_data_ref, dev_item, chunk_item, etc. The offset field is dependent on the kind of item.

The internal nodes contain [key, block pointer] pairs, whereas the leaf nodes contain a block header, an array of fixed-size items and a data area. Items and data grow towards each other. A typical leaf node consists of several items; an item's offset and size tell us where to find its data in the leaf (relative to the start of the data area). Typical data structures are shown in Figure 2.
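As an illustrative aside, assuming a recent btrfs-progs and a placeholder device path, the on-disk trees and their items can be printed directly; every item in the output carries the (objectid, type, offset) key described above.

  # Dump the B-trees of a BtrFs filesystem; each printed item shows its
  # (objectid, type, offset) key followed by the item data.
  btrfs inspect-internal dump-tree /dev/sdb1 | less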
2.2 Subvolumes and Snapshots

The BtrFs subvolume is a named B-tree that holds files and directories and is the smallest unit that can be snapshotted. A point-in-time copy of such a subvolume is called a snapshot. A reference count is associated with every subvolume and is incremented on snapshot creation. A snapshot stores the state of a subvolume and can be created almost instantly using the following command:

  btrfs subvolume snapshot [-r] <source> [<dest>/]<name>

Such a snapshot occupies no disk space at creation. Snapshots can be used to back up subvolumes, which can later be used for restore or recovery operations.

3 Send-Receive Code

Snapshots are primarily used for data backup and recovery. Considering the size of file-system-wide snapshots, manually detecting how the file system has changed between two given snapshots is a tedious task. Developing a clean mechanism to showcase differences between two snapshots therefore becomes extremely important in snapshot management.

The diff command is used for differentiating any two files, but it uses a text-based search, which is time-consuming. [...]

The send command is given the subvolume ID corresponding to the snapshot of interest. It then calculates changes between the two given snapshots. The command sends the subvolume specified by <subvol> to stdout. By default, this will send the whole subvolume. The following are some more options for output generation (a usage sketch follows the list):

• The operation can take a list of snapshot / subvolume IDs and generate a combined file for all of them. The parent snapshot can be specified explicitly.
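As a rough illustration of these options (paths and file names are placeholders), an explicit parent can be passed with -p, several snapshots can be serialized into one stream file with -f, and the stream can later be replayed with btrfs receive:

  # Send only the changes between an explicit parent snapshot and a newer
  # read-only snapshot, writing the stream to a file.
  btrfs send -p /mnt/data/.snapshots/snap1 -f /backup/snap1-to-snap2.stream \
      /mnt/data/.snapshots/snap2

  # Combine several snapshots into a single stream file.
  btrfs send -f /backup/combined.stream \
      /mnt/data/.snapshots/snap2 /mnt/data/.snapshots/snap3

  # Recreate the snapshots at another location from the serialized stream.
  btrfs receive -f /backup/combined.stream /mnt/backup/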