Block-Level Virtualization: How Far Can We Go?

Michail D. Flouris†, Stergios V. Anastasiadis∗ and Angelos Bilas‡
Institute of Computer Science (ICS)
Foundation for Research and Technology - Hellas
P.O. Box 1385, Heraklion, GR 71110, Greece
{flouris, stergios, bilas}@ics.forth.gr

† Also a graduate student at the Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4, Canada.
∗ Also with the Department of Electronic and Computer Engineering, Technical University of Crete, Chania, GR 73100, Greece.
‡ Also with the Department of Computer Science, University of Crete, P.O. Box 2208, Heraklion, GR 71409, Greece.

Abstract

In this paper, we present our vision of building large-scale storage systems from commodity components in a data center. For reasons of efficiency and flexibility, we advocate maintaining sophisticated data management functions behind a block-based interface. We anticipate that such an interface can be customized to meet the diverse needs of end-users and applications through the extensible storage system architecture that we propose.

1. Introduction

Data storage is becoming increasingly important as larger amounts of data assets need to be stored for archival or online-processing purposes. Increasing requirements for storage efficiency and cost-effectiveness introduce the need for scalable storage systems in a data center that consolidate system resources into a single system image. Consolidation enables continuous system operation uninterrupted by device faults, easier storage management, and ultimately lower cost of purchase and maintenance.

Our target is primarily a data-center environment, where application servers and user workstations are able to access the storage system through potentially different network paths and protocols. In this environment, although storage consolidation has the potential for lower costs and more efficient storage management, its success depends on the ability of the system architecture to face three main challenges:

1. The first significant challenge is the platform, that is, how to provide system scalability with minimal hardware costs.

2. The second crucial challenge is the software infrastructure required for (i) virtualizing, (ii) sharing, and (iii) managing the physical storage in a data center. This includes the virtualization capability to mask applications from each other and to facilitate storage management tasks. It also includes the significant objective of lowering administration costs through automated administration. Furthermore, the software infrastructure should provide monitoring and dynamic reconfiguration mechanisms to allow the definition and implementation of high-level quality-of-service policies.

3. Finally, the third challenge is efficient storage sharing between remote data centers, enabling global-scale storage distribution.

Next we explore each of these challenges in more detail.

1.1. Scalable, Low-cost Platform

Today, commercial storage systems mostly use expensive custom-made hardware components and proprietary protocols. We advocate building scalable, low-cost storage systems from storage clusters composed of common off-the-shelf components. Current commodity computers and networks have (or will soon have) significant computing power and network performance at a fraction of the cost of custom storage-specific hardware (controllers or networks). Using such components, we can build low-cost storage clusters with enough capacity and performance to meet practically any storage need, and scale them economically. This idea is not new; it was proposed and predicted a few years ago by several researchers and vendors [8, 9]. We think that the necessary commodity technologies are maturing now with the emergence of new standards, protocols, and interconnects, such as Serial ATA, PCI-X/Express/AS and iSCSI. For the rest of this discussion, we assume that future storage will be based on clusters built of commodity PCs (e.g. x86, SATA disks) and commodity interconnects (e.g. gigabit Ethernet, PCI-Express/AS) accessible through standard protocols (e.g. SCSI).

A commodity PC with PCI-X or PCI-Express can currently accommodate at least 2 TBytes of storage using commodity I/O controllers with 8 or 16 SATA disks each. The PCI-X bus can support aggregate throughput in excess of 800 MBytes/s to the disks, while gigabit Ethernet NICs and switches are an inexpensive means of interconnecting the storage nodes. The nodes can also run an existing high-performance operating system, such as Linux or FreeBSD, at very low cost. Similarly, interconnect technology that will allow efficient scaling is currently being developed (e.g. PCI-Express/AS). Thus, we assume that the commodity components needed to build such a storage platform are, or soon will be, available at low cost. We believe that the next missing component for building cost-effective storage systems is addressing challenge 2. This will provide the software infrastructure that will run on top, integrating storage resources into a system that provides flexibility, sharing, and automated management. This is the main motivation for our work.
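As a rough back-of-the-envelope check of these figures (using typical commodity parts of that period, not numbers taken from the paper): with 250-GByte SATA drives, one 8-port controller already reaches 8 × 250 GB = 2 TB, and a 16-port controller twice that. On the bandwidth side, 16 disks sustaining roughly 50 MB/s each aggregate to about 800 MB/s, just inside the theoretical 1,064 MB/s of a 64-bit/133-MHz PCI-X bus, while a gigabit Ethernet NIC at about 125 MB/s then becomes the per-node bottleneck for remote traffic.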
1.2. Software Infrastructure

A consolidated storage system requires increased flexibility to serve multiple applications and satisfy diverse needs in both storage management and data access. Moreover, as we accumulate massive amounts of storage resources behind a single system image, the automation of administration tasks becomes crucial for sustaining efficient and cost-effective operation. Continuous, unobtrusive measurement of device efficiency and user-perceived performance should drive data reorganization and device reconfiguration towards load-balanced system operation. Finally, policy-based data administration should meet pre-agreed performance objectives for individual applications. These are the challenges for the software infrastructure in a data center. We argue that such requirements can be satisfied through a virtualization infrastructure.

[...] in any significant way. Even though some vendors claim that they can support such sharing, efficient virtualization remains an elusive target in large-scale systems because of the software complexity. Maintaining consistency is complex and usually introduces significant overheads in the system.

1.3. Global-scale Sharing

Issues in this area include interoperability, hiding network latencies, and achieving high throughput in networks with large delay-bandwidth products. We consider this area outside the scope of our work, but we believe that the virtualization infrastructure we propose facilitates global storage connectivity and interoperability between data centers.

The rest of this paper is organized as follows. Section 2 presents our position on virtualization issues, while Section 3 discusses our approach to block-level virtualization. Finally, Section 4 discusses related work and Section 5 draws our conclusions.

2. Our Approach

Our goal in this work is to address the software infrastructure challenges by exploring storage virtualization solutions. The term virtualization is used here to signify both an indirection layer between physical resources and virtual volumes, and resource sharing. Next we describe the directions we have taken in our approach.

The term "virtualization" has been used to describe mainly two concepts: indirection and sharing. The first refers to the addition of an indirection layer between the physical resources and the logical devices accessed by the applications. Such a layer allows administrators to create, and applications to access, various types of virtual volumes that are mapped to physical devices, while offering higher-level semantics through multiple layers of mappings. This kind of virtualization has several advantages: (i) it enables many useful management and reconfiguration operations, such as non-disruptive data reorganization and migration during system operation; (ii) it adds flexibility to the system configuration, since physical storage space can be arbitrarily mapped to virtual volumes (for example, a virtual volume can be defined as any combination of basic operators, such as aggregation, partitioning, striping, or mirroring); (iii) it allows the implementation of useful functionality, such as fault tolerance with RAID levels, backup with volume snapshots, or encryption with data filters; (iv) it allows extensibility, that is, the addition of new functionality to the system, achieved by implementing new virtualization modules with the desired functions and incrementally adding them to the system.
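To make the composition of operators concrete, the following minimal sketch is our illustration, not the authors' implementation; all identifiers (blkdev, mkram, mkpair, ...) are hypothetical. Every layer, physical or virtual, exports the same fixed-size-block read/write interface, so striping and mirroring modules stack into a RAID-10 virtual volume:

```c
/* Sketch: composable block-level operators behind one uniform interface. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BSIZE 512                 /* fixed block size, as in SCSI */

typedef struct blkdev {
    int (*read)(struct blkdev *, uint64_t blkno, void *buf);
    int (*write)(struct blkdev *, uint64_t blkno, const void *buf);
    void *priv;                   /* per-layer state */
} blkdev;

/* Leaf device: an in-memory stand-in for one SATA disk. */
typedef struct { uint8_t *mem; uint64_t nblocks; } ram;

static int ram_rd(blkdev *d, uint64_t b, void *buf) {
    ram *r = d->priv;
    if (b >= r->nblocks) return -1;
    memcpy(buf, r->mem + b * BSIZE, BSIZE);
    return 0;
}
static int ram_wr(blkdev *d, uint64_t b, const void *buf) {
    ram *r = d->priv;
    if (b >= r->nblocks) return -1;
    memcpy(r->mem + b * BSIZE, buf, BSIZE);
    return 0;
}

/* Two-child operators share one state struct. */
typedef struct { blkdev *c[2]; } pair;

/* Striping (RAID-0): block b maps to child b%2, block b/2. */
static int stripe_rd(blkdev *d, uint64_t b, void *buf) {
    pair *p = d->priv;
    return p->c[b % 2]->read(p->c[b % 2], b / 2, buf);
}
static int stripe_wr(blkdev *d, uint64_t b, const void *buf) {
    pair *p = d->priv;
    return p->c[b % 2]->write(p->c[b % 2], b / 2, buf);
}

/* Mirroring (RAID-1): write both replicas, read whichever answers. */
static int mirror_rd(blkdev *d, uint64_t b, void *buf) {
    pair *p = d->priv;
    if (p->c[0]->read(p->c[0], b, buf) == 0) return 0;
    return p->c[1]->read(p->c[1], b, buf);
}
static int mirror_wr(blkdev *d, uint64_t b, const void *buf) {
    pair *p = d->priv;
    int e0 = p->c[0]->write(p->c[0], b, buf);
    int e1 = p->c[1]->write(p->c[1], b, buf);
    return (e0 == 0 && e1 == 0) ? 0 : -1;
}

static blkdev *mkram(uint64_t nblocks) {
    ram *r = malloc(sizeof *r);
    blkdev *d = malloc(sizeof *d);
    r->mem = calloc(nblocks, BSIZE);
    r->nblocks = nblocks;
    d->read = ram_rd; d->write = ram_wr; d->priv = r;
    return d;
}
static blkdev *mkpair(blkdev *c0, blkdev *c1,
                      int (*rd)(blkdev *, uint64_t, void *),
                      int (*wr)(blkdev *, uint64_t, const void *)) {
    pair *p = malloc(sizeof *p);
    blkdev *d = malloc(sizeof *d);
    p->c[0] = c0; p->c[1] = c1;
    d->read = rd; d->write = wr; d->priv = p;
    return d;
}

int main(void) {
    /* A RAID-10 virtual volume: striping over two mirrored pairs. */
    blkdev *m0 = mkpair(mkram(1024), mkram(1024), mirror_rd, mirror_wr);
    blkdev *m1 = mkpair(mkram(1024), mkram(1024), mirror_rd, mirror_wr);
    blkdev *vol = mkpair(m0, m1, stripe_rd, stripe_wr);

    uint8_t out[BSIZE] = "hello, virtual volume", in[BSIZE];
    if (vol->write(vol, 7, out) != 0 || vol->read(vol, 7, in) != 0 ||
        memcmp(in, out, BSIZE) != 0) { puts("FAILED"); return 1; }
    puts("stripe-over-mirror volume: ok");
    return 0;
}
```

Because every layer exports the same two operations, a new module (a snapshot, encryption, or migration operator, say) can be inserted anywhere in the stack without modifying the layers above or below it; this uniformity is what makes the incremental extensibility of point (iv) possible.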
Currently, the "indirection" virtualization mechanisms of most [...]

2.1. Block-level functionality

We advocate block-level virtualization (we use the indirection notion here), as opposed to file-level virtualization, for the following reasons. First, certain functionality, such as compression or encryption, may be simpler and more efficient to provide on unstructured, fixed-size data blocks than on variable-size files. Second, storage hardware at the back-end has evolved significantly from simple disks and fixed controllers to powerful storage nodes [1, 8, 9] that offer block-level storage to multiple applications over a storage area network [16, 17]. Block-level virtualization modules can exploit the processing capacity of these storage nodes, where filesystems (running mainly on the application servers and the clients) cannot. Third, storage management (e.g. migrating data, backup, [...]
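The first reason above can be made concrete with a small sketch of a block-level encryption filter; again this is our illustration, not the paper's design. Because blocks are fixed-size and independently addressed, the transform is keyed only by the secret and the block number and needs no per-file state. The XOR keystream below merely stands in for a real cipher (e.g. AES in a tweaked mode) and is not secure:

```c
/* Sketch: per-block encryption filter over fixed-size blocks.
 * Hypothetical names; the XOR "cipher" is a stand-in, NOT secure. */
#include <stdint.h>
#include <stdio.h>

#define BSIZE 512

/* Keystream byte derived from (key, block number, offset).  A real
 * filter would apply a block cipher here. */
static uint8_t pad(uint64_t key, uint64_t blkno, unsigned i) {
    uint64_t x = key ^ (blkno * 0x9E3779B97F4A7C15ULL) ^ i;
    x ^= x >> 33; x *= 0xFF51AFD7ED558CCDULL; x ^= x >> 33;
    return (uint8_t)x;
}

/* XOR is its own inverse: the same routine serves the write path
 * (encrypt) and the read path (decrypt). */
static void crypt_block(uint64_t key, uint64_t blkno, uint8_t buf[BSIZE]) {
    for (unsigned i = 0; i < BSIZE; i++)
        buf[i] ^= pad(key, blkno, i);
}

int main(void) {
    uint8_t blk[BSIZE] = "secret payload";
    crypt_block(0xC0FFEE, 42, blk);   /* encrypt on the write path */
    printf("ciphertext[0] = 0x%02x\n", blk[0]);
    crypt_block(0xC0FFEE, 42, blk);   /* decrypt on the read path  */
    printf("recovered: %s\n", blk);
    return 0;
}
```

A file-level filter, by contrast, must track variable file sizes and rewrite data as files grow or shrink; the fixed block grid removes that bookkeeping entirely.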