Hardware-Driven Evolution in Storage Software by Zev Weiss

Total Pages: 16

File Type: pdf, Size: 1020 KB

Hardware-Driven Evolution in Storage Software

by Zev Weiss

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN–MADISON, 2018.

Date of final oral examination: June 8, 2018

The dissertation is approved by the following members of the Final Oral Committee:
Andrea C. Arpaci-Dusseau, Professor, Computer Sciences
Remzi H. Arpaci-Dusseau, Professor, Computer Sciences
Michael M. Swift, Professor, Computer Sciences
Karthikeyan Sankaralingam, Professor, Computer Sciences
Johannes Wallmann, Associate Professor, Mead Witter School of Music

© Copyright by Zev Weiss 2018. All Rights Reserved.

To my parents, for their endless support, and my cousin Charlie, one of the kindest people I’ve ever known.

Acknowledgments

I have taken what might be politely called a “scenic route” of sorts through grad school. While Ph.D. students more focused on a rapid graduation turnaround time might find this regrettable, I am glad to have done so, in part because it has afforded me the opportunities to meet and work with so many excellent people along the way. I owe debts of gratitude to a large cast of characters:

To my advisors, Andrea and Remzi Arpaci-Dusseau. It is one of the most common pieces of wisdom imparted on incoming grad students that one’s relationship with one’s advisor (or advisors) is perhaps the single most important factor in whether these years of your life will be pleasant or unpleasant, and I feel exceptionally fortunate to have ended up with the advisors that I’ve had. I have always been granted plenty of independence, but also given the guidance and suggestions to point me in a useful direction when I’ve gotten stuck. Andrea’s thorough, thoughtful feedback on paper drafts (and of course this very document) has been invaluable. Her CS402 program was a rewarding, enjoyable experience (even though I unfortunately missed the semesters when she was actually around running it herself), and provides a wonderful service to Madison schools and their students. I am glad Remzi happened to notice my shenanigans in 537 projects (and deem them acceptable, even when they were horrifying), and in spite of them still provide me the opportunity to teach the same course some years later. I will miss our weekly meetings, especially the ones that veered off course.

To Ed Almasy and Rachael Bower, fearless co-directors of the Internet Scout Research Group, who welcomed me into their wonderful work environment, where I happily remained through nearly my entire career as a grad student.

To Rustam Lalkaka, officemate and co-sysadmin at Scout, who perhaps unintentionally ended up responsible for a large part of how the rest of my time here at UW-Madison has unfolded by suggesting that I take CS537 from Remzi. (“Who?” I said. “He’s cool, you won’t regret it”, he replied, prophetically.)

To Corey Halpin, with whom I have shared countless lengthy and enjoyable conversations on many matters, but most often a shared (excellent!) taste in software.

To Johannes Wallmann, for guiding me through my music minor, running an excellent jazz program at the UW School of Music, and bravely serving on a CS dissertation defense committee!

To Karu Sankaralingam and Mike Swift, for the interesting courses they’ve taught (and taught well), and for agreeing to listen to me defend this dissertation.
To Tyler Harter – the only student coauthor I’ve worked with in my time here, but as excellent a coauthor as one could hope for, with a remarkable knack for presenting complex topics in clear, comprehensible ways.

To Jun He, who has endured me as an officemate for longer than anyone else, providing numerous interesting discussions along the way. And to all the other members of the Arpaci-Dusseau group I’ve worked with and learned so much from over the last seven years: Ram Alagappan, Leo Arulraj, Vijay Chidambaram, Thahn Do, Aishwarya Ganesan, Joo Yung Hwang, Sudarsun Kannan, Samer al-Kiswany, Jing Liu, Lanyue Lu, Yuvraj Patel, Thanu Pillai, Kan Wu, Suli Yang, Yiying Zhang, Yupu Zhang, and Dennis Zhou.

To the friends I’ve made in the department here: Ben Bramble, Mark Coatsworth, Adam Everspaugh, Thomas Griebel, Rob Jellinek, Kevin Kowalski, Ben Miller, Evan Radkoff, Will Seale, Brent Stephens, Venkatanathan Varadarajan, Ara Vartanian. The diverse discussions, project collaborations, beers on the terrace, and other adventures many and varied have been a pleasure.

To the people I worked with at Fusion-io: Sriram Subramanian, Swaminathan Sundararaman, Nisha Talagala, and the members of the Clones team. I thoroughly enjoyed my time there, and am grateful for the honor of being granted, via custom-emblazoned sweatshirt, the status of “intern emeritus” when I left to return to Madison.

To the many people of SimpleMachines, where there’s never a dull moment, and where almost everything is good.

To Angela Thorpe for being so helpful with administrative questions and deftly coordinating the CS graduate program – especially critical for those among us who perhaps fall slightly toward the less-organized end of the spectrum.

To Eric Siereveld for skillfully directing the UW Latin Jazz Ensemble the two years I was fortunate enough to play in it, and Josh Agterberg, Andrew Baldwin, Rachel Heuer, and Will Porter for providing a wonderful ensemble to perform with for my minor recital.

To Michaela Vatcheva, for so many interesting times and conversations, and for getting me to (after sufficient prodding) finally join the sailing club – my only regret is that I took so long to heed this advice.

To Amy De Simone for befriending me when I had just moved to a new and unfamiliar city, and bringing me along on walks with her dog.

To Becky, for her patience and encouragement, especially in these last few weeks.

And to my sister, Samara, and parents, Alan and Cheryl, for their love and support.

Contents

Contents
List of Figures
Abstract

1 Introduction
  1.1 Trace Replay in the Multicore Era
  1.2 Advanced Virtualization for Flash Storage
  1.3 Cache-Compact Filesystems for NVM
  1.4 Overview

2 Accurate Trace Replay for Multithreaded Applications
  2.1 Introduction
  2.2 Trace Mining
    2.2.1 Trace Inputs
    2.2.2 Inference
  2.3 ROOT: Ordering Heuristics
    2.3.1 Trace Model
    2.3.2 Ordering Rules
  2.4 ARTC: System-Call Replay
    2.4.1 Goals
    2.4.2 ROOT with System-Call Traces
    2.4.3 Implementation
  2.5 Evaluation
    2.5.1 Semantic Correctness: Magritte
    2.5.2 Performance Accuracy
  2.6 Case Study: Magritte
    2.6.1 fsync Semantics
  2.7 Related Work
  2.8 Conclusion

3 Storage Virtualization for Solid-State Devices
  3.1 Introduction
  3.2 Background
  3.3 Structure
  3.4 Interfaces
    3.4.1 Range Operations
    3.4.2 Complementary Properties
  3.5 Implementation
    3.5.1 Log Structuring
    3.5.2 Metadata Persistence
    3.5.3 Space Management
  3.6 Garbage Collection
    3.6.1 Design Considerations
    3.6.2 Possible Approaches
    3.6.3 Design
    3.6.4 Scanner
    3.6.5 Cleaner
    3.6.6 Techniques and Optimizations
  3.7 Case Studies
    3.7.1 Snapshots
    3.7.2 Deduplication
    3.7.3 Single-Write Journaling
  3.8 GC Evaluation
    3.8.1 Garbage Collection in Action
    3.8.2 GC Capacity Scaling
  3.9 Conclusion

4 Cache-Conscious Filesystems for Low-Latency Storage
  4.1 Introduction
  4.2 Filesystem Cache Access Patterns
  4.3 DenseFS
    4.3.1 Data Cache Compaction
    4.3.2 Instruction Cache Compaction
    4.3.3 A Second Generation
  4.4 Evaluation
    4.4.1 Microbenchmark results
    4.4.2 DenseFS1 application results: grep
    4.4.3 DenseFS2 application results: SQLite
  4.5 Related Work
  4.6 Conclusion

5 Conclusions
  5.1 Increasing Core Counts and Trace Replay
  5.2 Flash and Storage Virtualization
  5.3 NVM and Filesystem Cache Behavior
  5.4 Future Work
  5.5 Final Thoughts

Bibliography

List of Figures

2.1 Techniques for I/O-space inference. Active tracing perturbs timing by artificially delaying specific events so as to observe which other events are affected; passive tracing allows all events to occur at their natural pace.

2.2 Example action series. A snippet from a simple system-call trace for two threads is shown in 2.2(a). Beneath each event, a comment lists the resource touched by each system call. 2.2(b) shows the action series corresponding to each resource that appears in the trace.

2.3 Ordering Rules. a1 < a2 means action a1 must be replayed before action a2. acts[create] and acts[delete] represent acts[first] and acts[last], respectively, when the first action in a series is a create or when the last action is a delete. When this is not the case, the constraint does not apply.

2.4 Examples of valid and invalid orderings. Each square represents an action. Different colors represent consecutive generations of the same name. Thick borders indicate creation and deletion events.

2.5 Replay modes. Circles represent reasonable ways to apply rules to resources; filled circles are modes currently supported by ARTC.
Recommended publications
  • Copy on Write Based File Systems Performance Analysis and Implementation
Copy On Write Based File Systems Performance Analysis And Implementation. Sakis Kasampalis. Kongens Lyngby 2010, IMM-MSC-2010-63. Technical University of Denmark, Department of Informatics, Building 321, DK-2800 Kongens Lyngby, Denmark. Phone +45 45253351, Fax +45 45882673. [email protected], www.imm.dtu.dk

Abstract: In this work I am focusing on Copy On Write based file systems. Copy On Write is used on modern file systems for providing (1) metadata and data consistency using transactional semantics, (2) cheap and instant backups using snapshots and clones.

This thesis is divided into two main parts. The first part focuses on the design and performance of Copy On Write based file systems. Recent efforts aiming at creating a Copy On Write based file system are ZFS, Btrfs, ext3cow, Hammer, and LLFS. My work focuses only on ZFS and Btrfs, since they support the most advanced features. The main goals of ZFS and Btrfs are to offer a scalable, fault tolerant, and easy to administrate file system. I evaluate the performance and scalability of ZFS and Btrfs. The evaluation includes studying their design and testing their performance and scalability against a set of recommended file system benchmarks.

Most computers are already based on multi-core and multiple processor architectures. Because of that, the need for using concurrent programming models has increased. Transactions can be very helpful for supporting concurrent programming models, which ensure that system updates are consistent. Unfortunately, the majority of operating systems and file systems either do not support transactions at all, or they simply do not expose them to the users.
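The copy-on-write idea the abstract describes can be illustrated with a small sketch (hypothetical Python, not ZFS or Btrfs code): a write never overwrites a block in place; it allocates a new block and flips a reference, so an earlier snapshot of the block map keeps pointing at the old data.

```python
# Minimal copy-on-write store sketch (illustrative only; not ZFS/Btrfs code).
# Blocks are never overwritten in place: each write allocates a new block and
# updates the live block map, so snapshots of old maps stay valid -- the
# "cheap and instant backups" mentioned in the abstract above.

class CowStore:
    def __init__(self):
        self.blocks = {}        # block_id -> immutable data
        self.next_id = 0
        self.live = {}          # logical block number -> block_id
        self.snapshots = {}     # snapshot name -> frozen copy of a block map

    def write(self, lbn, data):
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = data      # new physical block; old one untouched
        self.live[lbn] = bid         # flip the reference (the "commit")

    def snapshot(self, name):
        # A snapshot is just a copy of the (small) block map, not the data.
        self.snapshots[name] = dict(self.live)

    def read(self, lbn, snapshot=None):
        m = self.snapshots[snapshot] if snapshot else self.live
        return self.blocks[m[lbn]]

store = CowStore()
store.write(0, b"v1")
store.snapshot("backup")
store.write(0, b"v2")                # does not disturb the snapshot
assert store.read(0) == b"v2"
assert store.read(0, "backup") == b"v1"
```

This is also why crash recovery and snapshots come cheaply in such designs: the old map stays valid until its blocks are eventually garbage collected.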
  • HEPiX Fall 2015 at Brookhaven National Lab (PDF, 32 pages)
Helge Meinhard / CERN, V2.0, 30 October 2015.

HEPiX Fall 2015 at Brookhaven National Lab

After 2004, the lab, located on Long Island in the State of New York, U.S.A., was host to a HEPiX workshop again. Access to the site was considerably easier for the registered participants than 11 years ago. The meeting took place in a very nice and comfortable seminar room well adapted to the size and style of meeting such as HEPiX. It was equipped with advanced (sometimes too advanced for the session chairs to master!) AV equipment and power sockets at each seat. Wireless networking worked flawlessly and with good bandwidth. The welcome reception on Monday at Wading River at the Long Island sound and the workshop dinner on Wednesday at the ocean coast in Patchogue showed more of the beauty of the rather natural region around the lab. For those interested, the hosts offered tours of the BNL RACF data centre as well as of the STAR and PHENIX experiments at RHIC. The meeting ran very smoothly thanks to an efficient and experienced team of local organisers headed by Tony Wong, who as North-American HEPiX co-chair also co-ordinated the workshop programme.

Monday 12 October 2015 – Welcome (Michael Ernst / BNL)

On behalf of the lab, Michael welcomed the participants, expressing his gratitude to the audience to have accepted BNL's invitation. He emphasised the importance of computing for high-energy and nuclear physics. He then introduced the lab focusing on physics, chemistry, biology, material science etc. The total head count of BNL-paid people is close to 3'000.
  • Study of File System Evolution
Study of File System Evolution. Swaminathan Sundararaman, Sriram Subramanian. Department of Computer Science, University of Wisconsin. {swami, srirams} @cs.wisc.edu

Abstract: File systems have traditionally been a major area of research and development. This is evident from the existence of over 50 file systems of varying popularity in the current version of the Linux kernel. They represent a complex subsystem of the kernel, with each file system employing different strategies for tackling various issues. Although there are many file systems in Linux, there has been no prior work (to the best of our knowledge) on understanding how file systems evolve. We believe that such information would be useful to the file system community, allowing developers to learn from previous experiences. This paper looks at six file systems (Ext2, Ext3, Ext4, JFS, ReiserFS, and XFS) from a historical perspective (between kernel versions 1.0 to 2.6) to get an insight on …

… file systems are typically developed and maintained by several programmers across the globe. At any point in time, for a file system, there are three to six active developers, ten to fifteen patch contributors, but a single maintainer. These people communicate through individual file system mailing lists [14, 16, 18], submitting proposals for new features and enhancements, reporting bugs, and submitting and reviewing patches for known bugs. The problem with the open source development approach is that all communication is buried in the mailing list archives and isn't easily accessible to others. As a result, when new file systems are developed they do not leverage past experience and could end up re-inventing the wheel. To make things worse, people could typically end up making the same mistakes as in other file systems.
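For readers who want to gather this kind of evolution data on today's kernels, a rough sketch (assuming a Linux kernel git checkout in the current directory; the study above predates git and worked from historical releases and mailing lists) could count commits touching each in-tree filesystem:

```python
# Rough sketch: count commits touching each filesystem directory, assuming
# the current directory is a Linux kernel git checkout.  This is a modern
# analogue of the historical survey described above, not the paper's method.
import subprocess

FILESYSTEMS = ["ext2", "ext3", "ext4", "jfs", "reiserfs", "xfs"]

def commit_count(path):
    # `git log -- <path>` also works for directories removed in later kernels.
    out = subprocess.run(
        ["git", "log", "--oneline", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(out.splitlines())

for fs in FILESYSTEMS:
    print(f"fs/{fs}: {commit_count('fs/' + fs)} commits")
```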
  • Linux: Kernel Release Number, Part II
Linux: Kernel Release Number, Part II. Posted by jeremy on Friday, March 4, 2005 - 07:05.

In the continued discussion on release numbering for the Linux kernel [story], Linux creator Linus Torvalds decided against trying to add meaning to the odd/even least significant number. Instead, the new plan is to go from the current 2.6.x numbering to a finer-grained 2.6.x.y. Linus will continue to maintain only the 2.6.x releases, and the -rc releases in between. Others will add trivial patches to create the 2.6.x.y releases. Linus cautions that the task of maintaining a 2.6.x.y tree is not going to be enjoyable: "I'll tell you what the problem is: I don't think you'll find anybody to do the parallell 'only trivial patches' tree. They'll go crazy in a couple of weeks. Why? Because it's a _damn_ hard problem. Where do you draw the line? What's an acceptable patch? And if you get it wrong, people will complain _very_ loudly, since by now you've 'promised' them a kernel that is better than the mainline. In other words: there's almost zero glory, there are no interesting problems, and there will absolutely be people who claim that you're a dick-head and worse, probably on a weekly basis."

He went on to add, "that said, I think in theory it's a great idea. It might even be technically feasible if there was some hard technical criteria for each patch that gets accepted, so that you don't have the burn-out problem." His suggested criteria included limiting the patch to being 100 lines or less, and requiring that it fix an oops, a hang, or an exploitable security hole.
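The scheme under discussion is easy to picture with a small sketch (illustrative only): parse the finer-grained 2.6.x.y numbers so they sort naturally, and encode the article's suggested acceptance criteria for the extra .y releases.

```python
# Sketch of the 2.6.x.y scheme described above: the fourth component belongs
# to the "trivial fixes only" tree layered on a 2.6.x release Linus maintains.
def parse(version):
    # "2.6.11" -> (2, 6, 11, 0); "2.6.11.3" -> (2, 6, 11, 3)
    parts = [int(p) for p in version.split(".")]
    return tuple(parts + [0] * (4 - len(parts)))

assert parse("2.6.11") < parse("2.6.11.3") < parse("2.6.12")

# The article's suggested criteria for a 2.6.x.y patch, as a checklist.
def acceptable_stable_patch(lines_changed, fixes_oops, fixes_hang, fixes_security_hole):
    return lines_changed <= 100 and (fixes_oops or fixes_hang or fixes_security_hole)
```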
  • Wang Paper (Prepublication)
Riverbed: Enforcing User-defined Privacy Constraints in Distributed Web Services. Frank Wang (MIT CSAIL); Ronny Ko, James Mickens (Harvard University).

Abstract: Riverbed is a new framework for building privacy-respecting web services. Using a simple policy language, users define restrictions on how a remote service can process and store sensitive data. A transparent Riverbed proxy sits between a user's front-end client (e.g., a web browser) and the back-end server code. The back-end code remotely attests to the proxy, demonstrating that the code respects user policies; in particular, the server code attests that it executes within a Riverbed-compatible managed runtime that uses IFC to enforce user policies. If attestation succeeds, the proxy releases the user's data, tagging it with the user-defined policies. On the server-side, the Riverbed runtime places all data with com…

1.1 A Loss of User Control: Unfortunately, there is a disadvantage to migrating application code and user data from a user's local machine to a remote datacenter server: the user loses control over where her data is stored, how it is computed upon, and how the data (and its derivatives) are shared with other services. Users are increasingly aware of the risks associated with unauthorized data leakage [11, 62, 82], and some governments have begun to mandate that online services provide users with more control over how their data is processed. For example, in 2016, the EU passed the General Data Protection Regulation [28]. Articles 6, 7, and 8 of the GDPR state that users must give consent for their data to be accessed.
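A rough sketch of the flow the abstract describes, in hypothetical Python (the excerpt does not show Riverbed's actual policy language, proxy API, or attestation protocol, so every identifier below is made up):

```python
# Purely illustrative sketch of the Riverbed flow described above; the real
# policy language, attestation protocol, and runtime API are not shown in the
# excerpt, so every name here is hypothetical.
from dataclasses import dataclass

@dataclass
class TaggedData:
    payload: bytes
    policies: frozenset      # user-defined restrictions travel with the data

def proxy_forward(user_data, user_policies, attestation_ok):
    # The proxy only releases user data if the back end attested that it runs
    # a Riverbed-compatible runtime enforcing these policies via IFC.
    if not attestation_ok:
        raise PermissionError("server failed remote attestation")
    return TaggedData(user_data, frozenset(user_policies))

def server_share_with_third_party(data):
    # Inside the managed runtime, flows that would violate a tag are refused.
    if "no-third-party-sharing" in data.policies:
        raise PermissionError("policy forbids sharing with other services")
    # ... otherwise the flow would be allowed ...

msg = proxy_forward(b"location history", ["no-third-party-sharing"], attestation_ok=True)
try:
    server_share_with_third_party(msg)
except PermissionError as e:
    print("blocked:", e)
```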
  • Open Source Software Notice
OPEN SOURCE SOFTWARE NOTICE. DCS Touch Display Software V2.00.XXX. Schüco International KG, Karolinenstraße 1-15, 33609 Bielefeld.

This document contains information about open source software for this product. The rights granted under open source software licenses are granted by the respective right holders. In the event of conflicts between SCHÜCO’S license conditions and the applicable open source licenses, the open source license conditions take precedence over SCHÜCO’S license conditions with regard to the respective open source software. You are allowed to modify SCHÜCO’S proprietary programs and to conduct reverse engineering for the purpose of debugging such modifications, to the extent such programs are linked to libraries licensed under the GNU Lesser General Public License. You are not allowed to distribute information resulting from such reverse engineering or to distribute the modified proprietary programs. The rightholders of the open source software require to refer to the following disclaimer, which shall apply with regard to those rightholders:

Warranty Disclaimer: THE OPEN SOURCE SOFTWARE IN THIS PRODUCT IS DISTRIBUTED ON AN "AS IS" BASIS AND IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY OF ANY KIND, WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE APPLICABLE LICENSES FOR MORE DETAILS.

Copyright Notices and License Texts (please see the source code for all details). Software: iptables. Copyright notice: Copyright (C) 1989, 1991 Free Software Foundation, Inc. Copyright Google, Inc.
  • ECE 598 – Advanced Operating Systems Lecture 19
ECE 598 – Advanced Operating Systems, Lecture 19. Vince Weaver, http://web.eece.maine.edu/~vweaver, [email protected], 7 April 2016.

Announcements
• Homework #7 was due
• Homework #8 will be posted

Why use FAT over ext2?
• FAT simpler, easy to code
• FAT supported on all major OSes
• ext2 faster, more robust filenames and permissions

btrfs
• B-tree fs (similar to a binary tree, but with pages full of leaves)
• Overwrite filesystem (overwrite on modify) vs CoW
• Copy on write: when writing to a file, old data is not overwritten. Since old data is not overwritten, crash recovery is better; eventually the old data is garbage collected
• Data in extents
• Copy-on-write
• Forest of trees: sub-volumes, extent-allocation, checksum tree, chunk device, reloc
• On-line defragmentation
• On-line volume growth
• Built-in RAID
• Transparent compression
• Snapshots
• Checksums on data and meta-data
• De-duplication
• Cloning – can make an exact snapshot of a file; copy-on-write is different than a link: different inodes but the same blocks

Embedded
• Designed to be small, simple, read-only?
• romfs – 32 byte header (magic, size, checksum, name); repeating files (pointer to next [0 if none]), info, size, checksum, file name, file data (a header-packing sketch follows after this list)
• cramfs

ZFS
• Advanced OS from Sun/Oracle. Similar in idea to btrfs; still indirect, not extent based?

ReFS
• Resilient FS, Microsoft's answer to btrfs and zfs

Networked File Systems
• Allow a centralized file server to export a filesystem to multiple clients.
• Provide file level access, not just raw blocks (NBD)
• Clustered filesystems also exist, where multiple servers work in conjunction.
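As a concrete illustration of the romfs header fields listed in the notes above (magic, size, checksum, name), here is a sketch that packs a 32-byte header. The field widths follow the common romfs layout (8-byte "-rom1fs-" magic, 32-bit size and checksum, NUL-padded volume name), but treat them as an approximation rather than a byte-exact specification.

```python
# Sketch of a romfs-style superblock as described in the notes above:
# magic, full size, checksum, volume name.  Big-endian, name padded to 16
# bytes here so the whole header is the 32 bytes the notes mention.
import struct

def pack_romfs_header(full_size, checksum, volume_name):
    name = volume_name.encode()[:15]
    name += b"\0" * (16 - len(name))           # NUL-padded name field
    return struct.pack(">8sII16s", b"-rom1fs-", full_size, checksum, name)

hdr = pack_romfs_header(full_size=4096, checksum=0, volume_name="demo")
assert len(hdr) == 32                           # the 32-byte header from the notes
```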
  • Drilling Network Stacks with Packetdrill
Drilling Network Stacks with packetdrill. Neal Cardwell and Barath Raghavan.

Neal Cardwell received an M.S. in Computer Science from the University of Washington, with research focused on TCP and Web performance. He joined Google in 2002. Since then he has worked on networking software for google.com, the Googlebot web crawler, the network stack in the Linux kernel, and TCP performance and testing. [email protected]

Barath Raghavan received a Ph.D. in Computer Science from UC San Diego and a B.S. from UC Berkeley. He joined Google in 2012 and was previously a…

Testing and troubleshooting network protocols and stacks can be painstaking. To ease this process, our team built packetdrill, a tool that lets you write precise scripts to test entire network stacks, from the system call layer down to the NIC hardware. packetdrill scripts use a familiar syntax and run in seconds, making them easy to use during development, debugging, and regression testing, and for learning and investigation.

Have you ever had the experience of staring at a long network trace, trying to figure out what on earth went wrong? When a network protocol is not working right, how might you find the problem and fix it? Although tools like tcpdump allow us to peek under the hood, and tools like netperf help measure networks end-to-end, reproducing behavior is still hard, and knowing when an issue has been fixed is even harder.

These are the exact problems that our team used to encounter on a regular basis during kernel network stack development. Here we describe packetdrill, which we built to enable scriptable network stack testing.
  • Open Source Licensing Information for Cisco IP Phone 8800 Series
Open Source Used In Cisco IP Phone 8800 Series 12.1(1). Cisco Systems, Inc., www.cisco.com. Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices. Text Part Number: 78EE117C99-163803748.

This document contains licenses and notices for open source software used in this product. With respect to the free/open source software listed in this document, if you have any questions or wish to receive a copy of any source code to which you may be entitled under the applicable free/open source license(s) (such as the GNU Lesser/General Public License), please contact us at [email protected]. In your requests please include the following reference number 78EE117C99-163803748.

Contents (each package is followed by an "Available under license" subsection):
1.1 bluez 4.101 :MxC-1.1C R4.0
1.2 BOOST C++ Library 1.63.0
1.3 busybox 1.21.0
1.4 Busybox 1.23.1
1.5 cjose 0.4.1
1.6 cppformat 2.0.0
1.7 curl 7.26.0
1.8 dbus 1.4.1 :MxC-1.1C R4.0
1.9 DirectFB library and utilities 1.4.5
1.10 dnsmasq 2.46
1.11 flite 2.0.0
1.12 glibc 2.13
1.13 hostapd 2.0 :MxC-1.1C R4.0
  • De-Anonymizing Live Cds Through Physical Memory Analysis
De-Anonymizing Live CDs through Physical Memory Analysis. Andrew Case, [email protected], Digital Forensics Solutions.

Abstract: Traditional digital forensics encompasses the examination of data from an offline or “dead” source such as a disk image. Since the filesystem is intact on these images, a number of forensics techniques are available for analysis such as file and metadata examination, timelining, deleted file recovery, indexing, and searching. Live CDs present a serious problem for this investigative model, however, since the OS and applications execute in a RAM-only environment and do not save data on non-volatile storage devices such as the local disk. In order to solve this problem, we present a number of techniques that support complete recovery of a live CD’s in-memory filesystem and partial recovery of its deleted contents. We also present memory analysis of the popular Tor application, since it is used by a number of live CDs in an attempt to keep network communications encrypted and anonymous.

1 Introduction: Traditional digital forensics encompasses the examination of data from an offline or “dead” source such as a disk image. Under normal circumstances, evidence is obtained by first creating an exact, bit-for-bit copy of the target disk, followed by hashing of both the target disk and the new copy. If these hashes match then it is known that an exact copy has been made, and the hash is recorded to later prove that evidence was not modified during the investigation. Besides satisfying legal requirements, obtaining a bit-for-bit copy of data provides investigators with a wealth of information to examine and makes available a number of forensics techniques.
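The copy-and-hash verification step described above is simple to express; a minimal sketch, assuming the target image and its working copy are accessible as ordinary files (the paths are hypothetical):

```python
# Minimal sketch of the verification step described above: hash the target
# image and its copy and confirm the digests match before analysis begins.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

original = sha256_of("/evidence/target_disk.img")     # hypothetical paths
duplicate = sha256_of("/evidence/working_copy.img")
if original == duplicate:
    print("exact copy confirmed; record digest:", original)
else:
    print("copy does NOT match the original -- do not proceed")
```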
  • Master Boot Record Vs Guid Mac
Master Boot Record vs GUID (Mac). This article compares the Master Boot Record (MBR) and GUID Partition Table (GPT) schemes on the Mac. UEFI firmware supports both MBR and GPT, but GPT is generally the more logical choice based on compatibility with modern systems and large drives. It covers the differences between MBR and GPT to consider while partitioning a drive: MBR keeps a single partition table (plus a main VBR and a backup VBR per volume), while GPT stores redundant copies of its partition data and can attempt to recover damaged data from another location on the disk. On the Mac, Disk Utility's Erase panel handles formatting a drive with a GUID Partition Map and creating a new Container; for MBR-to-GPT conversion and for repairing an MBR-formatted external USB hard disk drive, at least three Linux emergency systems ship with GPT fdisk.
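To make the MBR/GPT distinction concrete, here is a small sketch (assuming 512-byte sectors and a readable raw disk image) that checks for the MBR boot signature and the GPT header signature:

```python
# Sketch: distinguish MBR from GPT on a raw disk image, assuming 512-byte
# sectors.  An MBR ends with the 0x55 0xAA boot signature at offset 510; a
# GPT disk additionally carries an "EFI PART" header at the start of LBA 1
# (and usually a protective MBR partition of type 0xEE).
def identify_partition_scheme(image_path):
    with open(image_path, "rb") as f:
        sector0 = f.read(512)
        sector1 = f.read(512)
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        return "no valid MBR boot signature"
    if sector1[:8] == b"EFI PART":
        return "GPT (with protective or hybrid MBR)"
    return "classic MBR partitioning"

print(identify_partition_scheme("disk.img"))    # hypothetical image path
```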
  • Adventures with Illumos
Adventures with illumos. Peter Tribble: Theoretical Astrophysicist, Sysadmin (DBA), Technology Tinkerer.

Introduction
● Long-time systems administrator
● Many years pointing out bugs in Solaris
● Invited onto beta programs
● Then the OpenSolaris project
● Voted onto OpenSolaris Governing Board
● Along came Oracle...
● illumos emerged from the ashes

Key strengths
● ZFS – reliable and easy to manage
● DTrace – extreme observability
● Zones – lightweight virtualization
● Standards – pretty strict
● Compatibility – decades of heritage
● “Solarishness”

Distributions
● Solaris 11 (OpenSolaris based)
● OpenIndiana – OpenSolaris
● OmniOS – server focus
● SmartOS – Joyent's cloud
● Delphix/Nexenta/+ – storage focus
● Tribblix – one of the small fry
● Quite a few others

Solaris 11
● IPS packaging
● SPARC and x86 – no 32-bit x86, no older SPARC (e.g. Vxxx or SunBlades)
● Unique/key features – Kernel Zones, Encrypted ZFS, VM2

OpenIndiana
● Direct continuation of OpenSolaris – warts and all
● IPS packaging
● x86 only (32 and 64 bit)
● General purpose
● JDS desktop
● Generally rather stale

OmniOS
● x86 only
● IPS packaging
● Server focus
● Supported commercial offering
● Stable components can be out of date

XStreamOS
● Modern variant of OpenIndiana
● x86 only
● IPS packaging
● Modern lightweight desktop options
● Extra applications – LibreOffice

SmartOS
● Hypervisor, not general purpose
● 64-bit x86 only
● Basis of Joyent cloud
● No inbuilt packaging, pkgsrc for applications
● Added extra features – KVM guests, lots of zone features