LSF ’07: 2007 Linux Storage & Filesystem Workshop

PRO: A Popularity-Based Multi-Threaded Reconstruction Optimization for RAID-Structured Storage Systems

Lei Tian and Dan Feng, Huazhong University of Science and Technology; Hong Jiang, University of Nebraska–Lincoln; Ke Zhou, Lingfang Zeng, Jianxi Chen, and Zhikun Wang, Huazhong University of Science and Technology and Wuhan National Laboratory for Optoelectronics; Zhenlei Song, Huazhong University of Science and Technology

Hong Jiang began his talk by discussing the importance of data recovery. Disk failures have become more common in RAID-structured storage systems. The improvement in disk capacity has far outpaced the improvement in disk bandwidth, lengthening the overall RAID recovery time. Also, disk drive reliability has improved slowly, resulting in a very high overall failure rate in a large-scale RAID storage system. Disk-oriented reconstruction (DOR) is one of the existing I/O-parallelism-based recovery mechanisms. DOR follows a sequential order of stripes in reconstruction, regardless of user access patterns. Workload access patterns need to be considered because 80% of accesses are directed to 20% of the data, according to Pareto’s Principle, and 10% of the files accessed on a Web server typically account for 90% of the server requests. The authors present a popularity-based multi-threaded reconstruction optimization (PRO) that takes advantage of data popularity to improve reconstruction performance. PRO divides the data units on the spare disks into hot zones. Each hot zone has a reconstruction thread. The priority of each thread is dynamically adjusted according to the current popularity of its hot zone. PRO keeps track of user accesses and adjusts the popularity of each hot zone accordingly.

PRO selects the reconstruction thread with the highest priority and allocates a time slice to it. When a thread’s time slice runs out, PRO assigns a time slice to the next-highest-priority thread. The process repeats until all of the data units have been rebuilt. Priority-based scheduling is used so that the regions being reconstructed are always the hottest regions. Time-slicing is used to exploit the I/O bandwidth of hard disks and access locality.
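The scheduling idea is easy to sketch in code. The following is a minimal, self-contained toy model of popularity-based, time-sliced reconstruction as described above; it is not the authors’ RAIDframe implementation, and the zone count, slice length, and popularity values are invented for the example.

    /*
     * pro_sketch.c -- a toy model of PRO-style scheduling, NOT the authors'
     * RAIDframe code.  Each hot zone has a popularity counter and a rebuild
     * cursor; the scheduler repeatedly picks the most popular unfinished zone,
     * rebuilds stripes from it for one time slice, and repeats until every
     * zone is done.  The linear scan below is the O(n) selection; keeping the
     * zones in a max-heap keyed on popularity would make selection O(log n).
     */
    #include <stdio.h>

    #define ZONES 4
    #define SLICE 8            /* stripes rebuilt per time slice (arbitrary) */

    struct hot_zone {
        unsigned popularity;   /* bumped as user I/O hits the zone */
        unsigned rebuilt;      /* stripes reconstructed so far     */
        unsigned total;        /* stripes in the zone              */
    };

    /* Return the unfinished zone with the highest popularity, or -1 if none. */
    static int pick_hottest(const struct hot_zone z[])
    {
        int best = -1;
        for (int i = 0; i < ZONES; i++)
            if (z[i].rebuilt < z[i].total &&
                (best < 0 || z[i].popularity > z[best].popularity))
                best = i;
        return best;
    }

    int main(void)
    {
        /* Invented popularity values and zone sizes, for illustration only. */
        struct hot_zone zone[ZONES] = {
            { 90, 0, 20 }, { 10, 0, 20 }, { 55, 0, 20 }, { 25, 0, 20 },
        };

        int z;
        while ((z = pick_hottest(zone)) >= 0) {
            unsigned n = zone[z].total - zone[z].rebuilt;
            if (n > SLICE)
                n = SLICE;                 /* one time slice of rebuild work */
            zone[z].rebuilt += n;
            printf("rebuilt %u stripes from zone %d (popularity %u)\n",
                   n, z, zone[z].popularity);
            /* A real implementation would update popularity here from the
             * user I/O observed during the slice. */
        }
        puts("spare disk fully reconstructed");
        return 0;
    }

With these toy numbers the hottest zone (popularity 90) is rebuilt first, which is exactly the property PRO relies on to keep reconstruction aligned with the hottest regions; replacing the linear scan with a priority queue keyed on popularity is what reduces the selection cost from O(n) to O(log n), as noted below.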
PRO was compared to DOR in the evaluation because DOR is arguably the most effective of the existing reconstruction algorithms. The evaluation of PRO examined reconstruction performance by measuring user response time, reconstruction time, and algorithm complexity. PRO was integrated into the original DOR approach implemented in the RAIDframe software for validation and evaluation. The evaluation was performed by replaying three different Web traces consisting of read-only Web search activity. PRO consistently outperformed DOR in reconstruction time and user response time, by up to 44.7% and 23.9%, respectively. PRO’s effectiveness relies on the existence of popularity and locality in the workload, as well as on the intensity of the workload. PRO also uses extra memory for each thread descriptor. The computation overhead of PRO is O(n), although if a priority queue is used in the PRO algorithm the overhead can be reduced to O(log n). The entire PRO implementation added only 686 lines of code to the RAIDframe software.

Work on PRO is ongoing. Future work includes optimizing the time slice, scheduling strategies, and hot zone length. Currently, PRO is being ported to the Linux software RAID. Finally, the authors plan to further investigate using access patterns to predict user accesses and using filesystem semantic knowledge to make reconstruction more accurate.

The first questioner asked about the average rate of recovery for PRO. Hong answered that the average reconstruction time is several hundred seconds in the experimental setup. The second questioner asked how well PRO reconstruction compares to DOR reconstruction under no workload. Hong commented that when there is no workload, the reconstruction performance of PRO and DOR is the same. In response to a question about write overhead, Hong stated that his research team is actively looking into it. The last question involved the sensitivity of the results to the number of threads and to the time slice. Hong explained that the impact of the number of threads and of the time slice is negligible in the current experimental configuration; however, a more elaborate sensitivity study is underway in the project.

LSF ’07: 2007 Linux Storage & Filesystem Workshop

San Jose, CA
February 12–13, 2007

Summarized by Brandon Philips ([email protected])

Fifty members of the Linux storage and filesystem communities met in San Jose, California, to give status updates, present new ideas, and discuss issues during the two-day Linux Storage & Filesystem Workshop. The workshop was chaired by Ric Wheeler and sponsored by EMC, NetApp, Panasas, Seagate, and Oracle.

JOINT SESSION

Ric Wheeler opened the workshop by explaining the basic contract that storage systems make with the user: the complete set of data will be stored, bytes will be correct and in order, and raw capacity will be utilized as completely as possible. The contract is so simple that it seems there should be no open issues, right?

Today these basic demands are met most of the time, but Ric posed a number of questions. How do we validate that no files have been lost? How do we verify that bytes are correctly stored? How can we utilize disks efficiently for small files? How do errors get communicated between the layers?

Over the course of the next two days some of these questions were discussed, others were raised, and a few ideas were proposed. Continue reading for the details.

Ext4 Status Update

Mingming Cao gave a status update on ext4, the recent fork of the ext3 file system. The primary goal of the fork was the move to 48-bit block numbers; this change allows the file system to support up to 1024 petabytes of storage. The feature was originally designed to be merged into ext3 but was seen as too disruptive [1]. The patch is also built on top of the patch set that replaces the indirect block map with extents [2] in ext4. Support for more than 32K directory entries will also be merged into ext4.
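As an aside, the 1024-petabyte figure follows from simple arithmetic once a block size is assumed; the summary does not state one, but with the common 4 KiB block size, 2^48 block numbers times 2^12 bytes per block is 2^60 bytes, i.e., 1024 PiB. A few lines of C make the same point:

    /* Illustrative only: the capacity implied by 48-bit block numbers,
     * assuming a 4 KiB block size (an assumption; the summary does not
     * state the block size). */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long blocks     = 1ULL << 48;          /* 48-bit block numbers */
        unsigned long long block_size = 4096;                /* 4 KiB                */
        unsigned long long bytes      = blocks * block_size; /* 2^60 bytes           */

        printf("max capacity: %llu PiB\n", bytes >> 50);     /* prints 1024          */
        return 0;
    }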
On top of these changes, a number of ext3 options will be enabled by default in ext4. These include directory indexing, which improves file access for large directories; the resize inode, which reserves space in the block group descriptor for online growing; and 256-byte inodes. Users of ext3 can get these features today with mkfs.ext3 -I 256 -O resize_inode,dir_index /dev/device.

A number of RFCs are also being considered for inclusion in ext4. These include a patch that adds nanosecond timestamps [3] and the creation of persistent file allocations [4], which will be similar to posix_fallocate but won’t waste time writing zeros to the disk.

Currently, ext4 stores a limited number of extended attributes in-inode and has space for one additional block of extended-attribute data, but this may not be enough to satisfy xattr-hungry applications. For example, Samba needs additional space to support Vista’s heavy use of ACLs, and eCryptFS can store arbitrarily large keys in extended attributes. This led everyone to the conclusion that someone needs to collect data on how xattrs are being used, to help developers decide how best to implement xattrs.

If Val Henson’s fsck estimates for 2013 come true, having a strategy to pass the time during a fsck will become very important. With her system booted, Val presented an estimate of 2013 fsck times. She first measured a fsck of her 37-GB home directory, with 21 GB in use, which took 7.5 minutes and read 1.3 GB of filesystem data. Next, she used projections of disk technology from Seagate to estimate the time to fsck a 2013 home directory, which will be 16 times larger. Although 2013 disks will have a fivefold bandwidth increase, seek times will improve by only 20%, to 10 ms, leading to a fsck time of 80 minutes! The primary reason for long fscks is seek latency: fsck spends most of its time seeking over the disk, discovering and fetching dynamic filesystem data such as directory entries, indirect blocks, and extents.

Reducing seeks and avoiding the seek-latency penalty are therefore key to reducing fsck times. Val suggested one solution: keep a bitmap on disk that tracks the blocks containing filesystem metadata, which would allow all of the data fsck needs to be read in a single arm sweep. In the best case this optimization would make one sequential sweep over the disk, and on the 2013 disk reading all the filesystem metadata would take only around 134 seconds, a big improvement. A full explanation of the findings and possible solutions can be found in the paper Repair-Driven File System Design [5]. Val also announced that she is working full time on a file system called chunkfs [6] that will make speed and ease of repair a primary design goal.

Zach Brown presented a blktrace of e2fsck. The basic outcome of the trace is that the disk can stream data at 26 Mbps while fsck achieves only 12 Mbps. This situation could be improved to some degree, without on-disk layout changes, if the developers had a vectorized I/O call. Zach explained that in many cases you know the block locations …
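As a rough sanity check on Val’s single-sweep estimate, the summary gives the 1.3 GB of metadata read in 2007, the 16x growth, and the fivefold bandwidth increase, but not the 2007 streaming rate; the calculation below assumes roughly 30 MB/s for it, so the result is only ballpark.

    /* Back-of-the-envelope check of the single-sweep estimate.  The 1.3 GB,
     * the 16x growth, and the fivefold bandwidth increase come from the
     * summary; the ~30 MB/s 2007 streaming rate is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        double metadata_2007_gb = 1.3;                 /* read by the 2007 fsck        */
        double growth           = 16.0;                /* 2013 home dir is 16x larger  */
        double bw_2007_mb_s     = 30.0;                /* assumed 2007 streaming rate  */
        double bw_2013_mb_s     = 5.0 * bw_2007_mb_s;  /* fivefold bandwidth increase  */

        double metadata_2013_mb = metadata_2007_gb * growth * 1024.0;
        double sweep_seconds    = metadata_2013_mb / bw_2013_mb_s;

        printf("single-sweep metadata read: about %.0f seconds\n", sweep_seconds);
        return 0;   /* ~142 s with these inputs, near the ~134 s quoted above */
    }

Different assumed streaming rates move the result, but the conclusion is the same: once seeks are eliminated, reading even 16 times more metadata takes a couple of minutes rather than more than an hour.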