Paulo Orlando Reis Afonso Lopes: A Shared-Disk Parallel Cluster File System

Total Pages: 16

File Type: PDF, Size: 1020 KB


PAULO ORLANDO REIS AFONSO LOPES
A SHARED-DISK PARALLEL CLUSTER FILE SYSTEM
("Um Sistema de Ficheiros Paralelos para Clusters de Disco Partilhado")
Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. Lisboa, 2009.

Acknowledgements

These few lines are the corollary of a good many years of work, sprinkled here and there with joys (and not only joys) and unforgettable moments; they are the last to be written, but no less important for that: indeed, so much so that they form the "opening" of this dissertation.

First of all, my gratitude goes to two Professors of DI-FCT/UNL who invited me to this institution and are responsible for the complete change that took place in my life, Profs. José Cardoso e Cunha and Pedro Medeiros; to the latter, in particular, redoubled thanks for having accepted the task of supervising me throughout these years. I also want to thank the colleagues who accepted asymmetric distributions of their teaching loads, allowing my teaching duties to be concentrated so that, in the remaining periods, I could focus almost exclusively on this work.

Two companies contributed decisively, through their donations, to this work, which began on an infrastructure donated by Lusitania, Companhia de Seguros, S.A. and continued on another obtained through a successful application to the IBM Shared University Research (SUR) programme. My thanks to both, and in particular to Teresa Moradias, António Jorge Matos and Luís Esteves of Lusitania CS, and to Filipa Valente and Luís Diniz dos Santos of IBM Portugal.

Several friends and colleagues work in organisations and companies that use, or are potential users of, the paradigms and/or technologies addressed here, as well as in others that make these their core business, and with them there was a fruitful exchange of ideas and experiences; among them I would like to single out, in particular, Carlos Corredoura of EMC Portugal.

And, finally, to Lili, Pedro and Ana Sofia...

Resumo

The coordinated use of multiple computing nodes (clusters) as a platform, whether to solve computationally demanding problems in high performance computing (HPC) environments or to offer reliable services with adequate response times in Information Systems (IS) environments, is today an undisputed solution in cost/benefit terms. HPC and IS environments are fairly dissimilar, particularly with regard to file systems and storage architectures: "HPC environments" favour high performance file systems at the expense of other features (they are generally not POSIX compliant) and use internal or private disks; "IS environments" prefer high availability solutions backed by external storage and, when necessary, shared-disk file systems (CFSs), provided these are POSIX compliant (even at the cost of performance). The parallel Cluster File System (pCFS) is our proposal to change this state of affairs by taking the best of each world: the reliability of CFSs and the excellent performance of parallel file systems.
We do not aim for absolute maxima, but rather for full compliance with the POSIX standard, versatility, and levels of reliability and performance good enough for general use: traditional and HPC applications, support for DBMS engines that store their data in files, and video streaming. The key ideas behind pCFS are:
• Cooperative caching, a technique used in file systems for distributed disks which, as far as we know, has never been used in SAN-based CFSs or in parallel file systems. As a result, pCFS can use all the available infrastructures (LAN and SAN) to access data.
• Fine-grained locking, which allows disjoint regions (down to the byte level) to be defined within a file so that processes, even when running on distinct nodes, can read and write to it in parallel at the speed of the SAN infrastructure (provided no major changes to the metadata structure occur).
We built a prototype on top of GFS (a Red Hat CFS), slightly modifying the GFS module and adding two supplementary kernel modules plus a third, user-level one. In the prototype, fine-grained locking is fully implemented and the global cache is kept coherent through transfers of page fragments over the LAN. Tests with processes running on different nodes and writing to the same file show that pCFS performs on a par with the Parallel Virtual File System (PVFS) and twice as fast as NFS, while consuming far less CPU than either (about 10 times less); and that, compared with GFS, it is 2 to 600 times faster (for 4 MB and 4 KB accesses, respectively) with identical CPU consumption.

Abstract

Today, clusters are the de facto cost-effective platform both for high performance computing (HPC) and for IT environments. HPC and IT are quite different environments, and their differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, but which are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems (either general purpose or shared-disk cluster file systems, CFSs). These specialised file systems do perform very well in their target environments provided that applications do not require certain lateral features, e.g., file locking on parallel file systems, or high performance writes over cluster-wide shared files on CFSs. In brief, none of the above approaches solves the problem of providing high levels of reliability and performance to both worlds. Our pCFS proposal makes a contribution to change this situation: the rationale is to take advantage of the best of both, the reliability of cluster file systems and the high performance of parallel file systems. We do not claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough for broad usage: traditional as well as HPC applications, support of clustered DBMS engines that may run over regular files, and video streaming.
pCFS' main ideas include:
• Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, was never used either in SAN-based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.
• Fine-grain locking, whereby processes running across distinct nodes may define non-overlapping byte-range regions in a file (instead of locking the whole file) and access them in parallel, reading and writing over those regions at the infrastructure's full speed (provided that no major metadata changes are required).
A prototype was built on top of GFS (a Red Hat shared-disk CFS): GFS' kernel code was slightly modified, and two kernel modules and a user-level daemon were added. In the prototype, fine-grain locking is fully implemented and a cluster-wide coherent cache is maintained through data (page fragment) movement over the LAN. Our benchmarks for non-overlapping writers over a single file shared among processes running on different nodes show that pCFS' bandwidth is 2 times greater than NFS' while being comparable to that of the Parallel Virtual File System (PVFS), with both NFS and PVFS requiring about 10 times more CPU than pCFS. pCFS' bandwidth also surpasses GFS' (600 times for small record sizes, e.g., 4 KB, decreasing down to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.

Table of Contents

Motivation and Background
1 Introduction
1.1 The evolution of high performance computing architectures
1.2 Bottlenecks: when reality crushes in
1.3 The I/O bottlenecks
1.4 High availability
2 I/O Intensive Applications
2.1 Who needs high-performance I/O?
2.2 Scientific applications
2.3 Database applications
2.4 Multimedia applications
2.5 High performance I/O for all: the case for shared disk storage
3 Dissertation Focus
3.1 Problem statement
3.2 Contributions
3.3 Organization
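To make the fine-grain locking idea described in the abstract above concrete, here is a minimal, hypothetical C sketch (it is not code from the dissertation): each writer process takes a POSIX advisory byte-range lock on its own non-overlapping region of a shared file and writes only inside that region, which is essentially the access pattern used in the benchmarks. The file path, the writer index passed on the command line, and the 4 MB region size are illustrative assumptions.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define REGION_SIZE (4 * 1024 * 1024)            /* 4 MB per writer, as in the benchmarks */

int main(int argc, char *argv[])
{
    int rank = (argc > 1) ? atoi(argv[1]) : 0;   /* which writer this process is */
    int fd = open("/mnt/pcfs/shared.dat", O_RDWR | O_CREAT, 0644);  /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    /* Advisory write lock over this writer's disjoint byte range only. */
    struct flock fl = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = (off_t)rank * REGION_SIZE,
        .l_len    = REGION_SIZE,
    };
    if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("fcntl"); return 1; }

    /* Write inside the locked region; writers on other nodes proceed in parallel. */
    char *buf = malloc(REGION_SIZE);
    memset(buf, 'A' + (rank % 26), REGION_SIZE);
    if (pwrite(fd, buf, REGION_SIZE, fl.l_start) != REGION_SIZE)
        perror("pwrite");

    fl.l_type = F_UNLCK;                          /* release the byte-range lock */
    fcntl(fd, F_SETLK, &fl);
    free(buf);
    close(fd);
    return 0;
}

On a cluster file system the same program could be started on several nodes with different writer indices; since the regions are disjoint, the byte-range locks never conflict.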
Recommended publications
  • Elastic Storage for Linux on System Z IBM Research & Development Germany

    Peter Münch, T/L Test and Development, Elastic Storage for Linux on System z, IBM Research & Development Germany. Elastic Storage for Linux on IBM System z. © Copyright IBM Corporation 2014.
    Session objectives
    • This presentation introduces Elastic Storage, based on General Parallel File System technology, that will be available for Linux on IBM System z. Understand the concepts of Elastic Storage and which functions will be available for Linux on System z. Learn how you can integrate and benefit from Elastic Storage in a Linux on System z environment. Finally, get your first impression in a live demo of Elastic Storage.
    Trademarks
    The following are trademarks of the International Business Machines Corporation in the United States and/or other countries: AIX*, FlashSystem, Storwize*, Tivoli*, DB2*, IBM*, System p*, WebSphere*, DS8000*, IBM (logo)*, System x*, XIV*, ECKD, MQSeries*, System z*, z/VM* (* registered trademarks of IBM Corporation). The following are trademarks or registered trademarks of other companies. Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
  • Membrane: Operating System Support for Restartable File Systems Swaminathan Sundararaman, Sriram Subramanian, Abhishek Rajimwale, Andrea C

    Membrane: Operating System Support for Restartable File Systems. Swaminathan Sundararaman, Sriram Subramanian, Abhishek Rajimwale, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau, Michael M. Swift. Computer Sciences Department, University of Wisconsin, Madison.
    Abstract: We introduce Membrane, a set of changes to the operating system to support restartable file systems. Membrane allows an operating system to tolerate a broad class of file system failures and does so while remaining transparent to running applications; upon failure, the file system restarts, its state is restored, and pending application requests are serviced as if no failure had occurred. Membrane provides transparent recovery through a lightweight logging and checkpoint infrastructure, and includes novel techniques to improve performance and correctness of its fault-anticipation and recovery machinery. We tested Membrane with ext2, ext3, and VFAT. Through experimentation, we show that Membrane induces little performance overhead and can tolerate a wide range of file system crashes.
    ...and most complex code bases in the kernel. Further, file systems are still under active development, and new ones are introduced quite frequently. For example, Linux has many established file systems, including ext2 [34], ext3 [35], reiserfs [27], and still there is great interest in next-generation file systems such as Linux ext4 and btrfs. Thus, file systems are large, complex, and under development, the perfect storm for numerous bugs to arise. Because of the likely presence of flaws in their implementation, it is critical to consider how to recover from file system crashes as well. Unfortunately, we cannot directly apply previous work from the device-driver literature to improving file-system fault recovery. File systems, unlike device drivers, are extremely stateful, as they manage vast amounts of both in-memory and persistent data.
  • CS 5600 Computer Systems

    CS 5600 Computer Systems. Lecture 10: File Systems.
    What are We Doing Today?
    • Last week we talked extensively about hard drives and SSDs: how they work, and their performance characteristics.
    • This week is all about managing storage. Disks/SSDs offer a blank slate of empty blocks. How do we store files on these devices, and keep track of them? How do we maintain high performance? How do we maintain consistency in the face of random crashes?
    • Partitions and Mounting
    • Basics (FAT)
    • inodes and Blocks (ext)
    • Block Groups (ext2)
    • Journaling (ext3)
    • Extents and B-Trees (ext4)
    • Log-based File Systems
    Building the Root File System
    • One of the first tasks of an OS during bootup is to build the root file system:
    1. Locate all bootable media: internal and external hard disks, SSDs, floppy disks, CDs, DVDs, USB sticks.
    2. Locate all the partitions on each medium: read the MBR(s), extended partition tables, etc.
    3. Mount one or more partitions: makes the file system(s) available for access.
    The Master Boot Record
    Offset (hex)  Offset (dec)  Description           Size (bytes)
    0x000         0             Bootstrap code area   446
    0x1BE         446           Partition entry #1    16
    0x1CE         462           Partition entry #2    16
    0x1DE         478           Partition entry #3    16
    0x1EE         494           Partition entry #4    16
    0x1FE         510           Magic number          2
                                Total                 512
    Each partition entry includes the starting LBA and the length of the partition.
    [Figure: Disk 1 holds an MBR followed by four partitions (ext3, swap, NTFS, FAT32); Disk 2 holds an MBR followed by a single NTFS partition.]
    Extended Partitions
    • In some cases, you may want >4 partitions.
    • Modern OSes support extended partitions.
    [Figure: an extended partition containing Logical Partition 1 and Logical Partition 2.]
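As an illustration of the MBR layout in the table above, the following small C sketch (not part of the lecture) reads the 512-byte MBR from a raw device, checks the magic number at offset 0x1FE, and prints the starting LBA and sector count of the four 16-byte partition entries that begin at offset 0x1BE. The device path is a placeholder, the in-entry field offsets follow the conventional MBR entry layout, and a little-endian host is assumed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t mbr[512];
    FILE *f = fopen("/dev/sda", "rb");            /* placeholder device */
    if (!f || fread(mbr, 1, sizeof mbr, f) != sizeof mbr) {
        fprintf(stderr, "cannot read MBR\n");
        return 1;
    }
    fclose(f);

    if (mbr[510] != 0x55 || mbr[511] != 0xAA) {   /* magic number at offset 0x1FE */
        fprintf(stderr, "no valid MBR signature\n");
        return 1;
    }

    for (int i = 0; i < 4; i++) {
        const uint8_t *e = mbr + 0x1BE + 16 * i;  /* 16-byte partition entry */
        uint32_t lba, sectors;
        memcpy(&lba, e + 8, 4);                   /* starting LBA (little-endian) */
        memcpy(&sectors, e + 12, 4);              /* partition length in sectors */
        printf("partition %d: type=0x%02x start_lba=%u sectors=%u\n",
               i + 1, e[4], lba, sectors);
    }
    return 0;
}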
  • W4118: Linux File Systems

    W4118: Linux file systems. Instructor: Junfeng Yang. References: Modern Operating Systems (3rd edition), Operating Systems Concepts (8th edition), previous W4118, and OS courses at MIT, Stanford, and UWisc.
    File systems in Linux
    • Linux Second Extended File System (Ext2): What is the EXT2 on-disk layout? What is the EXT2 directory structure?
    • Linux Third Extended File System (Ext3): What is the file system consistency problem? How to solve the consistency problem using journaling?
    • Virtual File System (VFS): What is VFS? What are the key data structures of Linux VFS?
    Ext2
    • The "standard" Linux file system: was the most commonly used before ext3 came out.
    • Uses an FFS-like layout: each FS is composed of identical block groups; allocation is designed to improve locality.
    • inodes contain pointers (32 bits) to blocks: direct, indirect, double indirect, triple indirect.
    • Maximum file size: 4.1 TB (4K blocks). Maximum file system size: 16 TB (4K blocks).
    • On-disk structures defined in include/linux/ext2_fs.h.
    Ext2 Disk Layout
    • Files in the same directory are stored in the same block group.
    • Files in different directories are spread among the block groups.
    [Figure: Ext2 disk layout. Picture from Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2008 Prentice-Hall, Inc.]
    Block Addressing in Ext2
    [Figure: an inode holds twelve "direct" block pointers, plus an indirect block addressing BLKSIZE/4 data blocks, a double indirect block addressing (BLKSIZE/4)^2 data blocks, and a triple indirect block addressing (BLKSIZE/4)^3 data blocks. Picture from Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2008 Prentice-Hall, Inc.]
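The block-addressing scheme above implies a simple back-of-the-envelope calculation for the largest file the ext2 block map can address. The short C sketch below is not from the slides; it assumes 4 KB blocks and 4-byte block pointers, and reproduces the order of magnitude of the ~4 TB figure quoted in the slides.

#include <stdio.h>

int main(void)
{
    const unsigned long long blksize = 4096;       /* 4 KB blocks */
    const unsigned long long ptrs = blksize / 4;   /* 32-bit pointers per indirect block */

    unsigned long long blocks = 12ULL              /* direct pointers in the inode */
                              + ptrs               /* single indirect */
                              + ptrs * ptrs        /* double indirect */
                              + ptrs * ptrs * ptrs;/* triple indirect */

    double bytes = (double)blocks * (double)blksize;
    printf("addressable blocks: %llu\n", blocks);
    printf("addressable data  : %.2f TiB\n",
           bytes / (1024.0 * 1024.0 * 1024.0 * 1024.0));   /* about 4 TiB */
    return 0;
}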
  • Developments in GFS2

    Developments in GFS2. Andy Price <[email protected]>, Software Engineer, GFS2. OSSEU 2018.
    GFS2 recap
    ● Shared storage cluster filesystem
    ● High availability clusters
    ● Uses glocks ("gee-locks") based on DLM locking
    ● One journal per node
    ● Divided into resource groups
    ● Requires clustered lvm: clvmd/lvmlockd
    ● Userspace tools: gfs2-utils
    Not-so-recent developments
    ● Full SELinux support (linux 4.5): previously there was no way to tell other nodes that cached labels are invalid; the workaround was to mount with -o context=<label>; relabeling now causes security label invalidation across the cluster.
    ● Resource group stripe alignment (gfs2-utils 3.1.11): mkfs.gfs2 tries to align resource groups to RAID stripes where possible; it uses libblkid topology information to find stripe unit and stripe width values, and tries to spread resource groups evenly across disks.
    ● Resource group LVBs (linux 3.6), -o rgrplvb (all nodes): LVBs are data attached to DLM lock requests; they allow resource group metadata to be cached and transferred with glocks, decreasing resource group-related I/O.
    ● Location-based readdir cookies (linux 4.5), -o loccookie (all nodes): uniqueness is required by NFS; previously 31 bits of filename hash were used, which caused collisions with large directories (100K+ files); cookies are now based on the location of the directory entry within the directory.
    ● gfs_controld is gone (linux 3.3): dlm now triggers gfs2 journal recovery via callbacks.
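Purely for illustration, the two per-node mount options named above (-o rgrplvb and -o loccookie) could be combined at mount time; the hypothetical C sketch below passes them through the mount(2) system call. The logical volume and mount point paths are made-up placeholders, and on a real cluster the filesystem would normally be mounted on every node by the HA tooling rather than by a one-off program.

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Placeholder device and mount point; the option string uses the options from the slides. */
    if (mount("/dev/clustervg/gfs2vol", "/mnt/gfs2", "gfs2", 0,
              "rgrplvb,loccookie") != 0) {
        perror("mount");
        return 1;
    }
    printf("gfs2 mounted with rgrplvb and loccookie\n");
    return 0;
}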
  • GlobalFS: a Strongly Consistent Multi-Site File System

    GlobalFS: A Strongly Consistent Multi-Site File System. Leandro Pacheco (University of Lugano), Raluca Halalai (University of Neuchâtel), Valerio Schiavoni (University of Neuchâtel), Fernando Pedone (University of Lugano), Etienne Rivière (University of Neuchâtel), Pascal Felber (University of Neuchâtel).
    Abstract: This paper introduces GlobalFS, a POSIX-compliant geographically distributed file system. GlobalFS builds on two fundamental building blocks, an atomic multicast group communication abstraction and multiple instances of a single-site data store. We define four execution modes and show how all file system operations can be implemented with these modes while ensuring strong consistency and tolerating failures. We describe the GlobalFS prototype in detail and report on an extensive performance assessment. We have deployed GlobalFS across all EC2 regions and show that the system scales geographically, providing performance comparable to other state-of-the-art distributed file systems for local commands and allowing for strongly consistent operations over the whole system.
    ...consistency, availability, and tolerance to partitions. Our goal is to ensure strongly consistent file system operations despite node failures, at the price of possibly reduced availability in the event of a network partition. Weak consistency is suitable for domain-specific applications where programmers can anticipate and provide resolution methods for conflicts, or work with last-writer-wins resolution methods. Our rationale is that for general-purpose services such as a file system, strong consistency is more appropriate as it is both more intuitive for the users and does not require human intervention in case of conflicts. Strong consistency requires ordering commands across replicas, which needs coordination among nodes at geographically distributed sites (i.e., regions). Designing strongly consistent distributed systems that provide good...
  • Add-Ons for Red Hat Enterprise Linux

    DATASHEET: ADD-ONS FOR RED HAT ENTERPRISE LINUX. WORKLOAD AND MISSION-CRITICAL PLATFORM ENHANCEMENTS.
    In conjunction with Red Hat® Enterprise Linux®, Red Hat offers a portfolio of Add-Ons to extend the features of your Red Hat Enterprise Linux subscription. Add-Ons to Red Hat Enterprise Linux tailor your application environment to suit your particular computing requirements. With increased flexibility and choice, customers can deploy what they need, when they need it.
    ADD-ON OPTIONS FOR AVAILABILITY MANAGEMENT
    High Availability Add-On
    Red Hat's High Availability Add-On provides on-demand failover services between nodes within a cluster, making applications highly available. The High Availability Add-On supports up to 16 nodes and may be configured for most applications that use customizable agents, as well as for virtual guests. The High Availability Add-On also includes failover support for off-the-shelf applications like Apache, MySQL, and PostgreSQL. When using the High Availability Add-On, a highly available service can fail over from one node to another with no apparent interruption to cluster clients. The High Availability Add-On also ensures data integrity when one cluster node takes over control of a service from another cluster node. It achieves this by promptly evicting nodes from the cluster that are deemed to be faulty, using a method called "fencing" that prevents data corruption.
    Resilient Storage Add-On
    Red Hat's Resilient Storage Add-On enables a shared storage or clustered file system to access the same storage device over a network.
  • Ext3 = Ext2 + Journaling

    FS: a file system is the set of methods and data structures that an operating system uses to store data. Structure of a file system: 1. header → data necessary for the file system to function; 2. structures for organising data on the medium → metadata; 3. data → files and directories. Data structures needed to implement a file system: PCB (Partition Control Block); BCB (Boot Control Block); control structures for file allocation (the i-node table on Linux); directory structures that contain file control blocks; FCB (File Control Block).
    VIRTUAL FILE SYSTEM (VFS)
    Linux supports a large number of file systems (ext2, ext3, XFS, FAT, NTFS...). VFS is an object-oriented way of implementing file systems that allows the user to access all file systems in the same way. How a user addresses the file system: user → API; VFS → file system.
    Linux FS
    Linux treats each file system as an independent hierarchical structure of objects (files and directories) with the root (/) directory at its top. Objects of a Linux file system: header (superblock); i-node table; data blocks; directory blocks; indirect pointer blocks. [Figure: on-disk layout showing the superblock, the i-node table, and the data area.] An i-node describes an object and takes about 128 bytes on disk. There is a trade-off between the size of the i-node table and the speed of the file system: the first 10-12 pointers point directly to data blocks; a single indirection block is used to allocate larger files; for even larger files...
  • FORM 10−K RED HAT INC − RHT Filed: April 30, 2007 (Period: February 28, 2007)

    FORM 10-K, RED HAT INC (RHT). Filed: April 30, 2007 (period: February 28, 2007). Annual report which provides a comprehensive overview of the company for the past year.
    Table of Contents
    PART I: Item 1. Business; Item 1A. Risk Factors; Item 1B. Unresolved Staff Comments; Item 2. Properties; Item 3. Legal Proceedings; Item 4. Submission of Matters to a Vote of Security Holders.
    PART II: Item 5. Market for Registrant's Common Equity, Related Stockholder Matters and Issuer Purchases of Equity Securities; Item 6. Selected Financial Data; Item 7. Management's Discussion and Analysis of Financial Condition and Results of Operations; Item 7A. Quantitative and Qualitative Disclosures About Market Risk; Item 8. Financial Statements and Supplementary Data; Item 9. Changes in and Disagreements with Accountants on Accounting and Financial Disclosure; Item 9A. Controls and Procedures; Item 9B. Other Information.
    PART III: Item 10. Directors, Executive Officers and Corporate Governance; Item 11. Executive Compensation; Item 12. Security Ownership of Certain Beneficial Owners and Management and Related Stockholder Matters; Item 13. Certain Relationships and Related Transactions, and Director Independence; Item 14. Principal Accountant Fees and Services.
    PART IV: Item 15. Exhibits, Financial Statement Schedules. Signatures. EX-21.1 (Subsidiaries of Red Hat); EX-23.1 (Consent of PricewaterhouseCoopers LLP); EX-31.1 (Certification); EX-31.2 (Certification); EX-32.1 (Certification).
    UNITED STATES SECURITIES AND EXCHANGE COMMISSION, Washington, D.C. 20549. FORM 10-K: Annual Report Pursuant to Sections 13 or 15(d) of the Securities Exchange Act of 1934. (Mark One) [x] Annual Report Pursuant to Section 13 or 15(d) of the Securities Exchange Act of 1934, for the fiscal year ended February 28, 2007, OR [ ] Transition Report Pursuant to Section 13 or 15(d) of the Securities Exchange Act of 1934, for the transition period from ___ to ___.
  • Measuring Parameters of the Ext4 File System

    File System Forensics: Measuring Parameters of the ext4 File System. Madhu Ramanathan, Department of Computer Sciences, UW Madison, [email protected]; Venkatesh Karthik Srinivasan, Department of Computer Sciences, UW Madison, [email protected].
    Abstract: Operating systems are rather complex software systems. The File System component of Operating Systems is defined by a set of parameters that impact both the correct functioning as well as the performance of the File System. In order to completely understand and modify the behavior of the File System, correct measurement of those parameters and a thorough analysis of the results is mandatory. In this project, we measure the various key parameters and a few interesting properties of the Fourth Extended File System (ext4). The ext4 has become the de facto File System of Linux kernels 2.6.28 and above and has become the default file system of...
    An extent is a group of physically contiguous blocks. Allocating extents instead of indirect blocks reduces the size of the block map, thus aiding the quick retrieval of logical disk block numbers, and also minimizes external fragmentation. An extent is represented in an inode by 96 bits, with 48 bits to represent the physical block number and 15 bits to represent the length. This allows one extent to have a length of 2^15 blocks. An inode can have at most 4 extents. If the file is fragmented, every extent typically has less than 2^15 blocks. If the file needs more than four extents, either due to fragmentation or due to growth, an extent HTree rooted at the inode is created.
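The figures quoted above (a 15-bit extent length, i.e. up to 2^15 blocks per extent, and at most 4 extents stored directly in the inode) allow a quick back-of-the-envelope calculation of how much file data can be mapped before an extent HTree becomes necessary. The short C sketch below is not from the paper and assumes 4 KB blocks, a common but not universal block size.

#include <stdio.h>

int main(void)
{
    const unsigned long long block_size = 4096;                  /* assumed 4 KB blocks */
    const unsigned long long blocks_per_extent = 1ULL << 15;     /* 15-bit extent length */
    const unsigned long long inline_extents = 4;                 /* extents held in the inode */

    unsigned long long per_extent = blocks_per_extent * block_size;
    unsigned long long inline_total = inline_extents * per_extent;

    printf("max data per extent       : %llu MiB\n", per_extent >> 20);    /* 128 MiB */
    printf("max data in inline extents: %llu MiB\n", inline_total >> 20);  /* 512 MiB */
    return 0;
}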
  • Oracle® Linux 7 Managing File Systems

    Oracle® Linux 7 Managing File Systems F32760-07 August 2021 Oracle Legal Notices Copyright © 2020, 2021, Oracle and/or its affiliates. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software" or "commercial computer software documentation" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract.
  • Migrating from Netware to OES 2 Linux

    Best Practice Guide (www.novell.com): Migrating from NetWare to OES 2, prepared for the Novell OES 2 User Community. Published: November 2007.
    Disclaimer: Novell, Inc. makes no representations or warranties with respect to the contents or use of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose.
    Trademarks: Novell is a registered trademark of Novell, Inc. in the United States and other countries. All third-party trademarks are property of their respective owners.
    Copyright 2007 Novell, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of Novell, Inc. Novell, Inc., 404 Wyman, Suite 500, Waltham, Massachusetts 02451, USA. Prepared by Novell Services and User Community.
    The latest version of this document, along with other OES 2 Linux Best Practice Guides, can be found with the NetWare to Linux Migration Resources at: http://www.novell.com/products/openenterpriseserver/netwaretolinux/view/all/-9/tle/all
    Contents: Acknowledgments; Getting Started; Why OES 2?; Which Services Are Right for OES 2?