File System Usage Patterns


Independent Work Report, Fall 2013 (Revised)

Abstract

This paper studies the types of data that users store on their file systems and their patterns of interaction with that data. I compare the metrics gathered for this paper with those of previous studies to demonstrate the evolution of file system usage patterns and to predict future developments. In addition to the comparison, I summarize previous research on users’ desires for file system functionality. Finally, I suggest a hybrid cloud-local file system that will improve the file system user experience. The data presented in this paper and in previous ones demonstrates that the hypothesized system is better suited to users’ present and future usage patterns than current file systems are, and that a prototype should be built for further testing.

Contents

1 Introduction
2 Data Collection
  2.1 Sample Selection
  2.2 Infrastructure
3 Research On Local Storage File Systems
  3.1 File Sizes
  3.2 File Ages
  3.3 File Types
  3.4 File Count
  3.5 File System Sizes
4 Previous Research on Cloud Storage
  4.1 Consumer Cloud Usage
  4.2 Enterprise Cloud Usage
5 The Hybrid Cloud-Local File System
  5.1 File System Specifics
  5.2 Do Users Want It?
6 Conclusion

1 Introduction

Over previous decades, file system usage pattern research has guided the development of file systems. Previous studies examined various aspects of file systems, including rates of data storage and retrieval, the types of data stored, and the amount stored. [5, 22, p. 1, p. 93] Designers of industry-standard file systems such as Microsoft’s NTFS used this data to make feature inclusion and implementation decisions. [22, p. 103-4] However, these papers are not the final word in file system research. They are old, and user activity patterns may have changed since their publication. Additionally, the researchers limited their analyses to file systems that keep all data within the physical boundary of a single computer’s case. File systems currently under development, particularly ones that utilize decentralized storage technologies, require guidance from research that does not have these limitations.

This paper addresses the age issue by combining previously existing data with a new study to analyze the development of file system usage patterns over the past 15 years. Computers have gained faster processors and larger hard drives, and Internet speeds have increased about 50% per year over the last decade. [16] In response to these changes, users may be storing different types of media on their systems, or they may be keeping all of their data on servers that they connect to over the Internet. The data in my study shows current tendencies in data storage.

This paper also offers a prediction of future file systems. I suggest an improved file system that utilizes both hard drive and Internet storage, a “hybrid cloud-local file system”. Such a system will improve the user experience by optimizing for the file access patterns most commonly found in file system usage data.
For the rest of this paper, I shall refer to a collection of remote servers as “the cloud” and storage solutions built on these computers as “cloud-based storage”.

This paper has four components:

1. The first section explains the data collection procedures. This will allow future researchers to collect data more easily and to reproduce my results.
2. The second section summarizes previous research on local storage file systems and compares their results to my data. This will demonstrate the evolution and current state of file system usage patterns.
3. The third section reviews research on consumer sentiment toward cloud storage. This research will show users’ opinions of non-local file system technologies.
4. The final section analyzes this and other papers’ data in order to evaluate the validity of the hypothesized hybrid cloud-local file system.

2 Data Collection

I collected the following data from the file systems of 24 subjects: total capacity, free space, and distribution of file sizes. Additionally, for each file type that consumed at least 0.1% of the used space on a file system, I collected the average size, total size, and average time since last access of files of that type.

2.1 Sample Selection

I attempted to select a population that was representative of all computer users. Previous studies had focused on limited populations such as Microsoft employees. [1, p. 1] In order to get a sufficiently broad sample, I started by randomly selecting students from the entire undergraduate population of a university as of December 2013.

The data that resulted from these subjects may be biased due to a low survey response rate of 2.93%: 11 students participated out of the 375 who were invited. It is plausible that only users with particular usage patterns would be willing to participate in an experiment that involves running a program that collects data from their computer and reports the results to a central server. While this low response rate is troubling, there are no superior methods for data collection; one cannot ethically collect information from subjects who are unwilling to participate. Nevertheless, future researchers should attempt to achieve a higher response rate, as I was forced to select additional subjects using inferior, less random methods.

I directly recruited other users to participate in order to gather enough data to draw meaningful conclusions. I invited 13 friends and family members who were not part of the random sample. Their data may be biased, as my network of acquaintances may not represent the average user. However, the recruited users come from a diverse set of age groups, from 20 to 70 years old, and occupations, such as student, engineer, and lawyer. The resulting data also appears to be representative of the broader population of computer users: it includes all major operating systems (Windows, Linux, and Macintosh) and, as seen below, seems plausible when compared to previous papers. The 24-user sample is not optimal, but it is large enough and diverse enough to support meaningful conclusions about the average computer user.

2.2 Infrastructure

Figure 1: Data Collection Infrastructure

As shown in Figure 1, the experiment has two main technical components: a client and a server. The client runs on the users’ computers and collects information about a user’s file system. The server runs a program called Splunk which compiles the clients’ data.
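To make the client’s collection step concrete, the following simplified sketch walks a directory tree and aggregates, for each file extension, the file count, total size, and most recent access time, along with the capacity and free space of the volume that holds the scanned directory. It is an illustration only: it uses the Java standard library rather than the Sigar bindings described below, and the class and variable names are placeholders rather than those of the actual profiler.

    // Simplified, illustrative sketch of the client-side collection step (not the actual profiler).
    import java.io.IOException;
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.util.HashMap;
    import java.util.Map;

    public class ProfilerSketch {
        static class TypeStats { long count; long totalBytes; long newestAccessMillis; }

        public static void main(String[] args) throws IOException {
            // Scan the user's home directory unless another root is given.
            Path root = Paths.get(args.length > 0 ? args[0] : System.getProperty("user.home"));
            Map<String, TypeStats> byType = new HashMap<>();

            Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                    String name = file.getFileName().toString();
                    int dot = name.lastIndexOf('.');
                    String ext = (dot >= 0) ? name.substring(dot + 1).toLowerCase() : "(none)";
                    TypeStats s = byType.computeIfAbsent(ext, k -> new TypeStats());
                    s.count++;
                    s.totalBytes += attrs.size();
                    s.newestAccessMillis = Math.max(s.newestAccessMillis,
                            attrs.lastAccessTime().toMillis());
                    return FileVisitResult.CONTINUE;
                }

                @Override
                public FileVisitResult visitFileFailed(Path file, IOException exc) {
                    // Skip unreadable files rather than aborting the whole walk.
                    return FileVisitResult.CONTINUE;
                }
            });

            // Overall capacity and free space of the volume containing the scanned root.
            FileStore store = Files.getFileStore(root);
            System.out.println("capacity=" + store.getTotalSpace()
                    + " free=" + store.getUsableSpace());

            // Per-extension aggregates.
            byType.forEach((ext, s) -> System.out.println(
                    ext + " count=" + s.count + " totalBytes=" + s.totalBytes
                    + " newestAccessMillis=" + s.newestAccessMillis));
        }
    }

In this sketch the results are simply printed; the actual profiler instead reports its measurements to the central server described above and relies on Sigar for the OS-specific portions, as explained next.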
In order to collect data from a sample that is representative of average computer users, as many people and computers as possible must be able to run the profiler. The profiler is easy to use so that less technical users can provide data, and it runs on all major operating systems: Windows, Linux, and Macintosh. The multiplatform requirement presents difficulties, as the profiler must also collect OS-specific information, such as the file path format. I made several language and library choices to accomplish this combination of ease of use, OS independence, and access to OS-specific details.

The profiler uses the Java programming language in order to be able to run on any computer. Additionally, it uses the JWrapper program and the Sigar libraries. JWrapper converts Java jar files into native executables for Windows, Linux, and Macintosh computers. [19] Since the Macintosh executable did not function properly, those users ran the jar using a bash script. Nontechnical users can run these executables and scripts with only a single click. The Sigar library provides Java with APIs for examining the file system. These libraries are natively compiled for all targeted operating systems and are accessed through Java functions that provide OS-specific information while allowing the programmer to write OS-agnostic code. [21] Using these libraries, I wrote a profiler that is easy to download and execute, runs on any platform, requires minimal user interaction, and reports back to a central server.

I created a data collection server using the Splunk software running on an Amazon EC2 instance. EC2 instances are easy to maintain, as Amazon handles the issues of server uptime and router configuration. The Splunk software automatically listens for incoming packets on a particular port and then provides a report of the data in those packets. I used this capability to track each client’s report and then print out a file containing all of the data. The server and profiler pair provided the necessary infrastructure for my experiment.

3 Research On Local Storage File Systems

In this section, I will analyze current statistics on file sizes, file ages, file types, file counts, and file system sizes, and the evolution of these values over the last 15 years. In order to do this, I will compare my data with that of four academic papers from the 1990s and 2000s. I will first examine A Five-Year Study of File-System Metadata (referred to as the Five-Year Study paper). This paper analyzes the computers of Microsoft employees from 2000 to 2004, including file size, age, and type. File System Usage in Windows NT 4.0 (referred to as the NT paper) is a 1998 paper that measures the types and sizes of files stored in NTFS file systems as well as the file open, read, and write patterns of the Windows NT 4.0 operating system. A Large-Scale Study of File-System Contents (referred to as the Large Scale paper) is another 1998 study of Microsoft employees’ computers. This paper investigates file and directory properties and how they differ depending on the occupation of the computer user. Finally, A Study of Irregularities in File-Size Distributions (referred to as the Irregularities paper) records the types of files used by Windows, Linux, and Macintosh users at Harvey Mudd College in 2001.
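Before turning to the individual metrics, the short sketch below illustrates one way that a file-size distribution of the kind compared in this section can be summarized: sizes are binned into power-of-two buckets and counted. The binning scheme, class name, and sample values are assumptions made purely for illustration; they are not taken from the profiler or from the cited papers.

    // Illustrative only: summarize a file-size distribution with power-of-two buckets.
    import java.util.TreeMap;

    public class FileSizeHistogram {
        // Lower bound of the power-of-two bucket that contains a size in bytes.
        static long bucketFloor(long sizeBytes) {
            return (sizeBytes == 0) ? 0 : Long.highestOneBit(sizeBytes);
        }

        public static void main(String[] args) {
            // Placeholder sizes standing in for the per-file data collected by the profiler.
            long[] sampleSizes = {0, 512, 3_000, 70_000, 70_000, 2_000_000, 950_000_000};

            TreeMap<Long, Long> histogram = new TreeMap<>();
            for (long size : sampleSizes) {
                histogram.merge(bucketFloor(size), 1L, Long::sum);
            }

            // The bucket with floor 0 holds zero-byte files.
            histogram.forEach((floor, count) ->
                    System.out.printf("[%d, %d) bytes: %d files%n",
                            floor, floor == 0 ? 1 : floor * 2, count));
        }
    }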