
File system fun

• File systems: traditionally hardest part of OS
  - More papers on FSes than any other single topic
• Main tasks of file system:
  - Don't go away (ever)
  - Associate bytes with name (files)
  - Associate names with each other (directories)
  - Can implement file systems on disk, over network, in memory, in non-volatile RAM (NVRAM), on tape, w/ paper
  - We'll focus on disk and generalize later
• Today: files, directories, and a bit of performance

Why disks are different

• Disk = first state we've seen that doesn't go away
  - So: where all important state ultimately resides
  [Figure: after a CRASH, memory contents are lost; disk contents survive]
• Slow (milliseconds access vs. nanoseconds for memory)
  [Graph: normalized speed vs. year; processor speed ~2x every 18 months, disk access time only ~7%/yr]
• Huge (100–1,000x bigger than memory)
  - How to organize large collection of ad hoc information?
  - Taxonomies! (Basically FS = general way to make these)

Disk vs. Memory

                      Disk          MLC NAND Flash  DRAM
   Smallest write     sector        sector          byte
   Atomic write       sector        sector          byte/word
   Random read        8 ms          75 µs           50 ns
   Random write       8 ms          300 µs*         50 ns
   Sequential read    100 MB/s      250 MB/s        > 1 GB/s
   Sequential write   100 MB/s      170 MB/s*       > 1 GB/s
   Cost               $0.04/GB      $0.65/GB        $10/GiB
   Persistence        Non-volatile  Non-volatile    Volatile

   *Flash write performance degrades over time

Disk review

• Disk reads/writes in terms of sectors, not bytes
  - Read/write single sector or adjacent groups
• How to write a single byte? "Read-modify-write" (sketched below)
  - Read in sector containing the byte
  - Modify that byte
  - Write entire sector back to disk
  - Key: if cached, don't need to read in
• Sector = unit of atomicity
  - Sector write done completely, even if crash in middle (disk saves up enough momentum to complete)
• Larger atomic units have to be synthesized by OS
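A minimal C sketch of the read-modify-write sequence; disk_read/disk_write are hypothetical whole-sector primitives, not a real driver API:

    #include <stdint.h>

    #define SECTOR_SIZE 512

    /* Assumed primitives: transfer exactly one sector at a time. */
    void disk_read(uint64_t sector, uint8_t buf[SECTOR_SIZE]);
    void disk_write(uint64_t sector, const uint8_t buf[SECTOR_SIZE]);

    /* Write a single byte at an absolute disk byte offset. */
    void write_byte(uint64_t byte_offset, uint8_t value)
    {
        uint8_t buf[SECTOR_SIZE];

        disk_read(byte_offset / SECTOR_SIZE, buf);   /* read enclosing sector */
        buf[byte_offset % SECTOR_SIZE] = value;      /* modify the one byte   */
        disk_write(byte_offset / SECTOR_SIZE, buf);  /* write it all back     */
    }

If the sector is already in the buffer cache, the initial disk_read can be skipped entirely; that is the point of the "if cached" bullet above.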
Some useful trends

• Disk bandwidth and cost/bit improving exponentially
  - Similar to CPU speed, memory size, etc.
• Seek time and rotational delay improving very slowly
  - Why? Require moving physical object (disk arm)
• Disk accesses a huge system bottleneck & getting worse
  - Bandwidth increase lets system (pre-)fetch large chunks for about the same cost as a small chunk
  - Trade bandwidth for latency if you can get lots of related stuff
  - How to get related stuff? Cluster together on disk
• Desktop memory size increasing faster than typical workloads
  - More and more of workload fits in file cache
  - Disk traffic changes: mostly writes and new data
  - Doesn't necessarily apply to big server-side jobs

Files: named bytes on disk

• File abstraction:
  - User's view: named sequence of bytes
  - FS's view: collection of disk blocks
  - File system's job: translate name & offset to disk blocks: {file, offset} → FS → disk address
• File operations:
  - Create a file, delete a file
  - Read from file, write to file
• Want: operations to have as few disk accesses as possible & have minimal space overhead (group related things)

What's hard about grouping blocks?

• Like page tables, file system metadata are simply data structures used to construct mappings
  - Page table: map virtual page # to physical page #: 23 → page table → 33
  - File metadata: map byte offset to disk block address: 512 → Unix inode → 8003121
  - Directory: map name to disk address or file #: foo.c → directory → 44

FS vs. VM

• In both settings, want location transparency
• In some ways, FS has easier job than VM:
  - CPU time to do FS mappings not a big deal (= no TLB)
  - Page tables deal with sparse address spaces and random access; files often denser (0 ... filesize−1), ~sequentially accessed
• In some ways FS's problem is harder:
  - Each layer of translation = potential disk access
  - Space a huge premium! (But disk is huge?!?!) Reason? Cache space never enough; amount of data you can get in one fetch never enough
  - Range very extreme: many files <10 KB, some files many GB

Some working intuitions

• FS performance dominated by # of disk accesses
  - Say each access costs ~10 milliseconds
  - Touch the disk 100 extra times = 1 second
  - Can do a billion ALU ops in same time!
• Access cost dominated by movement, not transfer: seek time + rotational delay + # bytes / disk bandwidth
  - 1 sector: 5 ms + 4 ms + 5 µs (≈ 512 B / (100 MB/s)) ≈ 9 ms
  - 50 sectors: 5 ms + 4 ms + 0.25 ms = 9.25 ms
  - Can get 50x the data for only ~3% more overhead!
• Observations that might be helpful:
  - All blocks in file tend to be used together, sequentially
  - All files in a directory tend to be used together
  - All names in a directory tend to be used together

Common addressing patterns

• Sequential:
  - File data processed in sequential order
  - By far the most common mode
  - Example: editor writes out new file, compiler reads in file, etc.
• Random access:
  - Address any block in file directly without passing through predecessors
  - Examples: data set for demand paging, databases
• Keyed access:
  - Search for block with particular values
  - Examples: associative database, index
  - Usually not provided by OS

Problem: how to track file's data

• Disk management:
  - Need to keep track of where file contents are on disk
  - Must be able to use this to map byte offset to disk block
  - Structure tracking a file's sectors is called an index node or inode
  - Inodes must be stored on disk, too
• Things to keep in mind while designing file structure:
  - Most files are small
  - Much of the disk is allocated to large files
  - Many of the I/O operations are made to large files
  - Want good sequential and good random access (what do these require?)

Straw man: contiguous allocation

• "Extent-based": allocate files like segmented memory
  - When creating a file, make the user pre-specify its length and allocate all space at once
  - Inode contents: location and size
• Example: IBM OS/360
• Pros?
  - Simple, fast access, both sequential and random
• Cons? (Think of corresponding VM scheme)
  - External fragmentation

Linked files

• Basically a linked list on disk
  - Keep a linked list of all free blocks
  - Inode contents: a pointer to file's first block
  - In each block, keep a pointer to the next one
• Examples (sort-of): Alto, TOPS-10, DOS FAT
• Pros?
  - Easy dynamic growth & sequential access, no fragmentation
• Cons?
  - Linked lists on disk a bad idea because of access times
  - Pointers take up room in block, skewing alignment

Example: DOS FS (simplified)

• Uses linked files. Cute: links reside in a fixed-size "file allocation table" (FAT) rather than in the blocks
  [Figure: the directory (itself stored in block 5) maps each name to its first block: a: 6, b: 2. The 16-bit FAT entries chain the blocks: file a occupies 6 → 4 → 3 (eof); file b occupies 2 → 1 (eof); entry 0 is free]
• Still do pointer chasing, but can cache entire FAT so it can be cheap compared to disk access (traversal sketched below)
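A sketch of FAT chain traversal in C, assuming the entire 16-bit FAT has been cached in memory; the names (fat, FAT_EOF, fat_lookup) and the eof encoding are illustrative, not the exact DOS on-disk format:

    #include <stdint.h>

    #define FAT_ENTRIES 65536
    #define FAT_EOF     0xFFFFu  /* assumed end-of-chain marker */

    /* Assumption: whole FAT read into memory at mount time. */
    extern uint16_t fat[FAT_ENTRIES];

    /* Return the disk block holding block #n of the file whose first
     * block (taken from its directory entry) is `start`, or FAT_EOF
     * if the file has fewer than n+1 blocks. */
    uint16_t fat_lookup(uint16_t start, unsigned n)
    {
        uint16_t blk = start;
        while (n-- > 0) {
            if (blk == FAT_EOF)
                return FAT_EOF;  /* chain ended early */
            blk = fat[blk];      /* pointer chase, but in memory */
        }
        return blk;
    }

Note that random access to block n still costs n pointer chases; caching the FAT turns each chase into an in-memory array lookup rather than a disk access, but the linked structure remains.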
FAT discussion

• Entry size = 16 bits
  - What's the maximum size of the FAT? 65,536 entries
  - Given a 512 byte block, what's the maximum size of FS? 32 MiB
  - One solution: go to bigger blocks. Pros? Cons?
• Space overhead of FAT is trivial:
  - 2 bytes / 512 byte block = ~0.4% (compare to Unix)
• Reliability: how to protect against errors?
  - Create duplicate copies of FAT on disk
  - State duplication a very common theme in reliability
• Bootstrapping: where is root directory?
  - Fixed location on disk

Indexed files

• Each file has an array holding all of its block pointers
  - Just like a page table, so will have similar issues
  - Max file size fixed by array's size (static or dynamic?)
  - Allocate array to hold file's block pointers on file creation
  - Allocate actual blocks on demand using free list
• Pros?
  - Both sequential and random access easy
• Cons?
  - Mapping table requires large chunk of contiguous space ... same problem we were trying to solve initially

Multi-level indexed files (old BSD FS)

• Solve problem of first block access slow
• inode = 14 block pointers + "stuff"
• Issues same as in page tables
  - Large possible file size = lots of unused entries
  - Large actual size? Table needs large contiguous disk chunk
• Solve identically: index small regions with an index array, index these arrays with another array, ... (lookup sketched below)
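A sketch of the multi-level lookup in C. Assumptions for illustration: 512-byte blocks, 32-bit block pointers, and an inode whose 14 pointers split as 12 direct plus one single-indirect and one double-indirect (the real old BSD split differs slightly):

    #include <stdint.h>

    #define BSIZE   512
    #define NPTRS   (BSIZE / sizeof(uint32_t))  /* 128 pointers per indirect block */
    #define NDIRECT 12                          /* assumed: first 12 are direct    */

    struct inode {
        uint32_t ptr[14];  /* 12 direct + 1 single-indirect + 1 double-indirect */
        /* ... plus "stuff": size, owner, timestamps, ... */
    };

    /* Assumed primitive: read indirect block `blkno` and return its idx'th
     * pointer (one potential disk access per call). */
    uint32_t read_ptr(uint32_t blkno, uint32_t idx);

    /* Map file block #n to a disk block address. */
    uint32_t bmap(const struct inode *ip, uint32_t n)
    {
        if (n < NDIRECT)                       /* direct: 0 extra accesses */
            return ip->ptr[n];
        n -= NDIRECT;
        if (n < NPTRS)                         /* single-indirect: 1 extra */
            return read_ptr(ip->ptr[12], n);
        n -= NPTRS;                            /* double-indirect: 2 extra */
        return read_ptr(read_ptr(ip->ptr[13], n / NPTRS), n % NPTRS);
    }

Small files resolve with no extra accesses, and each added level of indirection costs at most one more potential disk access, which is exactly the "each layer of translation = potential disk access" concern from the FS vs. VM slide.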