ZFS — and Why You Need It
Nelson H. F. Beebe and Pieter J. Bowman
University of Utah, Department of Mathematics
155 S 1400 E RM 233, Salt Lake City, UT 84112-0090, USA
Email: [email protected], [email protected]
10 November 2016

What is ZFS?

The Zettabyte File System (ZFS) was developed by Sun Microsystems from 2001 to 2005, with an open-source release in 2005 (whence the OpenZFS project):

- SI prefix zetta ≡ 1000^7 = 10^21
- Sun Microsystems acquired in 2010 by Oracle [continuing ZFS]
- ground-up, brand-new filesystem design
- exceptionally clean and well-documented source code
- enormous capacity:
  - 2^8 ≈ 255 bytes per filename
  - 2^48 ≈ 10^14 files per directory
  - 2^64 ≈ 10^19 bytes per file [16 exabytes]
  - 2^78 ≈ 10^23 ≈ 1/2 Avogadro's number bytes per volume
- disks form a pool of storage that is always consistent on disk
- disk blocks in the pool are allocatable to any filesystem using the pool
- [relatively] simple management
- optional dynamic quota adjustment
- ACLs, snapshots, clones, compression, encryption, deduplication, case-[in]sensitive filenames, Unicode filenames, ...

Why use ZFS?

- ZFS provides a stable, flexible filesystem of essentially unlimited capacity [in current technology] for decades to come
- we have run ZFS under Solaris for 11+ years, with neither data loss nor filesystem corruption
- easy to implement n-way live mirroring [n up to 12 (??limit??)]
- snapshots, even in large filesystems, take only a second or so
- optional hot spares in each storage pool
- with ZFS zpool import, a filesystem can be moved to a different server, even one running a different O/S, as long as ZFS feature levels permit [a sketch follows the storage-pool slides below]
- ZFS filesystems can be exported via FC, iSCSI, NFS (v2-v4), or SMB/CIFS to other systems, including those without native support for ZFS
- blocksize can be set in powers of two from 2^9 = 512 to 2^17 = 128K, or, with the large-blocks feature, to 2^20 = 1M; the default on all systems is 128K
- small files are stored in 512-byte sub-blocks of disk blocks

Where do we run ZFS?

- Solaris: main filesystem for 10,000+ users
- Dyson: fork of illumos and OpenSolaris with Debian GNU toolset
- FreeBSD
- FreeNAS and TrueNAS: products of iXsystems
- GhostBSD: fork of FreeBSD 10.3
- GNU/Linux
  - CentOS: unsupported Red Hat GNU/Linux
  - Debian GNU/Linux
  - Ubuntu
- Hipster: rolling update of OpenIndiana
- Illumian: fork of OpenSolaris 11 illumos
- illumos
- Mac OS X: from OpenZFS, not from Apple
- OmniOS: fork of OpenSolaris 11 illumos
- OpenIndiana: fork of OpenSolaris 11 illumos
- PC-BSD: fork of FreeBSD 10.3
- Tribblix: fork of illumos, OpenIndiana, and OpenSolaris
- TrueOS: rolling-update successor to PC-BSD
- XStreamOS: fork of OpenSolaris 11 illumos

zfs subcommands

allow, clone, create, destroy, get, groupspace, inherit, mount, promote, receive, rename, rollback, send, set, share, snapshot, unallow, unmount, upgrade, userspace

    # zfs snapshot tank/ROOT/initial@auto-`date +%Y-%m-%d`
    # zfs list -t snapshot
    NAME                                 USED  AVAIL  REFER  MOUNTPOINT
    tank/ROOT/initial@auto-2016-09-13    136M      -  16.8G  -
    tank/ROOT/initial@auto-2016-09-19    304K      -  16.9G  -
    ...
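The subcommands above combine naturally for point-in-time recovery. A minimal sketch, assuming a hypothetical dataset tank/home/jones and a hypothetical snapshot name; in practice one would either roll back or clone, not both:

    [dataset and snapshot names below are hypothetical]
    # zfs snapshot tank/home/jones@before-upgrade    [freeze the current state]
    # zfs rollback tank/home/jones@before-upgrade    [discard all changes made since the snapshot]
    # zfs clone tank/home/jones@before-upgrade tank/home/jones-restore
    # zfs promote tank/home/jones-restore            [make the writable clone independent of its origin snapshot]

zfs rollback reverts to the most recent snapshot; rolling back past newer snapshots destroys them (the -r option).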
zpool subcommands

add, attach, clear, create, destroy, detach, export, get, history, import, iostat, list, offline, online, remove, replace, scrub, set, status, upgrade

    # zpool iostat -v
                   capacity     operations    bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    tank        21.7G  56.3G      1      0  39.8K  3.58K
      ada0p2    21.7G  56.3G      1      0  39.8K  3.58K
    ----------  -----  -----  -----  -----  -----  -----

ZFS storage pools

- zero or more hot spares can be allocated at pool-creation time, or later:

    # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0   [two disks in pool with one spare]
    # zpool replace tank c0t0d0 c0t3d0                      [replace bad disk c0t0d0]
    # zpool remove tank c0t2d0                              [remove hot spare]

- hot spares can be shared across multiple pools
- easy expansion: zpool add pool vdev
- disk-size agnostic [though best if pool members are identical]
- disk-vendor agnostic
- pools can grow, but cannot shrink
- optional quotas provide an additional level of usage control within a pool
- quotas can oversubscribe pool storage
- quotas can grow or shrink:

    # zfs set quota=50G saspool01/students

ZFS data replication and recovery

- none (no media-failure protection)
- stripe over n disks (fast, but no media-failure protection)
- mirror: recover from failure of 1 of 2 disks
- triple mirror: recover from failure of 2 of 3 disks
- RAID Z1: recover from failure of 1 of 4 or more disks
- RAID Z2: recover from failure of 2 of 9 or more disks
- RAID Z3: recover from failure of 3 of many disks

Recoverable data errors result in replacement of the erroneous block, making ZFS self-healing.

ZFS performance and reliability

- optional compression with the LZJB, GZIP, GZIP-2, or GZIP-9 algorithms
- compression may increase performance by reducing data transfer, as long as enough spare CPU cycles are available
- copy-on-write policy means that existing data blocks are never overwritten: once the new blocks are safely in place, old blocks are freed for re-use if they are not in a snapshot
- supports n-way mirrors: a mirror of n disks can lose up to n − 1 disks before data loss
- supports striping and RAID-Z[1-3]
- internal per-block checksums [no hardware RAID needed or desirable: JBOD is good enough, because ZFS is better than RAID]
- ZFS is being optimized for SSDs, which suffer from severe wear limits that ultimately reduce disk and pool capacity
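A minimal sketch tying the pool and reliability features above together; the pool name tank2 and the FreeBSD-style device names da1 through da7 are hypothetical placeholders:

    [pool and device names below are hypothetical]
    # zpool create tank2 raidz2 da1 da2 da3 da4 da5 da6 spare da7   [double parity: survives loss of any two data disks]
    # zfs set compression=gzip tank2   [compress newly written blocks; existing blocks are unchanged]
    # zfs get compressratio tank2      [report the achieved compression ratio]
    # zpool scrub tank2                [read every block and verify it against its checksum]
    # zpool status -v tank2            [show scrub progress and any repaired errors]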
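The pool portability noted on the "Why use ZFS?" slide is, mechanically, just an export on the old server and an import on the new one; a minimal sketch, again with a hypothetical pool named tank:

    # zpool export tank   [old server: unmount the pool's filesystems and mark the pool exported]
    # zpool import        [new server: scan attached disks for importable pools]
    # zpool import tank   [import by name; -f forces import of a pool that was not cleanly exported]
    # zpool upgrade -v    [list the ZFS versions and features this system supports]

The import succeeds only if the destination's ZFS implementation supports every feature enabled on the pool, which is the feature-level caveat mentioned earlier.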
ZFS checksums

- Unlike most other filesystems, each data and metadata block of a ZFS filesystem has a SHA-256 checksum stored in the pointer to the block, not in the block itself, so it is less subject to corruption.

[Figure: the ZFS approach to checksums can detect more types of errors than the traditional approach. From the Architectural Overview of the Oracle ZFS Storage Appliance.]

- Checksums on new blocks are recalculated, not copied, and errors can be corrected if there is sufficient redundancy (mirror or RAID-Zn replication).

ZFS snapshots

- fast: one or two seconds, independent of filesystem size
- unlimited number of snapshots
- snapshots are read-only
- snapshots are user visible [e.g., /.zfs/snapshot/auto-2016-10-11/home/jones/mail]
- /.zfs is normally hidden from directory-listing commands [management configurable]
- disk blocks captured in a snapshot remain in use until the snapshot is destroyed
- removing recent large files on a disk-full condition may free no space at all: instead, the oldest snapshots need to be removed
- a snapshot can be cloned to create a new writable filesystem
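Two common snapshot workflows, sketched with hypothetical dataset, snapshot, pool, and host names: reclaiming space by destroying the oldest snapshot, and replicating a snapshot with send/receive:

    [dataset, snapshot, pool, and host names below are hypothetical]
    # zfs list -r -t snapshot -o name,used -s creation tank/home    [snapshots, oldest first]
    # zfs destroy tank/home@auto-2015-01-01                         [free blocks held only by that snapshot]
    # zfs send tank/home@auto-2016-10-11 | zfs receive backup/home  [copy into another local pool]
    # zfs send tank/home@auto-2016-10-11 | ssh otherhost zfs receive tank/home

An incremental send (zfs send -i older-snap newer-snap) transfers only the blocks that changed between two snapshots, so routine off-machine replication stays cheap.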