
APPENDIX: ARTIFACT DESCRIPTION

A. Description

ZFS is a widely used file system, for example as a storage backend for HPC systems. Speeding up recovery and improving the reliability of ZFS is therefore crucial for HPC storage systems. This artifact presents the test platform configuration, code, execution instructions, and experiment workflow that can be used to reproduce the experimental results.

B. Test Platform

• Hardware: Intel Xeon E5620, 120 GB DDR3 DRAM (set to 70 GB during I/O benchmarking). Each JBOD server has 45 x 4 TB Hitachi 3.5" SATA drives in a Supermicro SC847 JBOD enclosure, connected to the storage server by an SFF-8088 cable to an LSI MegaRAID SAS controller. Each drive is configured in RAID 0 Write Through mode to minimize interference from the RAID card.
• Runtime: CentOS 7.5 with kernel v3.10.0-862.
• dRAID: Source code is available at https://github.com/thegreatgazoo/zfs (draid branch).
• Software dependence: dRAID is based on ZFS 0.7 and has not been merged into the latest official release. Building dRAID requires spl, which is available at ZFS' official GitHub, https://github.com/zfsonlinux/spl (spl-0.7-release branch).
• Benchmark: We use IOZone v3.482 to test dRAID and raidz I/O performance.

C. Installation

Ensure that spl is installed properly before building dRAID. Clone the source code from ZFS' official GitHub and use the spl-0.7 release.

$ git clone https://github.com/zfsonlinux/spl
$ git checkout spl-0.7-release
$ ./autogen.sh
$ ./configure --enable-debug
$ make && make install

To install dRAID, clone the source code and check out the draid branch. Follow dRAID's HOWTO at https://github.com/zfsonlinux/zfs//dRAID-HOWTO for detailed instructions.

$ git clone https://github.com/thegreatgazoo/zfs.git
$ git checkout draid
$ ./autogen.sh
$ ./configure --enable-debug
$ make && make install

To use dRAID, load the ZFS module with the following options:

$ modprobe zfs \
    zfs_vdev_scrub_max_active=10 \
    zfs_vdev_async_write_min_active=4 \
    draid_debug_lvl=5

D. Configure and Create dRAID

A configuration file must be created before creating a dRAID pool. Use the draidcfg command to specify the number of data blocks d, parity blocks p, and spare drives s; they need to match the total number of drives n. The dRAID storage pool tank can then be created using the configuration file. Similar to the creation of RAIDZ, we need to specify the redundancy level of dRAID, between draid1 and draid3.

$ draidcfg -p 3 -d 8 -s 3 -n 36 draid3_36.nvl
$ zpool create -f tank draid3 cfg=draid3_36.nvl

E. Performance Benchmarking

To compare with ZFS raidz and mirroring, we create raidz and dRAID pools separately using the same redundancy and equivalent configurations.

1) I/O benchmarking: Our scripts use the dd command to gradually write random data to the zpool, and then save the measured performance metrics.

for i in {1..100}
do
    dd if=/dev/urandom of=/tank/`date +%m%d%H%M%S`.dat bs=10M count=27000
done

IOZone generates and measures a variety of file operations. Place the IOZone executable in the zpool and run the benchmark; the results are saved in an Excel file. Note that with CentOS 7.5 (kernel v3.10.0-862), running ls in a ZFS directory fails with the error "ls: reading directory .: Not a directory". Switching to kernel v3.10.0-693 fixes the problem.

$ ./iozone -Rab iozone_draid3_36_hdd.xls -y 16384 -n 16384 -g 140G

2) Error injection: The partition table and RAID configuration files are usually located in the first and last sections of a disk drive. We overwrite these areas on the target drive with zeros so that ZFS detects the target drive as failed. This approach is hardware dependent; a more general approach is to use a virtual machine to make the drive inaccessible.

$ dd bs=512 if=/dev/zero of=/dev/sdx \
    count=$((2048*500)) seek=$((`blockdev \
    --getsz /dev/sdx` - 2048*500))
$ dd bs=512 if=/dev/zero of=/dev/sdx \
    count=$((2048*500))

3) dRAID rebuild vs. RAIDZ resilvering: Write random data to the storage pool so that dRAID and RAIDZ can recover the pool later. After injecting errors into the target drive, we call zpool scrub tank to trigger ZFS to detect the corruption in the "failed" drive. dRAID rebuilds data onto its distributed spare drives:

$ zpool replace tank sdx '%draid3-0-s0'

RAIDZ resilvers data to a spare drive:

$ zpool replace -f tank sdx sdo

Use zpool status tank to get more information about the recovery. ZFS reports how long it takes to recover the data and the amount of data recovered.
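The draidcfg parameters in Section D are not independent: on our reading of the dRAID layout, the non-spare drives must divide evenly into redundancy groups of d + p drives each (the group count is inferred here, not stated in the text). A minimal shell sketch of that sanity check:

```shell
# Sanity-check a draidcfg parameter set: n drives must split into some
# whole number of (d + p)-drive redundancy groups plus s spares.
# The grouping rule is our inference from the layout, not from draidcfg docs.
d=8; p=3; s=3; n=36             # values used in Section D
groups=$(( (n - s) / (d + p) ))
if [ $(( groups * (d + p) + s )) -eq "$n" ]; then
    echo "valid: $groups redundancy groups"
else
    echo "invalid: n does not match d, p, s"
fi
```

For the configuration above this reports 3 redundancy groups of 11 drives each plus 3 spares, which accounts for all 36 drives.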
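The seek offset in the tail-wiping dd command of Section E.2 is simply the drive size in 512-byte sectors minus the size of the zeroed region. A sketch of that arithmetic, assuming a hypothetical 4 TB drive instead of querying blockdev --getsz on real hardware:

```shell
# Compute the dd parameters used to zero the first and last regions of a
# drive. SECTORS would normally come from `blockdev --getsz /dev/sdx`;
# here we assume a hypothetical 4 TB drive for illustration.
SECTORS=7814037168              # 512-byte sectors on the assumed 4 TB drive
REGION=$((2048 * 500))          # region size in sectors (2048 sectors = 1 MiB, so 500 MiB)
SEEK=$((SECTORS - REGION))      # sector where the tail wipe starts
echo "head wipe: count=$REGION"
echo "tail wipe: seek=$SEEK count=$REGION"
```

Wiping 500 MiB at each end is generous; the intent is only to destroy the partition table and RAID metadata so that ZFS marks the drive as failed.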