
Benchmarking ZFS and Hardware RAID Performance for the AuriStor Platform

Mathew Binkley
Vanderbilt University, Nashville, TN 37203

Introduction

Vanderbilt University’s Advanced Computing Center for Research and Education (ACCRE) is collaborating with AuriStor on a campus-wide storage array based on their AuriStor and ACCRE’s LStore, allowing researchers to host and share large volumes of research data both internally and with other collaborators around the globe.

To gauge performance, we wish to benchmark different storage configurations which might approximate the final solution. Note that AuriStor currently supports only hardware/software RAID and not ZFS.

Results

Blogbench (Higher is Better):

                            Read   Write
========================================
Ubuntu, ZFS, No SSD:      552659    2796
Ubuntu, ZFS, SSD:         477923    2479   <- Worst
Ubuntu, Hardware RAID:    865944    7839   <- Best
FreeBSD, ZFS, No SSD:     549793    3817
FreeBSD, ZFS, SSD:        750471    1094

Compilebench (Higher is Better):

                          Compile  Create  Read Tree
====================================================
Ubuntu, ZFS, No SSD:          444      85        427
Ubuntu, ZFS, SSD:             438      82        412
Ubuntu, Hardware RAID:        593     175        447   <- Best
FreeBSD, ZFS, No SSD:         531      93        333
FreeBSD, ZFS, SSD:            194     104        461

Postmark (Higher is Better):

                          Score
===============================
Ubuntu, ZFS, No SSD:       1589
Ubuntu, ZFS, SSD:          1536
Ubuntu, Hardware RAID:     2908   <- Best
FreeBSD, ZFS, No SSD:      2615
FreeBSD, ZFS, SSD:          701

SQLite (Lower is Better):

                          Score
===============================
Ubuntu, ZFS, No SSD:     967.00   <- Worst
Ubuntu, ZFS, SSD:         31.65
Ubuntu, Hardware RAID:    17.75
FreeBSD, ZFS, No SSD:    925.63
FreeBSD, ZFS, SSD:        11.51   <- Best

Unpack-Linux (Lower is Better):

                          Seconds
=================================
Ubuntu, ZFS, No SSD:        13.72
Ubuntu, ZFS, SSD:           14.10
Ubuntu, Hardware RAID:      10.20   <- Best
FreeBSD, ZFS, No SSD:       19.43   <- Worst
FreeBSD, ZFS, SSD:          15.67

Dbench (Higher is Better), output is MB/sec:

# Clients:                    1        6       12       48      128      256
=============================================================================
Ubuntu, ZFS, No SSD:       9.91    30.68    47.34   114.00   221.00   361.00
Ubuntu, ZFS, SSD:        141.00   370.00   543.00  1411.00   746.00   557.00
Ubuntu, Hardware RAID:   227.00   623.00   804.00  1223.00  1352.00  1337.00   <- Best
FreeBSD, ZFS, No SSD:      8.80    37.14    89.45   212.02   134.91   153.32   <- Worst
FreeBSD, ZFS, SSD:       200.92   370.64   387.06   205.05   127.12   152.04

Methodology

Three identical servers [Supermicro X8DTH motherboard, 2 x Intel Xeon E5620 CPUs @ 2.40 GHz (8C/16T), 48 GB DDR3, 16 x Seagate 4 TB Constellation hard drives, 500 GB Samsung 860 SATA SSD for optional caching] were configured.

The following configurations were tested using the “out-of-the-box” OS settings:

• Ubuntu Bionic 18.04 (64 bit, current on all updates) running ZFS 0.7.5, ZPOOL 5000 with no SSD caching

• Ubuntu Bionic 18.04 (64 bit, current on all updates) running ZFS 0.7.5, ZPOOL 5000 with the SSD split into a 250 GB Write cache (Intent log) and a 250 GB Read cache (ARC/L2ARC)

• Ubuntu Bionic 18.04 (64 bit, current on all updates) running a hardware RAID array (MegaRAID SAS 9361-8i)

• FreeBSD 11.2 (64 bit, current on all ports) running the stock ZFS (ZPOOL 5000) with no SSD caching

• FreeBSD 11.2 (64 bit, current on all ports) running the stock ZFS (ZPOOL 5000) with the SSD split into a 250 GB Write cache (Intent log) and a 250 GB Read cache (ARC/L2ARC)

Each group of 8 drives was used to derive a RAIDZ2 vdev (ZFS) or a RAID6 array (hardware RAID). On ZFS the two vdevs were combined to form a pool; a sketch of this layout is given below. On the hardware RAID server the two RAID6 arrays were combined into a single volume using RAID0.
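For reference, the ZFS layout described above can be built along the following lines. This is a minimal sketch: the pool name (tank), drive paths (/dev/sdb through /dev/sdq), and SSD partitions (/dev/sdr1, /dev/sdr2) are placeholders, not the actual devices on the test servers.

    # Two 8-drive RAIDZ2 vdevs striped together into a single pool
    zpool create tank \
        raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi \
        raidz2 /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq

    # SSD-cached configurations only: one 250 GB partition as the separate
    # intent log (write cache) and one 250 GB partition as the L2ARC read cache
    zpool add tank log   /dev/sdr1
    zpool add tank cache /dev/sdr2

    # Confirm the resulting layout
    zpool status tank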

The Phoronix Test Suite was used to run the benchmarks. After eliminating tests that didn’t function (officially or de facto) on FreeBSD, the following common tests were run (an example invocation is shown below):

• pts/blogbench
• pts/compilebench
• pts/postmark
• pts/sqlite
• pts/unpack-linux
• system/dbench
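For illustration, a typical Phoronix Test Suite run looks like the following. This is a generic sketch rather than the exact command lines used to produce the results above.

    # Install and run a single test
    phoronix-test-suite install pts/blogbench
    phoronix-test-suite benchmark pts/blogbench

    # Several tests can be queued in a single invocation
    phoronix-test-suite benchmark pts/compilebench pts/postmark pts/sqlite \
        pts/unpack-linux system/dbench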

Conclusions

Choosing the fastest file system turned out to be relatively simple. The hardware RAID came in 1st on 5 out of 6 benchmarks, and 2nd on the remaining benchmark. ZFS inherently performs somewhat slower because automatic verification/checksumming and the nature of Copy-On-Write file systems will yield slower performance than a hardware RAID that does not perform these additional steps.

However, there are other important metrics for choosing a file system, including cost, reliability, safety, and compatibility, that are still being investigated. The average user will prefer a slower file system which keeps all their data in the correct order over a faster but “lossy” file system…

ZFS and hardware RAID offer different strengths and trade-offs, such as:

• ZFS sometimes has strange incompatibilities with other storage software. In the case of AuriStor, it experiences performance degradation when using Linux ACLs on ZFS. As a hardware RAID is a block device, you are free to use EXT4, XFS, or any other file system to match your application.

• ZFS comes with no warranty, while hardware RAID usually does.

• ZFS is free, whereas a hardware RAID adapter and backplane can add hundreds of dollars to the price of a single server. This can rapidly become expensive when you’re dealing with large-scale deployments.

• ZFS is portable between multiple OS’s (Solaris, Linux, FreeBSD, OS X), so if you experience a server fault, you can simply remove the drives, stick them in another server, and access the data. A hardware RAID can lock you into a specific card from a specific vendor, often on a specific server and specific OS. The inability to find spares or replacements at a later date (usually after the warranty expires but before the hardware is life-cycled out) means a dead drive can result in a dead storage array.

• ZFS can automatically and transparently compress and dedupe data, create snapshots, and encrypt data (though with a loss of compatibility between OS’s). It can also immediately verify data writes to guard against data loss; a few illustrative commands are sketched below. As hardware RAID merely exposes a block device to the OS, it depends on additional software layers to provide these services, which may require higher administrative costs.
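As an illustration of that last point, a few representative ZFS commands are sketched here. The pool name (tank), dataset (tank/data), and block device (/dev/sdb) are placeholders, not systems from this study.

    # Transparent compression and Linux POSIX ACL support on a ZFS pool
    zfs set compression=lz4 tank
    zfs set acltype=posixacl tank     # the Linux ACL scenario noted above for AuriStor

    # Instant snapshot of a dataset, and a full checksum verification pass
    zfs snapshot tank/data@nightly
    zpool scrub tank

    # By contrast, a hardware RAID volume is just a block device; any such
    # features come from the file system or extra layers placed on top of it
    mkfs.xfs /dev/sdb
    mount /dev/sdb /mnt/data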