
Spectra Logic’s BlackPearl® NAS Technical Guide

Simple, Affordable, Flexible Data Storage

Abstract

The extended lifespan of data has led to previously unimagined complexities, regulation, use cases, and migration planning scenarios. To address the difficulties stemming from data growth, including minimizing errors that can be introduced as data migrates over the network and across storage technologies, a new kind of mid-tier, secondary storage is required that focuses not only on the cost of storing massive amounts of data, but also on protecting that data.

September 2019

Contents

Introduction
BlackPearl NAS – Solution Snapshot
Data Growth Rates
Flexible NAS Storage to Meet Data’s Mid-Life Demands
Purpose-Built Storage for Mid-Tier Data
Disk Technologies
Drive Types
Enterprise Drives
Expansion Nodes
Data Protection and Reliability
Data Mirror and Parity
Triple-Parity – Probability of data loss less than 1 in 1.5 Billion years
Global Spares
Rebuild Time
Continuous Checksum and Data Resiliency
Replication
NFI Policies for DR and Archive
Other General Features
Internal Data Handling
Copy on Write
High Availability
BlackPearl NAS, Single Controller System
BlackPearl NAS, ColdPair
BlackPearl NAS, HotPair
The Spectra Ecosystem
BlackPearl Object Storage
Using Network File Interface (NFI)
BlackPearl NAS Workflow Targets (where to use it, and where not to…)
Enterprise Drive workflow requirements
Performance – what to expect
Very Wide RAID for performance
Typical performance and how to measure


CIFS impact on performance
File size implications on performance
NFS impact on performance
ZIL kit – Write Cache Acceleration
System CPU and DRAM impact on performance
Network Setup Considerations
Configuration
Link Aggregation Notes
BlackPearl NAS Implementation
Theory of Operation
Software Components
Virtualizing Disk using Pools
Multiple Pools
Set Level of Protection
BlackPearl NAS Advantages
Balancing Capacity and Performance with Parity
Examples of Array Sizing
Considerations in Setting Protection Levels and Balancing Capacity and Performance
Define Pool Availability to System Network/Protocols
Thin Provisioning
Oversubscribing
Assign Access to Volumes
Raw vs Usable space
As low as 6¢/GB (USD) – how much capacity is required to get this pricing
Expansion of Capacity
Physical Buildout Considerations
Drive Expansion Node
Cooling considerations
Add-in card options
Monitoring and Maintenance
Status Bar
Visual Status Beacon
The Modular Design of Spectra Storage Systems
Modular Expansion: Scaling the BlackPearl NAS System
Management and Reporting Features
Command Line Interface
SNMP Management Protocol
Performance Monitoring
System Messages
Hardware Status
Network Configuration
Support Tools and Continuity Features
AutoSupport ‘Phone Home’ Feature
Hot-Swappable Hard Drives
Global Spare Hard Drives
Intelligent Rebuilds
Redundant Power
BlackPearl NAS Support Lifecycle Phases
SpectraGuard Support Overview
Hardware (HW) Support
SpectraGuard Basic HW Support
SpectraGuard NBD (Next Business Day) On-Site HW Support
SpectraGuard 4HR (Four-Hour) On-Site HW Support
Software (SW) Support
SpectraGuard 9x5 Telephone SW Support
SpectraGuard 24x7 Telephone SW Support
Professional Services Overview
Specifications
Environmental Specifications
Power
Data Storage
System

Copyright ©2019 Spectra Logic Corporation. All rights reserved worldwide. Spectra and Spectra Logic are registered trademarks of Spectra Logic. All other trademarks and registered trademarks are property of their respective owners. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. All opinions in this white paper are those of Spectra Logic and are based on information from various industry reports, news reports and customer interviews.

Introduction

When data is initially created, it is generally stored on a high-performance, production storage system. Over time, the data being stored generally falls into two categories – actively used data, and inactive data (aka mid-tier data).

• Active data is best suited to rack-scale flash systems with scale-out architectures and high availability, all protected by robust backup application(s).
• Inactive (mid-tier) data that is no longer ‘hot’ can be moved to a self-protecting, flexible, affordable, purpose-built secondary storage system that does not require continual backup. Note that backups are also considered inactive data because they are rarely accessed, and so they should also be stored on secondary storage.

Demands on mid-tier data:
• E-discovery
• Regulation
• Continuity of Business
• Protection of personal data
• Lengthy retention while keeping data accessible
• Data that is inactive but still has value or cannot be deleted

When looking at mid-tier data requirements, it becomes clear that a purpose-built secondary storage device can reduce cost compared to a storage system built to meet the requirements of active data. BlackPearl NAS is designed as a simple, flexible, and affordable system, purpose-built for mid-tier data.

The demands and requirements on mid-tier data may dictate different types of drives in the secondary NAS system to best match the specific requirements of a customer’s mid-tier data workflow. BlackPearl NAS offers multiple drive types within a single system to facilitate multiple, specific customer workloads in a converged manner. With four capacities of Enterprise Drives, the flexibility of a BlackPearl NAS system allows multiple mid-tier dataset workloads within a single system.


BlackPearl NAS – Solution Snapshot

Chassis Types
o 2U Master Node, with a system controller, can hold up to 11 drives
   o Holds Enterprise Drives
   o A single 2U Master Node can have up to 2 Expansion Nodes, each directly attached via SAS cable(s)
o 4U Master Node, with a system controller, can hold up to 35 drives
   o Holds Enterprise Drives or ZIL Write Cache drives
   o A single 4U Master Node can have up to 9 Expansion Nodes, each directly attached via SAS cable(s)
o 4U Expansion Node holds up to 107 drives; each Expansion Node connects to the Master Node using a dedicated SAS cable
   o The 4U Master Node accepts up to 9 Expansion Nodes, for configurations of up to 15.41 PB raw using 4 TB, 8 TB, 12 TB, and 16 TB drives

Drive Types and Capacities
o Enterprise Drives: 4 TB, 8 TB, 12 TB, or 16 TB HDD

Physical Characteristics
• Dimensions:
   2U Master Node: 3.5” H x 19” W x 27.5” D (89 mm H x 483 mm W x 699 mm D)
   4U Master Node: 7” H x 19” W x 29.5” D (178 mm H x 483 mm W x 750 mm D)
   4U Expansion Node: 7” H x 17” W x 41” D (176 mm H x 434 mm W x 1050 mm D)
• Weights (approximate):
   2U Master Node with 11 drives: 60.6 lbs (27.5 kg)
   4U Master Node with 35 drives: 120.2 lbs (54.5 kg)
   4U Expansion Node with 107 drives: 258 lbs (117 kg)

Power
• 2U Master Node: 920 W max, redundant power supplies
• 4U Master Node: 1280 W max, redundant power supplies
• 4U Expansion Node: 2000 W max, redundant power supplies


Simple Expansion Node Pricing
Each Expansion Node has the same price, whether it is the first one or the last one, and whether only one is purchased or three are purchased together.
• An Expansion Node (with 107 drives) lists for 6¢/GB for hardware
   o An Expansion Node that is full, with 107 16 TB drives, has the best price per GB
• Each Expansion Node includes:
   o Visual Status Beacon (LED) bezel, cabling, and HBA (if needed) to attach to the Master Node

Installation, Master Node, and Support are separate costs. The support contract includes software upgrades.

High-Availability Features
• Redundant, hot-pluggable cooling system
• Redundant power supplies, each with its own AC power connector and an LED to indicate status
• Hot-swap/spare drives
• Redundant boot drives for the operating system, logical volume manager and file system, NFS, CIFS, and SNMP servers, and the Web Management Interface
• Enterprise Drive-only systems have HotPair and ColdPair failover capability

Simple to Use
• Buttons: power on/off, system reset
• LEDs: power, hard drive activity, network activity (x2), system overheat, power failure

Rapid Installation and Configuration
• Users can unpack, install, and have the BlackPearl NAS system operational in 30 minutes or less

Web Management Interface
• Graphical interface for ease of use
   o Web interface to manage the system through an Internet browser
• Command line interface for flexibility


Status Light Bar
The Visual Status Beacon light bar in the front bezel provides an at-a-glance status of the array. The light bar changes color to indicate the status of the array. If a BlackPearl NAS system requires attention, the lit bar, or beacon, helps administrators identify the unit quickly. Included with the 4U Master Node, 2U Master Node, and each Expansion Chassis.

Color (display): Condition
• Purple (scroll): The system is operating normally.
• Yellow (scroll): The system is experiencing a Warning condition.
• Red (scroll): The system is experiencing an Error condition.
• Orange (scroll): The system is experiencing a move failure in the attached tape library.
• Rainbow: The system is currently powering on and performing self-tests.
• Flashing Blue: The beacon feature is activated for this system. This can help you identify a specific system when you have more than one system in your environment.
• Pulsing Red: The Visual Status Beacon lost communication with the system.
• No Light: The BlackPearl system is powered off.

Figure 1: BlackPearl NAS with purple scroll

Access Protocols
• CIFS – SMB 2, 3
• NFS – NFS v3

Network Monitoring and Configuration Support
• DHCP
• SNMP
• SMTP
• NTP


Data Connection Specifications
Using the Web Management Interface, a single data connection is configurable per NAS system. Link aggregation is available for better performance.

On a 4U NAS system, configure the data connection with one of the following:
• One port 10 GBase-T (RJ45) copper
• One port 10 GigE (SFP+) optical
• Aggregate of two ports 10 GigE (SFP+) optical for 20 GigE throughput
• Aggregate of two ports 10 GBase-T (RJ45) copper for 20 GigE throughput
Optional upgrade network card (two-port 40 GigE):
o One port 40 GigE (QSFP+) copper or optical
o Aggregate of two ports 40 GigE (QSFP+) copper or optical for 80 GigE throughput

On a 2U NAS system, configure the data connection with one of the following:
• One port 10 GBase-T (RJ45) copper
Optional add-on upgrade network card (two-port 10 GigE (SFP+) or two-port 10 GBase-T):
• One port 10 GigE (SFP+) optical
• Aggregate of two ports 10 GigE (SFP+) optical for 20 GigE throughput
• Aggregate of two ports 10 GBase-T (RJ45) copper for 20 GigE throughput

Additional Master Node Port
• 1/10 GBase-T (RJ45) copper port: management connection to the Web Management Interface (not used for data traffic)

Boot Drives
Two dedicated mirrored boot drives for the operating system, logical volume manager, and file system. Includes NFS, CIFS, and SNMP servers, and the Web Management Interface.

Monitoring
• Hardware status
• System messages
• Remote access
• SNMP client support
• Thermal monitoring
• Email notification when issues arise
   o Log collection


Support Options
• 90-day warranty of Basic HW Support with 9x5 Telephone SW Support
• AutoSupport feature
• Recovery from failed drives with efficient automatic rebuilds
• On-site professional services available
• 3-year and extended 5-year Hardware Support (and Onsite Service) contract options:
   o Basic Hardware Support with standard shipping for replacement parts, self-service
   o NBD (Next Business Day) On-Site Service
   o 4HR (Four-Hour) On-Site Service (includes 24x7 software support)
• Software Support options:
   o 9x5 Telephone Software Support
   o Upgrade to 24x7 Telephone Software Support (included with 4HR onsite)

Onsite Spare Parts Kit
The onsite spare parts kit is available as an optional purchase, enabling customers to store BlackPearl NAS components at their data center, expediting replacement and reducing downtime to minutes. User-replaceable, on-site components available:
• Master Node Chassis
• System drive
• Power supplies
• Fans
• 10 GigE Ethernet card
• 40 GigE Ethernet card
• Data storage drives
• SAS Expander Modules (107-bay Expansion Chassis only)

Additional features supported by BlackPearl NAS:
• Data integrity – Advanced protection against undetected errors, resulting in a much better error rate than traditional disk.
• Parity protection options – Single and double parity are available. In addition, triple parity is unique, allowing up to 3 drive failures per parity group without data loss. This higher parity protection enhances reliability when working with large datasets.
• Thin provisioning – BlackPearl NAS lets administrators increase storage efficiency by virtualizing available space and allocating it as required. Administrators can make better use of existing capacity, handling data growth as needed.
• Compression – Administrators can configure compression at the volume level.
• Snapshots – Snapshots, or point-in-time copies of a volume, let you restore a volume to the state it was in when the snapshot was created. A snapshot only consumes the space of the changed blocks, which makes snapshots very space-efficient. Snapshots can be generated automatically (hourly, daily, or weekly) or created on demand, and the number of snapshots retained is configurable.


• On-demand integrity check – BlackPearl NAS features an on-demand data integrity check for data drives configured in one or multiple storage pools. The check scans the drives for data corruption and corrects any errors found.
• Intelligent automatic rebuilds – Instead of rebuilding an entire failed drive, the system rebuilds only the portion of the drive that held data, potentially saving hours on rebuilds.
• Redundant power – Each node ships with two high-efficiency redundant power supplies.
• Performance monitoring – The Web Management Interface lets you view the performance of pools, drives, CPUs, and the network.
• Global spare drives – A drive that is not configured in a storage pool acts as a global spare drive. If a drive failure occurs on the system, the array immediately and automatically activates a global spare. When the failed drive is replaced, the replacement drive acts as the global spare.
• Hot-swappable data drives – BlackPearl NAS drives are easily inserted into the Expansion Node or Master Node, without tools, by on-site data center staff. The drives can be replaced without powering down the system or stopping access to the data.
• Replication – BlackPearl NAS supports asynchronous replication to a secondary unit. It keeps the latest snapshot on the remote system, protecting against a local DR event at the primary system.
• Network File Interface (NFI) policies – BlackPearl NAS has two NFI policies for sending data to a BlackPearl object store bucket: Copy and Keep, and Copy and Delete. Each NAS volume can enable an NFI policy targeting one bucket. The Copy and Keep policy is used to protect data by sending a disaster recovery (DR) copy to the BlackPearl object store, while one copy is kept on BlackPearl NAS. The Copy and Delete policy, at a high level, is used to migrate data to a BlackPearl object store. The data on the BlackPearl object store is persisted to any type of storage: disk, tape, or public cloud. The object data is available from a wide range of BlackPearl Spectra S3 certified clients, including the free and open-source graphical Eon Browser.
• TranScale upgrade path to BlackPearl Converged Storage System – Start with a NAS-only product, BlackPearl NAS, then TranScale and upgrade it to a converged BlackPearl that adds Spectra S3 Object Storage capability when the need arises. A TranScale upgrade kit includes the S3 Object Storage enablement key and the hardware required for Object Storage functionality, such as cache, database, and tape HBA.
• AutoSupport – Automatically contacts mail recipients upon generation of messages, and generates logs for Spectra Logic Technical Support.

Data Growth Rates

The Digital Universe is expected to double every two years, growing to 44 zettabytes (ZB) in 2020. With nearly as many digital bits as there are stars in the universe, data growth is expanding as fast as the cosmos. By 2020, it is expected that nearly 60% of the data created will come from emerging-market countries, 40% of all data will be touched by the cloud in some way (stored or processed), and 10% of the data will be generated by the Internet of Things.


Data retention requirements can come from many sources, including legal requirements or corporate governance. Outside of regulatory reasons for storing data long-term, there is another reason: the unknown value of the data being stored.

Flexible NAS Storage to Meet Data’s Mid-Life Demands

In data’s mid-life phase, the data needs to remain accessible but is too infrequently accessed to leave on high-performance, expensive disk. This mid-tier data warrants its own specialized storage platform. This platform needs to protect data against the challenges of longer-term storage, including accessibility, affordability, scalability, and concerns surrounding the risk of data corruption.

Spectra Logic is meeting this challenge with the flexibility of multiple drive types. This storage is affordable and easy to scale. With BlackPearl NAS, sites can handle data that continues to expand at dizzying rates, while remaining confident in the data’s integrity.

Purpose-Built Storage for Mid-Tier Data

Spectra Logic’s BlackPearl NAS addresses the specific requirements of mid-tier bulk NAS disk, which are:
• Storing data affordably – Through modular design, with parts that can be easily swapped and upgraded in place, to preserve the initial investment.
• Easy to install, maintain, and scale – A disk system that is easy to use, maintain, expand, and upgrade.
• Preserving data – Preserve data by providing multiple levels of integrity, with checking systems beyond those found in typical disk systems.

Disk Technologies

It is important to split descriptors of disk technologies into three distinct categories: interface (SAS or SATA), drive type (consumer or enterprise), and recording technology (PMR, GMR, etc.). Although we commonly use just one of these to refer to the drive, they are each becoming independent items that no longer necessarily imply the other two. In the past, SAS (Serial Attached SCSI) drives were used exclusively in the enterprise and were built to last, with extra mechanical support of the drive platters and long-wearing components. Likewise, SATA (Serial ATA) drives have traditionally been consumer drives, with only single point spindle attachments and lower end components. The SAS interface actually has other advantages such as dual port capabilities, the ability to put a semi-fabric-like system together where many drives are on the same controller via switches, and other speed enhancements.

SATA, however, no longer directly implies a consumer grade drive, as manufacturers have begun blurring the lines between the different technologies. At the same time, new recording technologies, which define how a magnetic “head” actually records bits onto the surface of a metallic platter, are also changing rapidly. The race for “areal density”, or how many bits can fit into an area on a disk, is becoming aggressive but has recently not been able to achieve the same magnitude of advancement as the processor companies have. Disks have always had many “levers to pull” to increase capacity, from smaller read/write heads, to adding more platters, to using different recording technologies, but each new technology has only so much to offer, and when it maxes out, a technology change must occur.


Drive Types

BlackPearl NAS utilizes new technology whenever available, while also increasing the capacity of each drive type that is added. There are multiple drive types available for use in a single, flexible BlackPearl NAS system.

Enterprise Drives

Spectra’s Enterprise Drive is a traditional “NL-SAS” or nearline SAS drive, a high-capacity 7200 RPM disk; both SAS and SATA versions of enterprise drives are used. BlackPearl NAS systems support several capacities of Enterprise Drives: 4TB, 8TB, 12TB, and 16TB. These drives are the workhorse for high-capacity NAS systems.

Self-Encrypting Drives (SED) can optionally be used in BlackPearl NAS. These SED drives are available as 12TB SAS SED drives.

Expansion Nodes

BlackPearl NAS consists of a 2U or 4U Master Node and may connect to one or more Expansion Nodes.

• This expansion node, which uses SAS or SATA interfaces for drives, supports 4TB SAS, 8TB SAS, 12TB SAS SED (encrypting), 12TB SATA, and 16TB SATA enterprise drives.

Data Protection and Reliability

BlackPearl NAS has the ability to employ a 4-tier data protection strategy to ensure your data is safe.

Data Mirror and Parity

RAID is already a familiar technology in every data center, with options from a mirror (RAID 1), to adding a single parity bit (RAID 5), to double parity (RAID 6). Spectra’s implementation of ZFS offers a data Mirror option, as well as single parity (RAID Z1), double parity (RAID Z2), and triple parity (RAID Z3). RAID Z3 provides unprecedented triple redundancy to the system. Statistically, a RAID Z3 system will lose data once in over 2 million years, for the recommended array size, while properly monitored and maintained.

Triple-Parity – Probability of data loss less than 1 in 1.5 Billion years

This kind of probability is easily calculated. Using any of the numerous online calculators, we set the MTBF to 1.2M* hours, set the non-recoverable error rate to 1 in 10^17 bits, and use 16TB drives in an 18-disk array with a rebuild speed of 90 MB/s.

*Note, the Enterprise Drive MTBF is 2.5 million hours for helium drives (12TB and 16TB), and 2 million hours for air drives (4TB and 8TB), which would reduce the probability of data loss even further.

When assessing the mean time to data loss (MTTDL) in a RAID Z3 array, one must first calculate the likelihood of 3 simultaneous drive failures, then the probability of losing an additional drive before the rebuild(s) of the initial 3 drives complete, and finally how long it would take before this becomes likely to happen. The answer is a VERY long time.
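
For readers who want to reproduce this style of estimate, the following is a minimal sketch of the arithmetic an online MTTDL calculator performs, using the parameters quoted above (1.2M-hour MTBF, 16 TB drives, an 18-disk array, 90 MB/s rebuild). It uses the classic sequential-failure approximation only and ignores unrecoverable read errors, so treat the output as illustrative rather than as Spectra's published figures.

```python
# Rough MTTDL estimate for a triple-parity array (illustrative only).
# Real calculators also model unrecoverable bit errors and differ in detail.

def rebuild_hours(capacity_tb: float, rebuild_mb_s: float) -> float:
    """Time to rebuild one full drive at a given sustained rate."""
    capacity_mb = capacity_tb * 1e6            # TB -> MB (decimal)
    return capacity_mb / rebuild_mb_s / 3600   # seconds -> hours

def mttdl_years(n_drives: int, mtbf_hours: float, mttr_hours: float,
                parity: int = 3) -> float:
    """Classic approximation: MTTDL ~ MTBF^(p+1) / (N*(N-1)*...*(N-p) * MTTR^p)."""
    ways = 1.0
    for i in range(parity + 1):                # N, N-1, ..., N-p overlapping failures
        ways *= (n_drives - i)
    mttdl_h = mtbf_hours ** (parity + 1) / (ways * mttr_hours ** parity)
    return mttdl_h / 8766                      # hours -> years

mttr = rebuild_hours(16, 90)                   # ~49 hours per full 16 TB drive
print(f"rebuild: {mttr:,.0f} h, MTTDL: {mttdl_years(18, 1.2e6, mttr):,.0f} years")
```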

Getting to this level of protection requires a lot of individual features working together to protect your data, all of which are present and standard in a BlackPearl NAS system.

RAID Level | Formatted Capacity (GB) | Mean Time To Data Failure (MTTDF), hours | Bit Error Rate MTTDL, hours | Mean Time To Data Loss (MTTDL), hours | MTTDL, years
RAID 0   | 271,319 | 66,667         | < .01              | 33,333             | 3.81
RAID 1   | 135,660 | 3,358,592,158  | < .01              | 1,679,296,079      | 191,700
RAID 10  | 135,660 | 3,358,592,158  | < .01              | 1,679,296,079      | 191,700
RAID-Z1  | 256,246 | 5,810,713      | 3,060,236          | 4,435,474          | 506
RAID-Z2  | 241,172 | 607,486,929    | 300,921,443        | 454,204,186        | 51,850
RAID-Z3  | 226,099 | 77,380,298,303 | 25,662,055,865,248 | 12,869,718,081,776 | 1,469,145,900

Triple-Parity: A system which creates a mathematical checksum of every block of data that can be used to recover any lost data up to and including three full hard drive failures in a single array. A system would have to have four full hard drives fail at the same time on a single array before any data is lost.

In BlackPearl NAS, in particular, the example above of a wide 15+3 (15 data and 3 parity) array and automatic rebuild to global spares, with intelligent rebuild (reducing build times for partially full arrays) provides excellent protection as long as global spares are available and the system is properly maintained.

Another recommended option, with higher utilization and less overhead, is a double-parity array. This 16+2 array, shown in the RAID-Z2 line of the table above, has a probability of data loss of 1 in 51,850 years.

Global Spares

BlackPearl NAS constantly monitors individual drives for failure indications. When the system detects that a drive is no longer reliable, it immediately begins a rebuild of that drive to an available Global Spare. For Enterprise Drives, Spectra recommends one global spare for the first 34 drives, then one spare per 100 drives. Rebuilds can take place on any available spare in the system. Once complete, the failed drive is marked as bad and available for replacement.

Rebuild Time

If a drive fails for any reason and a rebuild is required, BlackPearl NAS will immediately begin rebuilding the failed drive to an available Global Spare. That process is math and processor intensive, as it requires reading each block on all other drives in that array, performing the math functions, and writing the result to the Global Spare. Current rebuild performance on a 23-wide triple-parity array is approximately 30MB/s. This translates to roughly 70 hours for a completely full 8TB drive. As soon as a drive begins to rebuild, a service call can be initiated, allowing a replacement drive to be shipped and on hand when the rebuild completes, to replace the failed drive.

Expected Performance During Rebuild (When the Pool is Idle)

Array size in a pool | Rebuild rate*
Less than 19 drives  | 90 MBps
19 or more drives    | 30 MBps

*Will vary greatly depending on several factors, including dataset, pool capacities, and other array IO.


Note that a BlackPearl NAS system uses intelligent rebuilds, where only the blocks with actual data are rebuilt; unlike in traditional hardware RAID systems, where an entire disk is rebuilt sector by sector, including blank sectors. Assuming a drive is only half-full, it only takes a BlackPearl NAS system half the time to rebuild. These systems are designed to provide alerts when 80% of capacity is reached so even with a heavily loaded system, a 70-hour theoretical rebuild will typically only take 50 or so hours.
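
The rebuild-time arithmetic behind those figures is straightforward; the short sketch below reproduces it under the intelligent-rebuild assumption that only the occupied portion of the drive is copied (drive capacity, fill level, and rate are the only inputs, so the result is an approximation).

```python
# Back-of-the-envelope rebuild time, assuming only occupied blocks are rebuilt.

def rebuild_time_hours(drive_tb: float, fill_fraction: float, rate_mb_s: float) -> float:
    used_mb = drive_tb * 1e6 * fill_fraction   # data actually on the drive, in MB
    return used_mb / rate_mb_s / 3600

# 8 TB drive in a wide (19+ drive) array rebuilding at ~30 MB/s:
print(round(rebuild_time_hours(8, 1.0, 30)))   # completely full drive -> ~74 hours
print(round(rebuild_time_hours(8, 0.5, 30)))   # half-full drive       -> ~37 hours
```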

During the rebuild, any external data movement takes priority over the rebuild. The affected pool will still be fully available, and the effect on end-user performance will be slight.

Continuous Checksum and Data Resiliency

A cyclic redundancy check (CRC) is an error-detecting code, commonly used in digital networks and storage devices, used to detect accidental changes to raw data. Blocks of data entering these systems get a short check value attached, based on the remainder of a polynomial division of their contents.

ZFS Checksums

BlackPearl NAS uses ZFS for its file system. ZFS calculates a checksum for each chunk (ZFS record) of data in the system, storing the checksum in separate blocks from the data. This checksum is verified on every read access to the media, guaranteeing that data corruption is detected before erroneous data is presented to the user. The user can also instruct the NAS system to run an on-demand data integrity check. ZFS also utilizes a Merkle Hash Tree to further protect the checksum blocks themselves from accidental change.
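
To make the checksum-on-read idea concrete, here is a deliberately simplified sketch (it is not ZFS's on-disk format): the checksum is kept apart from the data and re-verified on every read, so corrupted data is caught before it is returned to the caller.

```python
import hashlib

class ChecksummedStore:
    """Toy block store illustrating checksum-on-read; not ZFS's real layout."""
    def __init__(self):
        self.blocks = {}      # block id -> raw bytes
        self.checksums = {}   # block id -> digest, stored separately from the data

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = data
        self.checksums[block_id] = hashlib.sha256(data).digest()

    def read(self, block_id: int) -> bytes:
        data = self.blocks[block_id]
        if hashlib.sha256(data).digest() != self.checksums[block_id]:
            # On BlackPearl NAS the block would now be reconstructed from parity;
            # here we simply refuse to hand back corrupt data.
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

store = ChecksummedStore()
store.write(0, b"hello")
store.blocks[0] = b"hellO"         # simulate a silent bit flip on the media
try:
    store.read(0)
except IOError as err:
    print(err)                     # corruption detected on read
```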

ZFS Copy on Write

Active data blocks are never overwritten in ZFS. Instead, the data block is copied to a newly allocated block with the modified changes applied, and a new checksum of the new data block is stored as described previously. The advantage of copy on write is that the old data is retained, unchanged, and ZFS snapshots will hold the previous data in place until all snapshots have expired.

How Data Can Become Corrupted

• Data is transmitted through a chain of components, each imposing a certain probability that it may corrupt a given bit of the data. This includes but is not limited to components such as software, CPU, RAM, NIC, I/O bus, HBA, cables, and the disks themselves.
• Studies have shown that the error rate in a large data warehousing company can be as high as one error every 15 minutes. This only serves to illustrate the need for data integrity protection in systems designed to store hundreds or thousands of terabytes.

To address the problem for the entire I/O stack, ZFS performs checksums on every data block, and critical blocks are replicated. ZFS also has the ability to correct errors before they become uncorrectable, on both a passive and active basis. The combination of ZFS checksums and parity allows ZFS to self-heal in situations that other RAID solutions cannot. For example, hardware RAID filers can be configured to use RAID parity to detect silently corrupted data in much the same way that ZFS uses checksums. However, there is nothing these systems can do to safely reconstruct the data or to inform the user which drives are causing the problem. In this situation, a ZFS solution will try all of the possible combinations of parity reconstruction

until the record checksum is correct, detecting which member or members of the RAID have bad data and recovering the original information.
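
The sketch below illustrates that self-healing search in miniature. It uses a single XOR parity and a record checksum (real ZFS supports up to triple parity and a different on-disk structure), trying each "suspect" member in turn until the reconstructed record verifies.

```python
import hashlib
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def self_heal(members, parity, expected_digest):
    """Assume each member in turn is the corrupt one, rebuild it from XOR parity,
    and keep the combination whose record checksum verifies."""
    for bad in range(len(members)):
        others = [m for i, m in enumerate(members) if i != bad]
        rebuilt = reduce(xor, others, parity)
        candidate = members[:bad] + [rebuilt] + members[bad + 1:]
        if hashlib.sha256(b"".join(candidate)).digest() == expected_digest:
            return bad, candidate
    return None, None

# Build a 3+1 stripe, record its checksum, then silently corrupt one member.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor, data)
digest = hashlib.sha256(b"".join(data)).digest()
data[1] = b"BBBX"                               # silent corruption
bad, healed = self_heal(data, parity, digest)
print(f"member {bad} was bad; healed record: {b''.join(healed)}")
```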

Multi-Level Error Correction Codes (ECC): BlackPearl NAS performs checksums on every 256K block of data. The block’s checksum is stored separately in another block, with a pointer to the data block, rather than storing checksum with the data block, adding a layer of protection between the corrective mechanism and the data itself. If the checksums do not match, BlackPearl NAS identifies an accurate copy of that data or rebuilds another copy through smart RAID (see below).

Memory with ECC and Interleaving: An often-overlooked cause of data corruption is the memory component of the storage system. BlackPearl NAS addresses this by utilizing memory that incorporates integrated ECC and interleaving. ECC checksums ensure that individual errors in memory are corrected, and interleaving permits recovery from an error that affects more than a single bit within the memory of the system.

Using these mechanisms, even though raw disk systems are prone to periodic bit errors, i.e., “bit-flip” or “bit-rot”, BlackPearl NAS has the ability to detect those bit anomalies and then use the parity system to correct those errors, pulling data from single, double, or triple parity redundancy calculations. With the specific coding scheme used, errors are rarely missed. The undetected bit-error rate internally is on the order of one in 10^67. To illustrate just how rare this is, 10^67 is roughly the number of atoms in our solar system. So if we were to store a value for every atom in the solar system, and then read back every value, only one atom (a single value) could possibly slip by us as incorrect or as an undetected value.

Replication

In addition to the mirroring and parity configuration settings in BlackPearl NAS, the easiest way to add redundancy to a storage system is with replication. Making a direct second copy on an independent system, preferably one geographically separated from the first, prevents data loss due to natural and man-made disasters. The ZFS file system includes an ability to take snapshots of data on a scheduled basis, and then volumes can be replicated to another BlackPearl NAS system over the switched network. Setup is straightforward and can be tailored to meet workflow needs by customizing snapshot timing and frequency.

In the disk industry, many vendors support a form of replication from one system to another, typically for disaster recovery. This is usually found in enterprise-class tier 1 and tier 2 type NAS servers.

Since BlackPearl NAS utilizes ZFS to provide file-based access, it currently supports the ability to move volumes from one ZFS storage pool to another within a given BlackPearl NAS system. The ZFS file system provides functionality to create a snapshot of the file system contents, transfer the snapshot to another machine, and extract the snapshot to recreate the file system. You can create a snapshot at any time, and you can create as many snapshots as you like. By continually creating, transferring, and restoring snapshots, you can provide synchronization between one or more machines in a fashion similar to Distributed Replicated Block Device (DRBD).

BlackPearl NAS provides the ability for a customer to start and stop replication between two BlackPearl NAS systems and monitor the status of the replication.
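
On BlackPearl NAS this is all driven from the GUI or CLI, but the underlying snapshot/send/receive mechanism described above is generic ZFS. The hedged sketch below shows one such cycle using standard ZFS commands over SSH; the pool, volume, and host names are placeholders, and this is not the BlackPearl administrative interface.

```python
import subprocess
from datetime import datetime, timezone

def replicate(volume: str, target_host: str, target_volume: str) -> str:
    """Snapshot a volume and stream the snapshot to a remote ZFS system."""
    snap = f"{volume}@repl-{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    # Stream the snapshot to the remote system and recreate it there.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", target_host, "zfs", "receive", "-F", target_volume],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()
    return snap

# Example invocation (placeholder names):
# replicate("pool1/projects", "replicator@remote-nas", "pool1/projects")
```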


Automatic Replication

Users can also set up automatic replication and snapshots on a schedule via the GUI.

Replication Requirements

Two independent BlackPearl NAS systems connected over a physical network would be configured as follows:

• Each server hosts its own local storage pools
• There is no monitoring of the process for purposes of handling errors
• There is no automatic source/target control
• User initiates the replication process through either the GUI or CLI
• User cancels the replication process through either the GUI or CLI
• User specifies the target system and pool to replicate the volume to
• User specifies a secure or non-secure transfer method
• Replication is on a volume basis
• Status of replication in progress is provided

Replication Design Details

The replicated data on the target system is read-only.

Additional logging is written to a separate log file to track the ZFS operations associated with a volume replication; error messages are also displayed in the GUI for monitoring by the user.

In order to transfer between two systems, the customer must provide either the remote system IP address or a system name that can be resolved via DNS. This is required in order to initiate a remote snapshot operation between two BlackPearl NAS systems.

The user will be able to monitor the status of the remote snapshot operation. The possible states are Up-to-date, In Progress, and Failed. The progress of the current operation is also reported.

Replication Authentication

All BlackPearl NAS user accounts have a set of public/private SSH keys generated either on first boot or when they are created. The GUI makes the public keys visible on the user’s show page. The GUI allows users to add public SSH keys to a particular user’s authorized keys file. This framework allows users to easily authenticate to remote NAS systems as replication targets.

Every BlackPearl NAS system has a replicator account that is visible in the GUI, and thus a set of public/private keys. In order to initiate a replication, the target system must authorize the source system. The user does this by copying the source system’s replicator public SSH key (from the GUI/CLI) and pasting it into the public SSH key dialog on the target system.

NFI Policies for DR and Archive

NFI stands for Network File Interface. It is a unique feature of BlackPearl NAS encompassing two things: a simple policy engine, and communication protocol integration with BlackPearl object storage (via the Spectra S3 API). It is the easiest way to add a disaster-recovery (DR) copy on tape, disk, or cloud to a BlackPearl NAS system.


BlackPearl NAS NFI has two policies for sending data to BlackPearl buckets: “Copy and Keep”, and “Copy and Delete”. The NFI policy “Copy and Keep” is used to protect data, similar to a DR strategy, by sending a copy of the data to BlackPearl while one copy is kept on BlackPearl NAS. The “Copy and Delete” policy is used to migrate, rather than copy-only, data to a BlackPearl. The BlackPearl can be anywhere on the network to which the BlackPearl NAS system has access.

When using a BlackPearl converged storage system, which has a dedicated BlackPearl NAS partition, NFI operations may happen within the same unit, where the partition that manages the object storage handles the data policies and replication. Each BlackPearl object storage target is set up in the NFI services page on BlackPearl NAS. Each BlackPearl target is a combination of the BlackPearl IP address and an S3 User on that BlackPearl; multiple S3 Users on a single BlackPearl can each have their own BlackPearl target on the BlackPearl NAS system.

The NFI policies are configured on a Volume within the BlackPearl NAS. When editing or creating a new volume, the admin can enable NFI, select the policy, name the bucket, and set the schedule for the NFI policy. NFI makes use of the snapshot technology of ZFS to select the data, or new data added to a volume, to copy to BlackPearl. During the physical copy, NFI makes use of Spectra S3 Bulk PUT operations, which is a very efficient way to copy data to a BlackPearl bucket.
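
The semantics of the two policies can be summarized in a few lines of pseudo-workflow. The sketch below is illustrative only and is not Spectra's implementation: the real NFI engine works from ZFS snapshots on a schedule and uses Spectra S3 Bulk PUTs, and the `s3_client.put_object` call here is a placeholder for whatever client performs the upload.

```python
from pathlib import Path

def run_nfi_policy(volume_path: str, bucket: str, policy: str, s3_client,
                   last_run: float) -> None:
    """Copy files changed since last_run to a bucket; optionally delete them locally."""
    assert policy in ("copy_and_keep", "copy_and_delete")
    for path in Path(volume_path).rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_run:
            key = str(path.relative_to(volume_path))       # one-to-one volume -> bucket
            s3_client.put_object(bucket, key, path)          # placeholder client call
            if policy == "copy_and_delete":
                path.unlink()    # data now lives only in the BlackPearl bucket
```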

The data on BlackPearl follows BlackPearl’s own unique, robust data policy engine called Advanced Bucket Management (ABM). A BlackPearl ABM policy will persist data in as many duplicated copies as desired, including disk, tape, automatically ejected tape, replication to another BlackPearl bucket for high availability plus disaster recovery (HA+DR), or additionally onto public cloud storage for hands-off DR. Details of the BlackPearl system are covered in the Spectra Ecosystem section that follows.

If the data copied by NFI is deleted from a BlackPearl NAS system, the data is no longer available via the NAS system but is still available in the bucket. A BlackPearl Spectra S3 client, like the Eon Browser, must be used to copy data back from the bucket to a BlackPearl NAS share or another storage location. The data in a BlackPearl bucket is not visible through the NAS shares (CIFS or NFS), but only through the Spectra S3 interface of BlackPearl. This provides a layer of security by preventing the files from being accessed or deleted via the BlackPearl NAS.

Other General Features

Internal Data Handling

Within ZFS, data blocks are never overwritten when the client updates a data file with a new version of itself. Instead, the data remains on disk and a second copy is created. This permits the simple addition of snapshots, when appropriate for the data set or workflow.

Copy on Write

Data writes precede and are completed as a step separate from parity writes, so if power is lost between the copy and the parity write, data is not silently corrupted.

High Availability

BlackPearl NAS, Single Controller System

BlackPearl NAS utilizes a single controller. The dual controller was invented to protect against hardware failure when, at the time, the market and users had experienced many hardware failures. Today,

hardware is very reliable, with very few failures at the controller level. Most disk system failures are software in nature, and these system errors clear when the process restarts or with a simple system reboot.

In order for a dual controller system to work, all data ends up being written to memory or cache that is mirrored to the other controller. If the secondary controller has to take over, it must determine what was committed to disk, what is in shared memory, and how to assemble the pieces of data; and the data that was in flight during a dual-controller failure will hopefully be resent by the client. This requires a lot of moving parts, both software and hardware. The cache needs to have battery backup. The batteries need to be replaced on a regular basis. The larger part is the software: supporting failover can drive a million lines of additional code.

During the development of Spectra’s NAS, Spectra looked at the field history of three previous generations of disk systems and found that the selected controller hardware does not fail. Spectra looked at the hardware and software requirements to implement a dual controller system and determined that for a storage system for mid-tier data, it actually added more risk than it eliminated.

BlackPearl NAS uses ZFS, a software RAID, focused on high data reliability. The system supports single, double, and triple parity. All data is committed directly to disk. The sequential nature of secondary storage applications means cache provides little to no performance increase. Eliminating the battery-backed cache reduces maintenance and complexity. With BlackPearl NAS, as the data is written to disk, all of it is secured with a Merkle Hash Tree of checksums. This self-protecting mechanism is designed to ensure the integrity of the data. No BlackPearl NAS system has ever lost or corrupted data.

A dual controller system is not inherently more reliable than a single controller system today. The added complexity, hardware, and million lines of additional software code increase the chance of failure and data corruption in a use case that does not need this type of high availability (HA) capability.

BlackPearl NAS, ColdPair

BlackPearl NAS systems are designed with a modern, highly reliable single controller design. Typical deployments of the system do not require the complexity or the expense of traditional high-availability disk storage solutions. In cases where users need a method that helps sites quickly recover from a system failure, BlackPearl NAS offers ColdPair and HotPair recovery options.

The ColdPair option lets users quickly and economically recover from a Master Node failure. This involves storing on site a spare Master Node without any data drives. In the case of a Master Node failure, the customer can rapidly bring the system back up using the spare Master Node by following these steps: make sure the original NAS system is powered off, move all cables and data drives to the ColdPair chassis, then power up the ColdPair node. The system reads all configuration data from the migrated disks, as the disks store Replicated System Configuration (RSC). All pools, volumes, shares, and data typically come on-line in less than 30 minutes.


BlackPearl NAS, HotPair

HotPair is a more automated way to fail over, similar to how ColdPair works. However, in this case, the HotPair chassis is always powered on and the failover takes less time.

A HotPair setup is configured with two Master Nodes and one or more Expansion Nodes. The Master Nodes do not contain data drives but instead operate as disk controllers. Both Master Nodes are connected to all Expansion Nodes via SAS cables. Each Master Node is connected to the other through a serial cable and over the Ethernet network. One master is active, the other is passive. Configuration information is sent from the active to the passive node, and the passive node monitors the active master with a ‘heartbeat’ cable. If the active master fails, the passive master takes over. While data access is affected during the recovery, the system recovers and returns to serving data with no administrator intervention.

HotPair is available on BlackPearl NAS with Enterprise Drives.

The Spectra Ecosystem

In the larger data center ecosystem, BlackPearl NAS is just one part of a larger storage architecture. With ZFS, replication, and NFI, BlackPearl NAS stores and protects the data safely for a very long time.

BlackPearl NAS provides a file-based storage system accessible either through CIFS (Windows primarily) or NFS (a file interface used by Linux and other similar systems) that can be mounted like any normal network drive. The Enterprise Drives provide fast access to files that can be modified and deleted at will with little effect, since the system is based on enterprise-grade fast SAS hard drives.

BlackPearl Object Storage

Spectra has been in the business of building digital tape libraries for close to 40 years, and has a large selection of tape options, from a small stackable tape-slot robotic system (tape library) to one that holds tens of thousands of tapes and can store more than an exabyte of data. However, complicated and expensive “middleware” software is often required to interface between the “file system” world and storing data on tape libraries. It takes a lot of code to cache data, stream it to tape drives, and control the robots that load tape media into those tape drives, all while cataloging the entire process, handling error conditions, and recovering from failures. In an effort to streamline this process and provide an alternative to middleware software, Spectra launched the BlackPearl Converged Storage System. BlackPearl is a converged storage system that completely automates and controls the process of


long-term data storage, including the management of tape libraries.

BlackPearl has a front end that accepts data via a RESTful, programmatic interface called Spectra S3. This interface was developed by Spectra as an extension of the standard Amazon S3 interface, popularized by public cloud providers. The Spectra S3 protocol is designed to deal with groups of files, rather than individual files, as well as longer latencies, in order to support tape. BlackPearl essentially provides a large pool of on-site storage, using the same interface as public cloud provider(s), saving the customer money and providing positive control of data.

Since a normal file system is not directly capable of outputting data using Spectra S3, a client must be created so an application can move data to BlackPearl. Spectra has developed several extensive Software Development Kits (SDK), with helper classes to make creating clients for BlackPearl simple. Spectra’s Developer Program (https://developer.spectralogic.com) is available to help create those clients for customers who need this capability, enabling BlackPearl within their workflow.
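
As a conceptual illustration of the bulk-job pattern Spectra S3 adds on top of standard S3, consider the sketch below: the client first registers an entire group of objects as one job, then moves the data once the server has planned for it. The `client` object and its method names here are hypothetical stand-ins, not the actual SDK API; see https://developer.spectralogic.com for the real SDKs.

```python
from pathlib import Path
from typing import Iterable

def archive_files(client, bucket: str, files: Iterable[Path]) -> None:
    """Two-phase 'bulk PUT' pattern: declare the whole group, then transfer it."""
    files = list(files)
    # Phase 1: tell the object store about every object up front so it can plan
    # cache space and tape placement for the whole group of files.
    job = client.start_bulk_put(bucket, [(f.name, f.stat().st_size) for f in files])
    # Phase 2: send each object under that job.
    for f in files:
        with open(f, "rb") as data:
            client.put_object(bucket, f.name, data, job_id=job)
```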

BlackPearl has a large fast-access cache to accept incoming data on the Spectra S3 interface, catalogs all the metadata from those objects for later retrieval, and directly controls both the robots and the tape drives via a Fibre Channel or SAS interface.

Inside BlackPearl is a technology called Advanced Bucket Management (ABM), which includes the ability to set data policies for “buckets”. Buckets are a logical container for data, organizing data into defined


groups of both storage media (disk and/or tape) as well as advanced policies that govern how long and how many copies of the data are stored. As an example, under BlackPearl a bucket’s policy could be configured to write data to both tape and disk, with a copy or two on an LTO partition in a tape library, as well as another copy on a disk pool. In this example, the policy is created such that any data written to that bucket automatically creates a copy on disk, and a copy on LTO tape, and a second copy on LTO tape that is automatically ejected from the library for sending off to a secondary site for safety (DR).
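
Purely as an illustration of that example policy, the structure below shows the same intent as a data object; it is not BlackPearl configuration syntax, just a readable representation of one bucket with three persisted copies, one of them automatically ejected for offsite DR.

```python
# Illustrative representation of the ABM example above (not actual BlackPearl syntax).
project_bucket_policy = {
    "bucket": "project-archive",
    "data_persistence_rules": [
        {"target": "disk_pool_1",   "copies": 1},                     # online disk copy
        {"target": "lto_partition", "copies": 1},                     # copy kept in the library
        {"target": "lto_partition", "copies": 1, "auto_eject": True}, # ejected offsite DR copy
    ],
}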

BlackPearl is the most robust place to store and protect data, but it requires a direct integration. Therefore, if a customer has either a manual process for archiving data or does not have a Spectra S3 client built directly into their application, what do they do if they still want the ability to move data to BlackPearl? BlackPearl NAS includes its own Spectra S3 client for BlackPearl, called NFI (Network File Interface), enabling data on the NAS to be copied to BlackPearl.

The goal of NFI is to provide a method to directly copy the contents of the file-based NAS storage system to object storage and tape through BlackPearl. If that copy is intended to be an eject copy, the ejected tape copy can be made for less than 2¢/GB (USD).

Using Network File Interface (NFI)

The NFI setup process is straightforward in the Web Management Interface. The User Guide walks the administrator through the setup and configuration of NFI. The policies for NFI are applied at the NAS Volume level on a BlackPearl NAS system, creating a one-to-one link between the NAS Volume and a bucket on BlackPearl Object Storage.

Once the NAS volume is shared to either NFS or CIFS, then any data written to that share/volume will be output to BlackPearl on the NFI schedule set for that volume.

With a tape backend on BlackPearl, the tape cartridges are allocated as more data is added to a bucket. This allows for the BlackPearl to share empty cartridges across many buckets in a scratch pool. When disk is on the backend, the buckets are created in a ZFS disk pool and data is stored there.

There is also the ability to combine both disk and tape behind BlackPearl on a single policy. For more on that, please see the BlackPearl product page. The tape system behind BlackPearl can be any Spectra Logic tape library, ranging from a small Spectra Stack, up to and including the TFinity. Multiple tape libraries can also be placed behind the same BlackPearl to create a more Highly Available (HA) system. The tape library does not need to be dedicated to BlackPearl, as BlackPearl works with a partition in a library. The minimum size of a partition in a library is 1 drive and 10 tape slots, the maximum partition size is the number of slots licensed in the library.

With NFI, once any data is stored on BlackPearl (on tape or disk), the Eon Browser or another BlackPearl Spectra S3 client is used to view the data and retrieve it back to disk.

The Eon Browser is a simple multiview GUI that allows users to drag and drop data between file systems and BlackPearl buckets. Eon Browser is loaded onto any client computer/server with network access to BlackPearl, using the same credentials used in the initial setup of NFI. The Eon Browser interface provides the local file system on one side and the Buckets on the other.


BlackPearl NAS Workflow Targets (where to use it, and where not to…)

BlackPearl NAS may be tailored for a specific workflow and use case by picking a specific drive type and drive array size. If there are multiple workflows that each require a different drive type, then multiple drive types can be configured in a single BlackPearl NAS system.

Enterprise Drive workflow requirements

Enterprise Drives are good at random write and random read workloads. The Enterprise Drives can be configured into smaller arrays, like 3+2 or 4+2, with many arrays in a pool, to provide much better random read performance, with mirrored arrays being the best for random reads.

Performance will be described in detail in the following sections, but to get the best performance with Enterprise Drives, the workload should be 100% read or 100% write; as soon as there is a mixed workload, performance will decrease. However, on Enterprise Drives a mixed workload does not decrease performance as dramatically as it would with other workflows.

Performance – what to expect

BlackPearl NAS is purpose-built storage for mid-tier data and displays better performance with larger files. Smaller files can be a performance challenge for many file systems, and this is true for BlackPearl NAS as well. File size can have a large impact on performance, and a mixed read/write workload will have a large impact on performance. A sequential workload with 100% writes (or 100% reads) will produce better performance than a random or mixed workload; increasing the file size will also increase performance (compared to a smaller file). Tier 1 offload is an ideal workload for BlackPearl NAS systems. Typically, these applications will move older, less accessed data to BlackPearl NAS; this workload is almost entirely writes, and can be sequential depending on how the application moves data. Not every workload will be ideal of course; it is, therefore, important to set expectations and understand that when any of these variables change, they can have an impact on overall performance.

Very Wide RAID for performance

A unique feature of BlackPearl NAS is the ability to use Very Wide RAID (VWR) for write-intensive workloads. Most RAID systems use smaller arrays of 5-10 disks to balance system performance with random read workloads. Spectra’s implementation of ZFS, as previously described, has the ability to use triple parity (RAID Z3), which maximizes data integrity. While this provides excellent data protection, it also requires more overhead for parity (an extra hard drive per array over double parity RAID Z2 or RAID 6). To counter this overhead loss, a 20+3 array size is available using RAID Z3 for write-intensive workloads, such as daily archiving or moving data from primary storage systems. Our best practice recommends staying at 18 drives per array or fewer to keep rebuild times down, which becomes more important with larger capacity hard drives.
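
The capacity trade-off between the array widths discussed here is simple arithmetic; the short sketch below compares the 15+3, 16+2, and 20+3 layouts mentioned in this guide (raw, pre-formatting numbers only, so actual usable space will be somewhat lower).

```python
# Quick parity-overhead comparison for the array widths discussed above.

def array_capacity(data_drives: int, parity_drives: int, drive_tb: float):
    """Return (usable TB, fraction of the array consumed by parity)."""
    usable = data_drives * drive_tb
    overhead = parity_drives / (data_drives + parity_drives)
    return usable, overhead

for layout in [(15, 3), (16, 2), (20, 3)]:
    usable, overhead = array_capacity(*layout, drive_tb=16)
    print(f"{layout[0]}+{layout[1]}: {usable:.0f} TB usable, {overhead:.0%} parity overhead")
```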

As another benefit, a VWR array also helps throughput; more HDDs generally means more performance.

Typical performance and how to measure

Whether using a single array or multiple arrays, once there are between 23 and 30 drives in a single pool, it is possible to get the maximum performance from a BlackPearl NAS system. As mentioned in the introductory paragraph on performance, there are many variables that affect performance.


The number of data streams also has a significant impact on performance. With a single stream, if a user sets up a VWR pool of 23 drives (20+3) with an expectation of 1GB/s typical performance, using a Windows/CIFS mount to drag and drop large 100MB files, they will only see around 100MB/s of performance. This is because they only have one stream of data. Multiple streams of data must be sent to BlackPearl NAS in parallel in order to achieve maximum performance.

CIFS impact on performance

The Windows CIFS interface is notoriously plagued by tremendous overhead, including headers, pauses to receive responses from the file system, as well as a variety of delays known only to the programmers at Microsoft. A single client computer, connected through a high speed 10 Gigabit optical network, is likely only able to realize around 100MB/s, even with optimal large (100MB) files, when using a single stream. The constraint is not in the network or the NAS system, nor the number or size of arrays (a user can monitor network usage as well as CPU and disk usage, all of which do not move over the 10% mark), since only a single client is writing data.

BlackPearl NAS is capable of receiving data streams from multiple clients simultaneously, and to truly maximize performance, multiple streams are required. Two distinct options are available: a single client server with multiple streams, or multiple servers. Various free test software suites allow multiple independent streams to be initiated from a single client, but Spectra also typically uses multiple independent clients (multiple virtual machines on a single server and/or multiple physical Windows servers) that each drive large data stream(s) to BlackPearl NAS. In each of these cases, between 1GB/s and 1.25GB/s can be achieved using large files; however, it is recommended for sizing purposes to use the guideline that 800MB/s to 1GB/s is the maximum performance for a single NAS system. If more throughput is needed, then more Master Node heads can be added, although each will be its own pool/volume/share.
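
One simple way to generate multiple streams from a single client is sketched below: several threads each write their own large file to the mounted share. The mount point and file sizes are placeholder assumptions, and dedicated tools (FIO, multiple VMs, or multiple physical clients) remain the more rigorous approach described above.

```python
import concurrent.futures as cf
import os

MOUNT = "/mnt/blackpearl_share"     # assumed CIFS/NFS mount point (placeholder)
CHUNK = 8 * 1024 * 1024             # 8 MiB per write call
FILE_SIZE = 100 * 1024 * 1024       # ~100 MB per file, as in the example above

def write_stream(index: int) -> int:
    """Write one large file; each call represents one independent data stream."""
    path = os.path.join(MOUNT, f"stream_{index}.bin")
    written = 0
    with open(path, "wb") as f:
        while written < FILE_SIZE:
            f.write(os.urandom(CHUNK))
            written += CHUNK
    return written

with cf.ThreadPoolExecutor(max_workers=8) as pool:   # 8 concurrent streams
    total = sum(pool.map(write_stream, range(8)))
print(f"wrote {total / 1e6:.0f} MB across 8 parallel streams")
```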

File size implications on performance

Due to the nature of the ZFS Copy on Write “virtual cache” system, larger files tend to provide better performance.

Assuming multiple (7 in our testing) optically-connected servers, each with streams of large 100MB files, results in long-term BlackPearl NAS performance throughput of about 1GB/s, as sent by FIO and measured by the BlackPearl NAS performance measurement system (network not saturated and CPU at only 33%).

Also, remember the performance of a single server and single stream (100MB files using a CIFS drag/drop) has an average of 100MB/s. If the user uses a mix of 1x 100MB files and 20x 4KB files (in the CIFS drag and drop), it slows down to an average of 70MB/s, using the same system setup.

NFS impact on performance

Typically, NFS presents a more streamlined file interface and is the choice of many high-performance computing systems. Unfortunately, NFS has another “feature”, where clients send a sync command to the NAS system regularly, sometimes as often as each second or on each write. This issue is partially mitigated with high quality Enterprise Drives; however, the ZIL (see below) can mitigate this even more and increase write performance when NFS is mounted synchronously.

23

It is important to note that the key issue is that NFS can sync very often. Depending on what operating system is used and what client applications are used, it may be possible to reduce syncs and therefore greatly improve performance.

ZIL kit – Write Cache Acceleration With NFS, syncs are the most common cause of performance issues. Spectra has also seen some poorly written applications that use Block (SCSI) communication against NAS systems (even on CIFS), and these applications have poor performance against certain drive types. Whenever there are performance issues with these drives, a ZIL kit can be added and configured on a pool, to get an acceleration on the write performance. ZIL is a ZFS write cache applied to a single pool of drives.

For those not already intimate with the ZFS file system, a ZIL is a ZFS Intent Log. Essentially, it is a very fast write cache that is placed in between main system memory and the spinning hard drive media. The idea is that data can be written there first as a fast cache, the client can sync to the ZIL, and then internally the system can stream off to the hard disk pool. Spectra offers a hardware kit to upgrade a BlackPearl NAS system for faster write performance using this technology built into ZFS.

Multiple ZIL kits are offered that include fast Solid State Drives, which are set up as mirrored pairs (or set of mirrored pairs) to a single Pool on BlackPearl NAS. The number of SSDs is in direct relation to overall system capacity, as the SSDs need to be actively managed to prevent premature wear-out and are mirrored for redundancy. The SSDs used are a smart variety that automatically distributes writes in order to evenly distribute data and provide an even wear pattern. More SSDs provide more total capacity before wear-out and some incremental performance gains.

• A system with no Acceleration ZIL kit is capable of typical maximum 300MB/s write performance. • A two (mirrored) SSD ZIL kit, is capable of 2.2 Petabytes written into the system and approximately raises the maximum performance to 600MB/s of write throughput. • A four SSD ZIL kit, striped and mirrored, is capable of 4.4 Petabytes written into the system and approximately raises the maximum performance to 700MB/s write throughput. • A six SSD ZIL kit, striped and mirrored, is capable of 6.6 Petabytes written into the system and approximately raises the maximum performance to 800MB/s write throughput.

Note that a ZIL will only work for a single pool, all volumes and shares on that pool will use the ZIL.

System CPU and DRAM impact on performance Unlike traditional hardware-based RAID systems, BlackPearl NAS has a software file system and software RAID system. In the past, the only way to achieve reasonable performance, particularly for rebuilds, was to use custom processors with dedicated math engines, to perform the XOR and multiply functions quickly. By tying system performance to the processor speed improvement curve, software RAID systems have been able to provide exceptional performance gains. A 4U BlackPearl NAS system, in particular, uses dual CPU (each Xeon is 6 cores), plus 64GB of DRAM to support the ZFS file system. The system memory acts as a virtual cache and also supports fast rebuilds of RAID blocks as necessary.

24

Network Setup Considerations Most throughput and connectivity issues, and the vast majority of Spectra support calls are a direct result of network setup inconsistencies.

The management port is separated from the data ports on BlackPearl NAS. The management port and data ports each have their own default routes. This does not prevent a user from having management and data utilizing the same network if desired.

The basic steps on configuring the management and data ports for access to the network are simple and straightforward. However, each customer network environment is unique and may require some additional troubleshooting, in order to properly connect to a BlackPearl NAS system, and utilize the 10 Gb interfaces properly. Configuration The first step is to configure the management and data ports accordingly, using the web management interface.

Connectivity to the Data Network The data path is supported in the following manner:

• Single 10 Gb logical connection utilizing the onboard 10 Gb (10GBase-T copper) physical port OR • Single 10 Gb logical connection utilizing one of the 10 GbE physical ports on the PCI expansion network interface card (NIC) OR • Single 20 Gb logical connection utilizing Link Aggregation on two 10 GbE physical ports on the PCI expansion network interface card (NIC) OR • Single 40 Gb logical connection utilizing one of the 40 GbE physical ports on the PCI expansion network interface card (NIC) OR • Single 80 Gb logical connection utilizing Link Aggregation on two 40 GbE physical ports on the PCI expansion network interface card (NIC)

4U BlackPearl: Example Network Diagram The following diagram shows an example environment with a BlackPearl NAS system used in an architecture supporting data transfer and data management networks.

25

The user should assign the appropriate IP address, either statically or via DHCP, to the management and data ports. If setting the MTU to something other than 1500, the user should ensure that their switch configuration (all switches in the data path) supports larger MTU settings as well as all hosts on the network. BlackPearl NAS can support Jumbo frames (MTU=9000), but all switches and hosts on the same network must be configured to support Jumbo frames if this is chosen, or performance may be degraded. Additionally, all switches must also be able to support Link Aggregation if the user specifies in the BlackPearl NAS network configuration to aggregate or “trunk” the data ports together to provide higher bandwidth. Switches must support LACP and hash the destination IP addresses as there are multiple methods for link aggregation. The user must manually configure LACP on those switch ports. LACP does not get enabled automatically. Link Aggregation Notes Different switches use different methods of routing traffic from clients to NAS servers. There are also many different network configurations to move data from clients to NAS servers. For example, some Cisco switches route traffic based on the MAC address and the IP address. The BlackPearl NAS unit presents only one MAC and IP address when the data ports are aggregated via DHCP. If static link aggregation is chosen, the unit presents only one MAC address but can have up to 16 IP addresses aliased to the MAC address. It is up to the switch to rotate data transfers amongst the ports being used to physically connect to the BlackPearl NAS system in order to achieve the highest throughput possible. This is an issue when a customer has only a single client connected to BlackPearl NAS system and is measuring performance. A customer may only see 100 MB/s performance over two aggregated data ports since the other ports are not being utilized by the switch. If one were to connect multiple clients to a BlackPearl NAS system, or mount a share multiple times using different IP addresses and start transfers from all three clients, one would see up to the max performance (1GB/s typical). A user may have to configure more than three IP addresses on the BlackPearl NAS to get the switch hashing algorithm to utilize all physical ports and maximize performance.

26

Once configured properly and attached to the network, the status in the UI should indicate the speed of the connection and whether the port is active. The link lights on the network ports should be on and active at both the NAS system and network switch.

The user should be able to “ping” the assigned IP for the given management or data port that was set during the configuration of the ports from a client external to the BlackPearl NAS on the customer network. If not, please follow the troubleshooting tips below to ascertain if the problem is a network setup issue.

Using Static IP

(DHCP would only have one IP address)

BlackPearl NAS Implementation Theory of Operation Software Components Two mirrored boot disks in a BlackPearl NAS system are dedicated to the software necessary to run the system. These, along with the controller, manage the entire NAS system.

Master Node operating and file system: The BlackPearl NAS Master Node has internal, specialized dual boot disks, (separate from the data disks) that support the unit’s operating system, integrating a logical volume manager and file system, used for data stored on the unit. The mirrored boot disks control the structure and management of data storage. The operating system provides data verification to protect against corruption.

SNMP Server: The system accepts SNMP queries used by some network management applications, making it easy to track BlackPearl NAS system status in the context of the entire network.

NFS and CIFS Servers: The NFS and CIFS servers running on the Master Node provide network file system access to host computers over an Ethernet network. Most major operating environments, including Microsoft Windows, Apple Macintosh, UNIX, and Linux, can access NFS or CIFS shares.

27

Web Management Interface: The web graphic user interface (GUI) provides browser-based configuration, management, and monitoring of the system. The web GUI is available on the management network.

Virtualizing Disk using Pools The following figure illustrates the physical disks that are installed on the front of the Master Node or an Drive Expansion Node.

Note: Throughout this example, physical hard drives are shown for illustrative purposes only.

The first step in configuring a BlackPearl NAS system is to create a storage pool. A storage pool is a virtualization of multiple disks; in other words, it is a logical grouping of a set of physical drives. A pools’ disk drives are the location where NAS volumes physically reside. Volumes are described in a later section.

Multiple Pools In this example, there are of 23 disks physically installed in the BlackPearl NAS Master Node. However, only 22 disks for data storage are selected, leaving one disk that is set aside as a hot spare. (Note the top left sled in a BlackPearl NAS Master Node is reserved to be used by the Bezel.)

The preceding image shows the 22 disk drives divided and logically grouped into two distinct pools: Pool1 with 10 drives, and Pool2 with 12 drives. In this example, you have defined 2 large pools of disk rather than having to work with 22 individual physical disks. Note, that graphical selection of the physical hard drives is shown for illustrative purposes only (left versus right) because a user simply specifies how many drives they want in the pool, the user does not have to select specific, individual drives.

28

The grouping of drives within a pool is called an array. When using a double parity configuration, for example, the 12 drive array will contain 10 drives for data storage and 2 drives for parity. One or more arrays may be configured within a pool to improve performance, increase storage, and provide better data protection. Arrays are described in more detail below.

Configuring a Pool requires a pool name, selecting a drive type, and how many drives for the pool. Set Level of Protection When configuring a pool, after assigning a name and selecting the type and number of disks, set the level of protection the system will provide for data in each pool. The system uses this level of protection, along with some optimization choices, to further subdivide the number of disks in each pool into arrays. The system creates arrays automatically, simplifying setup while balancing protection, capacity, and performance.

The choices of protection (and optionally lack thereof) for a set of disks within a single pool include: • None (Striping): The data is striped across all disks in the pool, without redundancy and without storing parity, as a single array. In a stripe-only configuration, if you lose one disk, all data in the striped data set is lost. This may be appropriate for data that is intentionally short-lived. All of the disk capacity is available with this method of storage. • Mirror: This setup creates redundant data disks. In doing this, only half of the disk capacity is available. However, all data is available even with disk failure, and the failed drive can be rebuilt quickly using the global hot spare and the mirror of the failed drive. • Parity: This is similar to setting a traditional RAID, with some notable improvements on RAID (see later section on BlackPearl NAS Advantages). Parity information is a kind of metadata that systems use to rebuild data if a disk (or multiple disks, depending on the parity level) fails. The level of parity determines the number of drives that can fail without risking data loss. o Single Parity: One disk in each array can fail without putting data at risk—if a disk fails, all data can still be rebuilt o Double Parity: two disks in each array can fail without putting data at risk—all data can be rebuilt o Triple-Parity: up to three drives in an array can fail without losing data BlackPearl NAS Advantages With ZFS, the BlackPearl NAS file system uses checksums, providing data integrity advantages not available in standard RAID configurations. One of the significant advantages is that the system prevents what is called a write-hole in RAID. With RAID, data written to disks can be corrupted if a power loss occurs during the data write process.

29

With BlackPearl NAS, the ZFS file system does not overwrite existing data—it writes data to a new location on the disk. Further, until the parity information is created and the checksum is verified, the operation is not committed as completed to the system. This prevents a write, and resulting restore error, that can occur in RAID systems, when a problem interferes with the system while data is being written to disk—in a RAID system, the data is committed as sent, without verifying the integrity of data. Balancing Capacity and Performance with Parity After selecting a parity level, the user will select the Optimization Level by using the slider bar (pictured below). Depending on the level of protection selected, the slider bar configures the arrays within the pool, allowing the user to balance between performance and capacity. The selected protection level (parity) and optimization level (slider position) determine how the disks in the pool are divided into sets of matching size, with each set referred to as an array. The pool then stripes all the arrays into a single pool. As the protection level or optimization level change, the pool overhead (as a %) changes as well. The overhead and usable formatted capacity are both displayed toward the bottom of the Pool configuration screen.

Changing the slider bar and changing the number of drives in each array gives the admin user the flexibility to decide on how many drives they want to expand a pool by (at a later time) since you can add another array of the same size at any time to the existing pool.

Below is a side-by-side comparison of an 8-drive pool, requiring double parity, comparing the Capacity optimization setting to the Performance optimization setting. In this example, when more Performance is required, the system will automatically configure the pool into equal sized arrays and will stripe the data across the arrays, providing higher data throughput. Because the number of double parity arrays increases, the pool will sacrifice the available capacity, in order to provide enough parity data space. Depending on the total number of data drives and protection level selected, more arrays will be created to allow for an increase in performance.

30

For another example using triple parity, a pool of 18 disks can be automatically grouped as follows: • A single array of 18 disks • Two arrays of 9 disks • Three arrays of 6 disks Then when the existing pool needs to be expanded, it would be expanded by that increment – 18, 9 or 6 – HDDs respectively as the set increment.

For example, the system never subdivides a pool of 18 disks using an array size of five disks, because this grouping wastes capacity. BlackPearl NAS will not configure a system so that one or more disks in an array cannot be used. However, choosing a pool drive number with more primes in it will provide more slider bar options. For example, 30 drives which are divisible by 2, 3, and 5. Also, once a pool’s array size is defined, the number of disks in the array can’t be changed. To expand the pool, add the number of disks used in the pool’s arrays (such as 6 disks per array). You can always create additional pools with alternate array sizes. It is also very easy to move a Volume from one pool to another.

Parity Level of Pool Array Size/ Subset of Pool All arrays within a pool are the same size Double Parity • 4 to 17 disks per array • If the number of disks in the pool cannot be divided by a number in the specified range of 4-17, then all disks are used in a single array. For example: 31

Parity Level of Pool Array Size/ Subset of Pool All arrays within a pool are the same size o a pool with 19 disks set to double parity uses one array of 19 disks o a pool with 21 disks is divisible by 7 (within the 4-17 range), so three arrays of 7 is used Triple-Parity • 6 to 19 disks per array • If the number of disks in the pool cannot be divided by a number in the specified range of 6 to 19, then the pool size equals the array size. For example: o a pool with 23 drives set to triple parity uses one array of 23 disks

The system configuration works with these parameters, which results in some disallowed array sizes. The following table shows some examples of permitted and disallowed array sizing. Examples of Array Sizing Parity Level Pool made up of 10 disks Pool made up of 12 disks Double parity— • 10 disks per array - one array • 12 disks per array - one array per pool default value per pool • 6 disks per array - two arrays in the pool (4-17 disks per array • 5 disks per array - two arrays • 4 disks per array - three arrays in the or all disks in in the pool pool pool=array) Not allowed: 2 disks per array, Not allowed: 3 disks per array, because because each double parity each double parity array must have at least array must have at least 4 disks. 4 disks. Triple parity • 10 disks per array - one array • 12 disks per array - one array per pool (6-19 disks per array per pool • 6 disks per array - two arrays in the pool or all disks in Not allowed: 5 disks per array, Not allowed: 4 disks per array, or 3 disks pool=array) because each triple parity array per array, because each triple parity array must have at least 6 disks. must have at least 6 disks.

To continue with the earlier example: Pool1 is made up of ten disks and is mirrored. Pool2 is made of 12 disks and is protected using double parity. In this example, each pool is divided into arrays that work with the parity level assigned. (Note that this example is intended only to illustrate the principles behind a BlackPearl NAS system configuration.) Considerations in Setting Protection Levels and Balancing Capacity and Performance The number of disks per array is a result of setting the Protection level and then balancing capacity and performance with the Optimization level.

Consider the following when configuring the system:

32

• Capacity: With more disks in an array (larger array), the system uses fewer disks for parity (less overhead) so there is a greater capacity. At the same time, this sacrifices some performance; larger arrays aren’t read as fast as smaller arrays. • Speed of data reads: With fewer disks in an array (smaller array), the data is read more quickly, enhancing performance and increasing the rebuild time. The tradeoff is that the increase in performance reduces overall capacity because each additional array requires dedicated parity creating more overhead. Which means that parity space is used for data protection rather than simple data storage. • Expansion: The number of disks in an array is the base unit of expansion in a pool. That means that, if you have, for example, a single array of 30 disks, you can only expand the pool in increments of 30 disks. Alternately, if your pool uses double parity and is set up so that each array has 5 disks, then you can add any multiple of 5 disks at a time to the pool. This ensures that all disks can be used. Define Pool Availability to System Network/Protocols The next step in configuring a BlackPearl NAS system is to define the logical subsets of each pool. These are referred to as volumes. The volumes contain a file system and a share. Volumes are associated with pools, independent of array size. Thin Provisioning BlackPearl NAS uses thin provisioning so that space on the system is allocated only at the point of data writes. If a minimum value for a volume has been defined, then that space is allocated specifically to the volume and cannot be used by another volume. You can also set a maximum value. Only the minimum value is allocated specifically to the volume; the space beyond the minimum and up to the maximum value is not specifically assigned to that volume until the write occurs.

Oversubscribing The possible size of each volume is limited only by the size of the pool. Further, the sum of the volumes sizes can be larger than the space available in the pool. This is referred to as oversubscribing the space. The examples illustrated in this section show volumes that are sized so that together they use more space than is physically available. This lets the system allocate space only as data is written. This feature reduces the need for manual intervention. In this example, Pool2 has been divided into three volumes, whose aggregate maximum capacity is larger than the size of Pool2. The volumes are ExecVol, EngVol, and SupportVol.

33

For example, if the SupportVol needs space that the other two volumes have not yet claimed but is within the maximum capacity set for the SupportVol volume, then SupportVol can expand without requiring the intervention of a system administrator. The BlackPearl NAS system notifies the administrator if the space available within a pool is low, set with a watermark; alerting the storage admin, for example, to expand the pool, delete some data from the pool, or create a new pool for the data. Further, take note that although you can remove data from a pool, the allocated pool size remains the same, allowing other volumes to utilize the storage that was made available.

Additionally, the Maximum Size of a volume may be left blank, allowing a volume to expand to the maximum capacity of the pool. If multiple volumes are configured in a pool without a maximum size limit, storage will be available to each volume on a first-come-first-serve basis. Assign Access to Volumes After the pool, arrays, and volumes are configured, the next step is to define how the BlackPearl NAS system connects to the data network using mount point or shares that a network can access. The system’s Web Management Interface makes it easy to configure the data network connection.

After the data network IP address is configured, set up a share on the volume with the network by using the NFS or CIFS service on the BlackPearl NAS system. Volumes are associated with either CIFS or NFS, never both. Once the system is configured, use a data mover to write and read data to the system. Raw vs Usable space When an array is utilizing parity disks, it is important to understand the difference between raw and usable space. In a double parity array, consisting of 16 x 8TB drives, the raw capacity would simply be 16 x 8 (TB) = 128 TB. With double parity, the equivalent space of 2 x 8TB drives would not be used for data storage, so the usable capacity would be calculated with 128TB – (2 x 8TB) = 112 TB.

It should also be noted that ZFS systems perform most efficiently when not fully loaded. Spectra recommends adding a high water mark alert at 80% capacity utilization and expanding when this level is reached. ZFS and enterprise drives can sustain performance up to 96% full before seeing a detrimental impact on write performance.

When developing a system with Spectra Logic representatives, a clear understanding on the size required is crucial; please specify whether Spectra needs to size the system by using: RAW capacity, Usable Capacity (excludes: parity, formatting, and spare drives) or formatted usable (which includes the parity overhead and the ZFS file system formatting overhead). Also specify whether we are talking base 10 or base 2.

34

Base 10 is TB or terabyte or 1 TB = 10^12 = 1,000,000,000,000 Byte Base 2 is TiB or tebibyte or tera(binary)byte which is 1 TiB = 2^40 = 1,099,511,627,776 Byte

Note that there is a difference between TB and TiB. One hundred terabytes (100TB) equals 90.95TiB. Sizing of System as Configuration Note that Windows and BlackPearl NAS are actually using 18 Drive Array (16+2) with 4 Spares reporting TiB (base2) even though it says TB (Base10) TB (Base 10) TiB (Base 2) in the GUI. This has implications, however, as the RAW 1,216 TB 1,106 TiB customer needs to be very specific as to which Usable 1,024 TB 931 TiB “scale” they are referring when asking for a particular capacity (normal TB or the Base2 TiB). Formatted Usable 984 TB 895 TiB

The example sizing of a NAS system with 1,000TB Number of Number of 16TB Enterprise Chassis SATA Drives usable, using 16TB Enterprise SATA with a 16+2 array Grand Total 2 76 size. Our array size would provide 1,024TB usable, and the formatted usable would be approximately 35-Bay Master Node 1 35 984TB after ZFS parity and file system overhead. 107-Bay JBOD 1 41

As low as 6¢/GB (USD) – how much capacity is required to get this pricing BlackPearl NAS hardware is available for as little as 6 cents per gigabyte (USD). BlackPearl NAS always starts with the Master Node chassis, with the server controller, and where some amount of storage is installed The Master Node will cost a little more, $/TB metric, than the expansion chassis, due to the extra components and lower hard drive density. The lowest priced storage comes as soon as a full Expansion Node/chassis is ordered. A full 4U Drive Expansion Node holds 107 drives, and the 4U Expansion Node full with 107 16TB drives is priced at 6¢/GB or $102,720 per expansion node. This pricing is available on the very first full Expansion Node, as well as any successive upgrade Expansion Nodes. That way, a system reaches the 6¢/GB target with only the first full Expansion Node. The drives that go into Expansion Node(s) are sold in single drive increments, meaning any amount of drives can be purchased with each Expansion Node to meet the exact sizing requirement. Expansion of Capacity Any BlackPearl NAS pool can be expanded up to the maximum of a single rack (40 RU) capacity, with all storage managed by a single Master Node controller – using a scale-up storage model. Multiple Master Nodes can be used independently to expand the overall storage system indefinitely. Each capacity increment is individually priced with the best $/GB achieved when buying full Expansion Nodes.

BlackPearl NAS units are licensed for a specific number of disk drives, so upon expansion the drive pack will come a new license keys. Note that disk drive license keys come only with initial purchase or disk expansion packs, but are not time sensitive, they never expire.

Expansion Nodes require the addition of a PCI-E SAS HBA card in the Master Node controller in order to add more SAS cabling from the controller to the new Expansion Node(s). All Drive Expansion Node units require Spectra on-site installation services. A Master Node only system, without drive expansion node units, can be self-installed by the customer.

35

Physical Buildout Considerations Drive Expansion Node Weight and rack considerations Each fully loaded Drive Expansion Node unit weighs 258lbs (117kg) and comes with heavy-duty slide-out rails, as well as a cable management system. In order to account for safety concerns, Spectra recommends best practices that include:

• Install the Drive Expansion Node starting from the bottom of the rack – putting the heaviest components on the bottom of a rack o In a full rack system with 9 Drive Expansion Node units, the Master Node must be in installed the middle (+/- 6RU) of the rack • Ensure any rack used meets weight and structural requirements • Carefully evaluate any raised floor static load weight specifications to make sure that current, as well as potential future expansion, do not overload the floor o Extra caution should be used if racks will be moved while loaded, evaluating the dynamic loading for the floor • Spectra highly recommends bolting the rack to the floor to prevent tipping o Note that Spectra racks for this system come with bolt down connectors that should be used only on reinforced floor systems. Simply bolting the rack to a raised unstable floor can create a significant safety hazard. • See users guide and installation guide with best practices for a full list of requirements

Cooling considerations Proper temperature control is perhaps the single most important element to support a stable long-term storage system. Hard drive maximum temperature is, in fact, the most critical aspect to monitor in your BlackPearl NAS system. When expressed as annual failure rate, the nominal operating temperature, including all contributors such as self and case heating, is 40 degrees C. Even a temperature increase of 10 degrees C drastically increases the annual failure rate of drives, as shown in the following graph.

For this reason, it is critical that both ambient temperatures are carefully controlled and adequate airflow is provided to cool BlackPearl NAS units. Cooling is from front to back and requires some special considerations for the Drive Expansion Node.

• See users and installation manual for details on cooling recommendations as well as buildout • In Expansion Nodes: o Blank slots will arrive with empty drive sleds installed.

36

o If replacing empty drive sleds with actual drives, save the blank sleds for future use. • System will automatically monitor temperatures and alert the user via the front panel Visual Status Beacon, email alerts, and in extreme cases, audible tones. If temperatures reach a critical threshold, the system will automatically shut down.

Note that the system records temperature over time and excessive violations of the recommendation in the user manual will void warranty and support.

Add-in card options: BlackPearl NAS 4U systems come with a dual port 10GigE optical network card standard. The network card is optional on 2U NAS Systems.

Units can be upgraded with a different network card, a 10GBase-T copper network card, or a 40GigE network card.

Each Drive Expansion Node requires dual SAS cables, connected from the controller unit PCI-E SAS HBA card, into integrated ports on the Expansion Node (these SAS ports are split, so one SAS port maps to some of the drives in the Expansion Node, while the other SAS port maps to the rest of the drives).

Monitoring and Maintenance BlackPearl NAS systems have a simple Web Management Interface (also referred to as the GUI) that is extremely easy to use, whether configuring, managing or monitoring the system status. The system interface is password-protected and lets you remotely monitor and manage the system via a web browser on the management network. The first screen, the Dashboard, shows general system data and performance data. In the menu bar easily navigate between the Dashboard screen and the Configuration, Status, and Support screens.

The dashboard provides an overview of configured storage pools, volumes, and shares, which are the logical components used to interact with the data storage capacity provided by the system.

37

Status Bar The Web Management Interface provides the status of the system at a glance, providing component status and information about any messages that require attention. The status bar, at the bottom of every screen, provides the following: • Icons that indicate hardware status at a glance • Severity, date, and time of the most recent warning or error message • Controls for rebooting and shutting down an array Visual Status Beacon The Visual Status Beacon light bar in the front bezel provides an at‐a‐glance status of the system. The light bar changes color to indicate the status. If a BlackPearl NAS system requires attention, the LED bar in the bezel helps administrators identify the unit quickly when walking down a rack row full of system. The visual status beacon is included with 4U Master Node, 2U Master Node, and all Expansion Nodes.

Scalability: The Modular Design of Spectra Storage Systems Spectra Logic’s storage systems are designed with modular media and components that let users add or swap drives and replace components stored on site as needed, most with no downtime. Modular Expansion: Scaling the BlackPearl NAS System To scale-up a BlackPearl NAS system to meet your site’s evolving storage requirements, you can easily add capacity by adding an Expansion Node. Each single 4U Master Node supports up to 9 Expansion Nodes. The Expansion Node connects to the Master Node using dedicated, direct SAS cabling.

On the BlackPearl NAS, the operating system data is stored on dedicated mirrored operating system (boot) drives.

Drive storage bays for each node/chassis:

Front Drive Back Drive Top Load Total Drive Bays Bay Access Bay Access Access - Master Node (4U) 35 23 12

Master Node (2U) 11 11 - -

Drive Expansion 107 - - 107 Node (4U) * The 4U Expansion Nodes for 2U Master Nodes have one more drive available (non-illuminating bezel)

The Front and Back Drive Bay Access use drive sleds that slide into bays in the system and lock in place. Use the handle on the drive sled to remove and insert a sled, and the latch to lock the inserted drive sled.

38

When the chassis is installed less than full with drives, empty drive sleds are installed in the unused drive bays, to prevent contaminants from entering the enclosure and to maintain proper airflow. To add a drive, order one from Spectra Logic, then replace the bays empty drive slide with the new drive. You may then use the Web Management Interface to add the drives to a pool or to create a new pool. Keep in mind, expanding an existing pool requires the same number of drives as a single set (array) for the pool, see Balancing Capacity and Performance section for reference.

Management and Reporting Features BlackPearl NAS includes many built-in management and reporting features that simplify its use and monitoring. The Web Management Interface, shown in an earlier section and in following sections, increases system ease of use. Command Line Interface The command line interface provides an alternate method of accessing the NAS system, duplicating the functions available through the graphical interface. System administrators may use SSH to remotely access and manage the system using commands and associated parameters. SNMP Management Protocol BlackPearl NAS systems support SNMP (Simple Network Management Protocol), with a MIB (Management Information Base), available through the Web Management Interface that can be used to communicate between the system and other servers on the network. Performance Monitoring The performance pane displays the performance of • Pools • Drives • CPUs • Network

39

System Messages The NAS systems provide ready access to error messages, including flagged messages that may require attention.

Hardware Status Through a web browser from any location, you can use the Web Management Interface to check hardware status. From the interface’s main dashboard, select hardware then select the component you are checking on. You can check the status of data drives, fans, power supplies, and the system. The system screen provides information about processors, memory, and the boot drives. Network Configuration The system displays information about the configuration of the network, including all network connections and status of each, DNS servers, and email configuration. This greatly simplifies remote management of the NAS system.

Support Tools and Continuity Features The following are some integrated features of BlackPearl NAS systems that help to identify potential issues and allow administrators to proactively address them before they interfere with ongoing system operations: AutoSupport ‘Phone Home’ Feature BlackPearl NAS systems have an AutoSupport feature that can be configured to automatically generate and send e-mail messages about issues or problems to designated e-mail users, including directly to SpectraGuard Technical Support. Certain critical error messages may also trigger the auto-creation of a technical support incident, depending on the severity of the issue. Hot-Swappable Hard Drives Hard drives in the BlackPearl NAS are in drive sleds that can be pulled out easily from Master Nodes; the Drive Expansion Node does not use sleds. With hot-swappable drives, users can replace a failed drive without the use of tools, and without requiring the unit to be powered down.

40

Global Spare Hard Drives It is recommended that users install one or more spare HDD’s in the NAS system to act as a Global Spare. In the event of a disk failure, intelligent rebuilds will begin immediately on the spare HDD with no user intervention required. Intelligent Rebuilds When a drive fails, instead of rebuilding the entire drive, BlackPearl NAS systems rebuild only the portion of the drive that held data, potentially saving hours on rebuilds. Redundant Power Each BlackPearl NAS Master Node and Expansion Node ships with two redundant, high-efficiency power supplies.

BlackPearl NAS Support Lifecycle Phases All disk and server-based products have a limited shelf life before HDD and other hardware failures start to occur at an accelerated rate year over year. The BlackPearl NAS Support Lifecycle Phases are designed to help users manage their investment in BlackPearl NAS technology and to plan ahead for refresh expenditures as the system approaches the end of its supportable life. The BlackPearl NAS Product Lifecycle consists of two phases:

Prime Support Phase- Three years from date of shipment (or date of install if PS Installation Services are purchased).

Extended Support Phase- Two years beyond Prime Support Phase, or total of five years from date of shipment (or date of install if PS Installation Services are purchased).

Support contracts available for purchase with a BlackPearl NAS system must align with one or both of these phases. Customers should choose the support contract duration (3yrs or 5yrs) that is most closely in line with their planned depreciation and/or technology refresh schedule. Regardless of the support contract duration that is purchased, a BlackPearl NAS system, including all subcomponents and/or Expansion Nodes, will reach End of Support Life (EOSL) after 5 years (at the conclusion of its Extended Support Phase).

There is a higher cost associated with renewing a support contract through the Extended Support Phase after the time of sale vs. purchasing a contract for the full 5-year support life duration upfront.

Upgrades made to the system after the time of the original sale will assume the same lifecycle phase(s) of the Master Node that the components/Expansion Node(s) are added to. The incremental support cost associated with additional components will be charged to be coterminous with the then-current support contract end date of the Master Node.

End of Support Life (EOSL): A technology refresh is required once a system reaches EOSL. A new BlackPearl NAS system is purchased and the support lifecycle phases begin again, ensuring that the data migrated from the EOSL unit is protected on a fully supported system.

SpectraGuard Support Overview There are three levels of Hardware (HW) Support for component replacement. Two of the offerings available include onsite repair service. There are two different levels of Software (SW) telephone 41

support to troubleshoot all issues. The following service options are available for purchase with a BlackPearl NAS system. Hardware (HW) Support A variety of Hardware (HW) replacement options are available. Disk- and Server-based systems, from a HW perspective, are highly reliable devices with very low failure rates at a sub-component level. There are three HW support offerings, from fast onsite repair service to an affordable self-service offering.

Check with your sales representative for the local support offering(s) in your region and country, not all support offerings are available in every country or region.

SpectraGuard Basic HW Support Basic HW Support includes standard shipping of replacement parts after final telephone diagnosis is complete. Components are replaced by the customer with telephone assistance; onsite repair services are available (see NBD and 4HR).

All drives (HDDs or SSDs) come with Basic HW Support. Additional spares should be purchased so intelligent rebuilds may begin immediately after a failure, and the replacement HDD can be mailed to replace the failed drive, acting as the new global spare drive.

SpectraGuard NBD (Next Business Day) On-Site HW Support NBD HW Support includes an onsite service option to deliver and repair or replace any failed HW components. Dispatched parts will arrive within the next business day after final diagnosis has been made by the telephone technician.

SpectraGuard 4HR (Four-Hour) On-Site HW Support 4HR HW Support includes the fastest onsite repair service available. After final diagnosis and the dispatch is made, then a service representative will be onsite within 4 hours (24 hours a day, 7 days a week, 365 days a year).

The 24x7 telephone SW Support is required and included with 4HR on-site HW Support. Software (SW) Support Each HW support will come with one of the following software support options; note that 4-Hour onsite is always paired with 24x7 telephone SW Support. The timezone for 9x5 telephone SW support is in the local time zone of the storage device.

Note: Telephone diagnosis must be completed before hardware replacement parts can be shipped or service dispatched.

SpectraGuard 9x5 Telephone SW Support Telephone Support is available during local business hours - 08:00 to 17:00 during the standard business week of the country. Includes Software (SW) support, SW upgrades and SW patches (security or bug fixes). The timezone for 9x5 telephone SW support is in the local time zone of the storage device.

42

SpectraGuard 24x7 Telephone SW Support An option to add 24x7 telephone support to any service contract is available. Includes Software (SW) support, SW upgrades and SW patches (security or bug fixes). The 24x7 telephone support is automatically included with SpectraGuard 4HR HW Support.

Professional Services Overview Spectra Logic’s Professional Services group offers on-site installation services to assist with racking, setup, and configuration of the BlackPearl NAS system, as well as providing training. Additional on-site services are offered from Professional Services for prevention, maintenance, and site-specific consulting. These services include: • Preventive maintenance • Customized training • Optimization services • Media migration support • Security assessment and consulting • System consolidation • Backup and disaster-recovery consulting • System relocation and reintegration services

Specifications Environmental Specifications Description Values Temperature range - operating environment 10° C to 35° C 50° F to 95° F Temperature range – environment when -40° C to 70° C -40° F to 158° F storing & shipping Relative humidity - operating environment 8%-90% (non-condensing) Relative humidity –environment when 5%- 95% (non-condensing) storing & shipping Altitude - operating environment Sea level to 3,048 meters Sea level to 10,000 ft. Altitude – environment when storing & Sea level to 12,000 meters Sea level to 39,370 ft. shipping Maximum wet bulb temperature - operating 29° C 84° F environment Maximum wet bulb temperature– 35° C 95° F environment when storing & shipping

Power Unit Specification Current 4.2 amps (4U master) 4 amps (2U master) Input frequency 50-60 Hz 4U Master Input voltage 100-140 VAC, 12-8 A, 1,000 W maximum 180-240 VAC, 8-6 A, 1,280 W maximum 43

2U Master Input voltage 180-240 VAC, 11-4.5 A, 920 W maximum Expansion Input voltage 200-240V AC, 15 A, 2,000 W maximum

Data Storage Parameter Specification Drive types 7200 RPM Enterprise (SAS & SATA) Single drive capacity, native 4TB, 8TB, 12TB (SED), 12TB, 16TB System capacity options 2U Master Node • Maximum – 11 drives 4U Master Node • Maximum – 35 drives Drive Expansion Node (4U) • Maximum – 107 drives System Parameter Specification CPU 2* multi-core processors System disk drives Two HDDs

Memory 64 GB (8** x 8 GB DIMMs) Interface • integrated, two 10 Gigabit 10GBase-T Ethernet ports connections (one dedicated for management) • 1 dual-port 10 Gigabit Ethernet NIC*** • 1 dual-port 40 Gigabit Ethernet NIC (optional) with 2U Master Node: *1, **4, ***optional

44

[Grab your reader’s attention with a great quote from the document or use this space to emphasize a key point. To place this text box anywhere on the page, just drag it.]

About Spectra Logic Corporation Spectra Logic develops data storage and data management solutions that solve the problem of long-term digital preservation for organizations dealing with exponential data growth. Dedicated solely to storage innovation for 40 years, Spectra Logic’s uncompromising product and customer focus is proven by the adoption of its solutions by leaders in multiple industries globally. Spectra enables affordable, multi-decade data storage and access by creating new methods of managing information in all forms of storage—including archive, backup, cold storage, private cloud and public cloud. To learn more, visit www.SpectraLogic.com.

Copyright ©2019 Spectra Logic Corporation. All rights reserved worldwide. Spectra and Spectra Logic are registered trademarks of Spectra Logic. All other trademarks and registered trademarks are property of their respective owners. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. All opinions in this white paper are those of Spectra Logic and are based on information from various industry reports, news reports and customer interviews. 45

V1-12919