HP IBRIX 9000 Storage File System User Guide

Abstract

This guide describes how to configure and manage IBRIX software file systems and how to use NFS, SMB, FTP, and HTTP to access file system data. The guide also describes the following file system features: quotas, remote replication, snapshots, data retention and validation, data tiering, and file allocation. The guide is intended for system administrators managing the 9300 Storage Gateway, 9320 Storage, 9720 Storage, and 9730 Storage. For the latest IBRIX guides, browse to

http://www.hp.com/support/IBRIXManuals.

HP Part Number: TA768-96076
Published: December 2012
Edition: 9

© Copyright 2009, 2012 Hewlett-Packard Development Company, L.P.

Confidential software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Acknowledgments

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.

Revision History

Edition  Date  Software Version  Description

1 November 2009 5.3.1 Initial release of HP 9000 File Serving Software

2 December 2009 5.3.2 Updated license and quotas information

3 April 2010 5.4.0 Added information about file cloning, CIFS, directory tree quotas, the Statistics tool, and GUI procedures

4 July 2010 5.4.1 Removed information about the Statistics tool

5 December 2010 5.5.0 Added information about authentication, CIFS, FTP, HTTP, SSL certificates, and remote replication

6 April 2011 5.6 Updated CIFS, FTP, HTTP, and snapshot information

7 September 2011 6.0 Added or updated information about data retention and validation, software snapshots, block snapshots, remote replication, HTTP, case insensitivity, quotas

8 June 2012 6.1 Added or updated information about file systems, file share creation, rebalancing segments, remote replication, user authentication, CIFS, LDAP, data retention, data tiering, file allocation, quotas, Antivirus software

9 December 2012 6.2 Added or updated information about file systems, physical volumes, segment rebalancing, remote replication, Antivirus scans, REST API, Express Query, auditing, HTTP, quotas, data tiering; renamed CIFS to SMB

Contents

1 Using IBRIX software file systems......9
  File system operations......9
  File system building blocks......11
  Configuring file systems......11
  Accessing file systems......12
2 Creating and mounting file systems......13
  Creating a file system......13
    Using the New Filesystem Wizard......13
    Configuring additional file system options......17
    Creating a file system using the CLI......18
    File limit for directories......19
  Managing mountpoints and mount/unmount operations......19
    GUI procedures......19
    CLI procedures......21
  Mounting and unmounting file systems locally on IBRIX 9000 clients......22
  Limiting file system access for IBRIX 9000 clients......23
    Using Export Control......24
3 Configuring quotas......25
  How quotas work......25
  Enabling quotas on a file system and setting grace periods......25
  Setting quotas for users, groups, and directories......26
  Using a quotas file......29
    Importing quotas from a file......29
    Exporting quotas to a file......30
    Format of the quotas file......30
  Using online quota check......31
  Configuring email notifications for quota events......32
  Deleting quotas......32
  Troubleshooting quotas......33
4 Maintaining file systems......34
  Best practices for file system performance......34
  Viewing information about file systems and components......34
    Viewing physical volume information......35
    Viewing volume group information......35
    Viewing logical volume information......36
    Viewing file system information......36
    Viewing disk space information from a 9000 client......39
  Extending a file system......39
  Rebalancing segments in a file system......40
    How rebalancing works......40
    Rebalancing segments on the GUI......41
    Rebalancing segments from the CLI......43
    Tracking the progress of a rebalance task......43
    Viewing the status of rebalance tasks......44
    Stopping rebalance tasks......44
  Deleting file systems and file system components......44
    Deleting a file system......44
    Deleting segments, volume groups, and physical volumes......44
    Deleting file serving nodes and IBRIX 9000 clients......45
  Checking and repairing file systems......45

  Analyzing the integrity of a file system on all segments......46
  Clearing the INFSCK flag on a file system......46
  Troubleshooting file systems......46
    Segment of a file system is accidently deleted......46
    ibrix_pv -a discovers too many or too few devices......47
    Cannot mount on an IBRIX 9000 client......47
    NFS clients cannot access an exported file system......48
    User quota usage data is not being updated......48
    File system alert is displayed after a segment is evacuated......48
    SegmentNotAvailable is reported......48
    SegmentRejected is reported......48
    ibrix_fs -c failed with "Bad magic number in super-block"......50
5 Using NFS......51
  Exporting a file system......51
  Unexporting a file system......54
  Using case-insensitive file systems......54
    Setting case insensitivity for all users (NFS/Linux/Windows)......55
    Viewing the current setting for case insensitivity......55
    Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows)......55
    Log files......56
    Case insensitivity and operations affecting directories......56
6 Configuring authentication for SMB, FTP, and HTTP......58
  Using Active Directory with LDAP ID mapping......58
  Using LDAP as the primary authentication method......59
    Requirements for LDAP users and groups......59
    Configuring LDAP for IBRIX software......59
  Configuring authentication from the GUI......60
    Viewing or changing authentication settings......68
  Configuring authentication from the CLI......69
    Configuring Active Directory......69
    Configuring LDAP......69
    Configuring LDAP ID mapping......70
    Configuring Local Users and Groups authentication......71
7 Using SMB......73
  Configuring file serving nodes for SMB......73
  Starting or stopping the SMB service and viewing SMB statistics......73
  Monitoring SMB services......74
  SMB shares......75
    Configuring SMB shares with the GUI......76
    Configuring SMB signing......80
    Managing SMB shares with the GUI......81
    Configuring and managing SMB shares with the CLI......82
    Managing SMB shares with Microsoft Management Console......83
  Linux static user mapping with Active Directory......87
    Configuring Active Directory......88
    Assigning attributes......89
    Synchronizing Active Directory 2008 with the NTP used by the cluster......90
  Consolidating SMB servers with common share names......90
  SMB clients......92
    Viewing quota information......92
    Differences in locking behavior......92
    SMB shadow copy......92
  Permissions in a cross-protocol SMB environment......94

    How the SMB server handles UIDs and GIDs......94
    Permissions, UIDs/GIDs, and ACLs......95
    Changing the way SMB inherits permissions on files accessed from Linux applications......96
  Troubleshooting SMB......96
8 Using FTP......98
  Best practices for configuring FTP......98
  Managing FTP from the GUI......98
    Configuring FTP......98
    Managing the FTP configuration......102
  Managing FTP from the CLI......103
    Configuring FTP......103
    Managing the FTP configuration......103
  The vsftpd service......104
  Starting or stopping the FTP service manually......104
  Accessing shares......105
    FTP and FTPS commands for anonymous shares......105
    FTP and FTPS commands for non-anonymous shares......106
    FTP and FTPS commands for Fusion Manager......107
9 Using HTTP......108
  HTTP share types......108
  Process checklist for creating HTTP shares......108
  Best practices for configuring HTTP......109
  Managing HTTP from the GUI......109
    Configuring HTTP shares......109
    Managing the HTTP configuration......116
    Tuning the socket read block size and file write block size......116
  Managing HTTP from the CLI......117
    Configuring HTTP......117
    Managing the HTTP configuration......118
  Starting or stopping the HTTP service manually......118
  Accessing shares......119
  Configuring Windows clients to access HTTP WebDAV shares......120
  Troubleshooting HTTP......121
10 Managing SSL certificates......123
  Creating an SSL certificate......123
  Adding a certificate to the cluster......125
  Exporting a certificate......126
  Deleting a certificate......126
11 Using remote replication......127
  Overview......127
    Continuous or run-once replication modes......127
    Using intercluster replications......128
    Using intracluster replications......129
    File system snapshot replication......129
  Configuring the target export for replication to a remote cluster......129
    GUI procedure......130
    CLI procedure......131
  Configuring and managing replication tasks on the GUI......133
    Viewing replication tasks......133
    Starting a replication task......135
    Pausing or resuming a replication task......138
    Stopping a replication task......138
  Configuring and managing replication tasks from the CLI......139

    Starting a remote replication task to a remote cluster......139
    Starting an intracluster remote replication task......140
    Starting a run-once directory replication task......140
    Stopping a remote replication task......140
    Pausing a remote replication task......140
    Resuming a remote replication task......140
    Querying remote replication tasks......140
  Replicating WORM/retained files......141
  Configuring remote failover/failback......141
  Troubleshooting remote replication......142
12 Managing data retention......143
  Overview......143
    Data retention......143
    Data validation scans......144
  Enabling file systems for data retention......145
  Viewing the retention profile for a file system......148
  Changing the retention profile for a file system......149
  Managing WORM and retained files......149
    Creating WORM and WORM-retained files......149
    Viewing the retention information for a file......150
    File administration......150
  Running data validation scans......153
    Scheduling a validation scan......153
    Starting an on-demand validation scan......154
    Viewing, stopping, or pausing a scan......155
    Viewing validation scan results......155
    Viewing and comparing hash sums for a file......156
    Handling validation scan errors......156
  Creating data retention reports......157
    Generating and managing data retention reports......158
    Generating data retention reports from the CLI......159
  Using hard links with WORM files......160
  Using remote replication......160
  Backup support for data retention......160
  Troubleshooting data retention......160
13 Express Query......162
  Managing the metadata service......162
  Backing up and restoring file systems with Express Query data......163
  Saving and importing file system metadata......164
  Metadata and continuous remote replication......166
  Metadata and synchronized server times......166
  Managing auditing......167
    Audit log......167
    Audit log reports......168
  StoreAll REST API......170
    Dual nature of REST API shares......170
    Component overview......171
    File content transfer......175
    Custom metadata assignment......177
    Metadata queries......179
    Retention properties assignment......189
    HTTP Status Codes......191

14 Configuring Antivirus support......192
  Adding or removing external virus scan engines......193
  Enabling or disabling Antivirus on IBRIX file systems......194
  Updating Antivirus definitions......194
  Configuring Antivirus settings......195
  Managing Antivirus scans......199
    Starting or scheduling Antivirus scans......199
    Viewing, pausing, resuming, or stopping Antivirus scan tasks......201
    Viewing Antivirus statistics......202
  Antivirus quarantines and software snapshots......202
15 Creating IBRIX software snapshots......204
  File system limits for snap trees and snapshots......204
  Configuring snapshot directory trees and schedules......204
    Modifying a snapshot schedule......206
  Managing software snapshots......206
    Taking an on-demand snapshot......206
    Determining space used by snapshots......207
    Accessing snapshot directories......207
    Restoring files from snapshots......208
    Deleting snapshots......209
    Moving files between snap trees......212
  Backing up snapshots......212
16 Creating block snapshots......213
  Setting up snapshots......213
    Preparing the snapshot partition......213
    Registering for snapshots......214
    Discovering LUNs in the array......214
    Reviewing snapshot storage allocation......214
  Automated block snapshots......214
    Creating automated snapshots using the GUI......215
    Creating an automated snapshot scheme from the CLI......218
    Other automated snapshot procedures......219
  Managing block snapshots......220
    Creating an on-demand snapshot......220
    Mounting or unmounting a snapshot......220
    Recovering system resources on snapshot failure......220
    Deleting snapshots......220
    Viewing snapshot information......221
  Accessing snapshot file systems......222
  Troubleshooting block snapshots......224
17 Using data tiering......225
  Creating and managing data tiers......225
    Viewing tier assignments and managing segments......230
    Viewing data tiering rules......231
    Running a migration task......232
  Configuring tiers and migrating data using the CLI......233
    Changing the tiering configuration with the CLI......236
  Writing tiering rules......236
    Rule attributes......236
    Operators and date/time qualifiers......237
    Rule keywords......238
    Migration rule examples......238
    Ambiguous rules......239

18 Using file allocation......240
  Overview......240
    File allocation policies......240
    How file allocation settings are evaluated......241
    When file allocation settings take effect on the 9000 client......242
    Using CLI commands for file allocation......242
  Setting file and directory allocation policies......242
    Setting file and directory allocation policies from the CLI......243
  Setting segment preferences......243
    Creating a pool of preferred segments from the CLI......244
    Restoring the default segment preference......244
  Tuning allocation policy settings......245
  Listing allocation policies......245
19 Support and other resources......247
  Contacting HP......247
  Related information......247
  HP websites......247
  Subscription service......247
20 Documentation feedback......248
Glossary......249
Index......251

1 Using IBRIX software file systems

File system operations

The following diagram highlights the operating principles of the IBRIX file system.

The topology in the diagram reflects the architecture of the HP 9320, which uses a building block of server pairs (known as couplets) with SAS-attached storage. In the diagram:

• There are four file serving nodes, SS1–SS4. These nodes are also called segment servers.
• SS1 and SS2 share access to segments 1–4 through SAS connections to a shared storage array.
• SS3 and SS4 share access to segments 5–8 through SAS connections to a shared storage array.
• One client is accessing the namespace using NAS protocols.
• One client is using the proprietary IBRIX client.

The following steps correspond to the numbering in the diagram:

1. The “namespace” of the file system is a collection of segments. Each segment is simply a repository for files and directories with no implicit namespace relationships among them.

(Specifically, a segment need not be a complete, rooted directory tree.) Segments can be any size, and different segments can be different sizes.

2. The location of files and directories within particular segments in the file space is independent of their respective and relative locations in the namespace. For example, a directory (Dir1) can be located on one segment, while the files contained in that directory (File1 and File2) are resident on other segments. The selection of segments for placing files and directories is done dynamically when the file or directory is created, as determined by an allocation policy. The allocation policy is set by the system administrator in accordance with the anticipated access patterns and specific criteria relevant to the installation (such as performance and manageability). The allocation policy can be changed at any time, even while the file system is mounted and in use. Files can be redistributed across segments using a rebalancing utility. For example, rebalancing can be used when some segments are too full while others have free capacity, or when files need to be distributed across new segments.

3. Segment servers are responsible for managing individual segments of the file system. Each segment is assigned to one segment server, and each server may own multiple segments, as shown by the color coding in the diagram. Segment ownership can be migrated between servers with direct access to the storage volume while the file system is mounted. For example, Seg1 can be migrated between SS1 and SS2, but not to SS3 or SS4. Additional servers can be added to the system dynamically to meet growing performance needs, without adding more capacity, by distributing the ownership of existing segments for proper load balancing and utilization of all servers.
Conversely, additional capacity can be added to the file system while it is in active use, without adding more servers; ownership of the new segments is distributed among existing servers. Servers can be configured with failover protection, with other servers designated as standby servers that automatically take control of a server's segments if a failure occurs.

4. Clients run the applications that use the file system. Clients can access the file system either as a locally mounted cluster file system using the IBRIX Client or through standard network attached storage (NAS) protocols such as NFS and Server Message Block (SMB).

5. Use of the IBRIX Client on a client system has some significant advantages over the NAS approach: the IBRIX Client driver is aware of the segmented architecture of the file system and, based on the file or directory being accessed, can route requests directly to the correct segment server, yielding balanced resource utilization and high performance. However, the IBRIX Client is available only for a limited range of operating systems.

6. NAS protocols such as NFS and SMB offer the benefits of multi-platform support and low cost of administration of client software, as the client drivers for these protocols are generally available with the base operating system. When using NAS protocols, a client must mount the file system from one (or more) of the segment servers. As shown in the diagram, all requests are sent to the server from which the share is mounted, which then performs the required routing.

7. Any segment server in the namespace can access any segment. There are three cases:
   a. The selected segment is owned by the segment server initiating the operation (for example, SS1 accessing Seg1).
   b. The selected segment is owned by another segment server but is directly accessible at the block level by the segment server initiating the operation (for example, SS1 accessing Seg3).
   c.
The selected segment is owned by another segment server and is not directly accessible by the segment server initiating the operation (for example, SS1 accessing Seg5).

Each case is handled differently. The data paths are shown in heavy red broken lines in the diagram:
   a. The segment server initiating the operation services the read or write request to the local segment.
   b. In this case, reads and writes take different routes:

      1) The segment server initiating the operation can read files directly from the segment across the SAN; this is called a SAN READ.
      2) The segment server initiating the operation routes writes over the IP network to the segment server owning the segment. That server then writes the data to the segment.
   c. All reads and writes must be routed over the IP network between the segment servers.

8. Step 7 assumed that the server had to go to a segment to read a file. However, every segment server that reads a file keeps a copy cached in its memory, regardless of which segment it was read from (in the diagram, two servers have cached copies of File 1). The cached copies are used to service local read requests for the file until the copy is invalidated, for example, because the original file has been changed. The file system keeps track of which servers have cached copies of a file and manages cache coherency using delegations, which are IBRIX file system metadata structures used to track cached copies of data and metadata.

File system building blocks

A file system is created from building blocks. The first block comprises the underlying physical volumes, which are combined into volume groups. Segments (logical volumes) are created from the volume groups. The built-in volume manager handles all space allocation considerations involved in file system creation.
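The read and write routing rules in steps 7 and 8 above can be summarized in a small toy model. This sketch is purely illustrative: the class, function names, and the particular segment-ownership assignments are invented here for the example and are not IBRIX code or configuration.

```python
# Illustrative toy model of the IBRIX read/write routing cases (steps 7-8).
# All names and ownership assignments below are invented for illustration.

class Server:
    def __init__(self, name, san_segments):
        self.name = name
        self.san_segments = set(san_segments)  # segments reachable over the SAN

# Each segment is owned by exactly one segment server (assignment invented).
OWNER = {"Seg1": "SS1", "Seg3": "SS2", "Seg5": "SS3"}

def route_read(server, segment):
    """How a read of `segment` initiated by `server` is serviced."""
    if OWNER[segment] == server.name:
        return "local read"              # case a: server owns the segment
    if segment in server.san_segments:
        return "SAN READ"                # case b: direct block-level access
    return "routed over IP network"      # case c: routed to the owning server

def route_write(server, segment):
    """Writes to a segment the server does not own go to the owner over IP."""
    if OWNER[segment] == server.name:
        return "local write"
    return f"routed over IP to {OWNER[segment]}"

ss1 = Server("SS1", san_segments={"Seg1", "Seg3"})
print(route_read(ss1, "Seg1"))   # local read
print(route_read(ss1, "Seg3"))   # SAN READ
print(route_read(ss1, "Seg5"))   # routed over IP network
print(route_write(ss1, "Seg3"))  # routed over IP to SS2
```

Note that only reads can take the SAN shortcut in case b; writes are always routed to the owning server, which is what keeps each segment's on-disk state under a single server's control.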

Configuring file systems

You can configure your file systems to use the following features:

• Quotas. This feature allows you to assign quotas to individual users or groups, or to a directory tree. Individual quotas limit the amount of storage or the number of files that a user or group can use in a file system. Directory tree quotas limit the amount of storage and the number of files that can be created on a file system located at a specific directory tree. See “Configuring quotas” (page 25).
• Remote replication. This feature provides a method to replicate changes in a source file system on one cluster to a target file system on either the same cluster or a second cluster. See “Using remote replication” (page 127).

• Data retention and validation. Data retention ensures that files cannot be modified or deleted for a specific retention period. Data validation scans can be used to ensure that files remain unchanged. See “Managing data retention” (page 143).
• Antivirus support. This feature is used with supported Antivirus software, allowing you to scan files on an IBRIX file system. See “Configuring Antivirus support” (page 192).
• IBRIX software snapshots. This feature allows you to capture a point-in-time copy of a file system or directory for online backup purposes and to simplify recovery of files from accidental deletion. Users can access the file system or directory as it appeared at the instant of the snapshot. See “Creating IBRIX software snapshots” (page 204).
• Block snapshots. This feature uses the array capabilities to capture a point-in-time copy of a file system for online backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates all file system entities at the time of capture and is managed exactly like any other file system. See “Creating block snapshots” (page 213).
• Data tiering. This feature allows you to set a preferred tier where newly created files will be stored. You can then create a tiering policy to move files from initial storage, based on file attributes such as modification time, access time, file size, or file type. See “Using data tiering” (page 225).
• File allocation. This feature allocates new files and directories to segments according to the allocation policy and segment preferences that are in effect for a client. An allocation policy is an algorithm that determines the segments that are selected when clients write to a file system. See “Using file allocation” (page 240).

Accessing file systems

Clients can use the following standard NAS protocols to access file system data:

• NFS. See “Using NFS” (page 51) for more information.
• SMB.
See “Using SMB” (page 73) for more information.
• FTP. See “Using FTP” (page 98) for more information.
• HTTP. See “Using HTTP” (page 108) for more information.

You can also use IBRIX 9000 clients to access file systems. Typically, these clients are installed during the initial system setup. See the HP IBRIX 9000 Storage Installation Guide for more information.

2 Creating and mounting file systems

This chapter describes how to create file systems and mount or unmount them.

Creating a file system

You can create a file system using the New Filesystem Wizard provided with the GUI, or you can use CLI commands. The New Filesystem Wizard also allows you to create an NFS export or an SMB share for the file system.

Using the New Filesystem Wizard

To start the wizard, click New on the Filesystems top panel. The wizard includes several steps and a summary, starting with selecting the storage for the file system.

NOTE: For details about the prompts for each step of the wizard, see the GUI online help. On the Select Storage dialog box, select the storage that will be used for the file system.

Configure Options dialog box. Enter a name for the file system, and specify the appropriate configuration options.

WORM/Data Retention dialog box. If data retention will be used on the file system, enable it and set the retention policy. See “Managing data retention” (page 143) for more information.

You can configure the following:

• Default retention period. This period determines whether you can manage WORM (non-retained) files as well as WORM-retained files. (WORM (non-retained) files can be deleted at any time; WORM-retained files can be deleted only after the file's retention period has expired.) To manage only WORM-retained files, set the default retention period to a non-zero value. WORM-retained files then use this period by default; however, you can assign a different retention period if desired. To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention Period. The default retention period is then set to 0 seconds. When you make a WORM file retained, you will need to assign a retention period to the file.
• Autocommit period. When the autocommit period is set, files become WORM or WORM-retained if they are not changed during the period. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time period. The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period.
• Data validation. Select this option to schedule periodic scans on the file system. Use the default schedule, or click Modify to open the Data Validation Scan Schedule dialog box and configure your own schedule.

• Report Data Generation. Select this option if you want to create data retention reports. Use the default schedule, or click Modify to open the Report Data Generation Schedule dialog box and configure your own schedule.

• Express Query. Check this option to enable StoreAll Express Query on the file system. Express Query is a database used to record metadata state changes occurring on the file system.

Auditing Options dialog box. If you enabled Express Query on the WORM/Data Retention dialog box, you can also enable auditing and select the events that you want to log.
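The interaction between the default retention period and the autocommit period described above can be reduced to a small decision sketch. This is an illustration only: the function names and the second-based units are invented for the example and are not part of the product.

```python
# Illustrative sketch of the WORM/autocommit rules described above.
# Function names and units are invented; this is not IBRIX code.

AUTOCOMMIT_MIN = 5 * 60               # five minutes, per the text above
AUTOCOMMIT_MAX = 365 * 24 * 60 * 60   # one year

def autocommit_period_is_valid(seconds):
    """The autocommit period must be between five minutes and one year."""
    return AUTOCOMMIT_MIN <= seconds <= AUTOCOMMIT_MAX

def state_after_autocommit(default_retention_seconds):
    """State an unchanged file reaches once the autocommit period elapses."""
    if default_retention_seconds == 0:
        return "WORM"            # deletable at any time
    return "WORM-retained"       # deletable only after retention expires

print(state_after_autocommit(0))        # WORM
print(state_after_autocommit(5 * 60))   # WORM-retained
print(autocommit_period_is_valid(60))   # False: below the five-minute minimum
```

The sketch makes the key interaction explicit: the autocommit period decides *when* an unchanged file is committed, while the default retention period decides *which* state it is committed to.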

Default File Shares dialog box. Use this dialog box to create an NFS export and/or an SMB share at the root of the file system. The default settings are used. See “Using NFS” (page 51) and “Using SMB” (page 73) for more information.

Review the Summary to ensure that the file system is configured properly. If necessary, you can return to a dialog box and make any corrections.

Configuring additional file system options

The New Filesystem Wizard creates the file system with the default settings for several options. You can change these settings on the Modify Filesystem Properties dialog box, and can also configure data retention, data tiering, and file allocation. To open the dialog box, select the file system on the Filesystems panel, select Summary from the lower Navigator, and then click Modify on the Summary panel.

The General tab allows you to enable or disable quotas, Export Control, and 32-bit compatibility mode on the file system. When Export Control is enabled on a file system, IBRIX 9000 clients have no access to the file system by default. Instead, the system administrator grants clients access by executing the ibrix_mount command. Export Control affects only NFS access for IBRIX 9000 clients. Enabling Export Control does not affect access from a file serving node to a file system; file serving nodes always have RW access.

By default, file systems are created in 64-bit mode. If clients need to run a 32-bit application, you can enable 32-bit compatibility mode. This option is applied to the file system at mount time and can be enabled or disabled as necessary.

The Data Retention tab allows you to change the data retention configuration. The file system must be unmounted. See “Configuring data retention on existing file systems” (page 147) for more information.

NOTE: Data retention cannot be enabled on a file system created on IBRIX software 5.6 or earlier versions until the file system is upgraded.

The Allocation, Segment Preference, and Host Allocation tabs are used to modify file allocation policies and to specify segment preferences for file serving nodes and IBRIX 9000 clients. See “Using file allocation” (page 240) for more information.

Creating a file system using the CLI

The ibrix_fs command is used to create a file system. It can be used in the following ways:

• Create a file system with the specified segments (segments are logical volumes):

  ibrix_fs -c -f FSNAME -s LVLIST [-t TIERNAME] [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]

• Create a file system and assign specific segments to specific file serving nodes:

  ibrix_fs -c -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,... [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]

• Create a file system from physical volumes in a single step:

  ibrix_fs -c -f FSNAME -p PVLIST [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]

In the commands, the -t option specifies a tier. TIERNAME can be any alphanumeric, case-sensitive text string. Tier assignment is not affected by other options that can be set with the ibrix_fs command.

NOTE: A tier is created whenever a segment is assigned to it. Be careful to spell the name of the tier correctly when you add segments to an existing tier. If you make an error in the name, a new tier is created with the incorrect tier name, and no error is reported.

Options for data retention

Feature          Option
Data retention   -o "retenMode=,retenDefPeriod=,retenMinPeriod=,retenMaxPeriod=,retenAutoCommitPeriod="
Express Query    -T -E
Auditing         -A -oa OPTION1=VALUE1[,OPTION2=VALUE2,...]

The following example enables data retention, Express Query, and auditing, with all events being audited:

ibrix_fs -o "retenMode=Enterprise,retenDefPeriod=5m,retenMinPeriod=2,retenMaxPeriod=30y,retenAutoCommitPeriod=1d" -T -E -A -oa audit_mode=on,all=on -c -f ifs1 -s ilv_[1-4] -a

Creating a file system manually from physical volumes

This procedure is equivalent to using ibrix_fs to create a file system from physical volumes in a single step. Instead of a single command, you build the file system components individually:
1. Discover the physical volumes in the system. Use the ibrix_pv command.
2. Create volume groups from the discovered physical volumes. Use the ibrix_vg command.
3. Create logical volumes (also called segments) from the volume groups. Use the ibrix_lv command.
4. Create the file system from the new logical volumes. Use the ibrix_fs command.
See the HP IBRIX 9000 Storage CLI Reference Guide for details about these commands.

File limit for directories

The maximum number of files in a directory depends on the length of the file names and on the names themselves. The maximum size of a directory is approximately 4 GB (double indirect blocks). An average file name length of eight characters allows about 12 million entries. However, because directories are hashed, a directory is unlikely to hold this number of entries. Files with similar names are hashed into the same bucket. If that bucket fills up, no further file can be created in it, even if free space is available elsewhere in the directory. Creating a file with a different name may still succeed, because the new name may hash to a bucket that is not full.

Managing mountpoints and mount/unmount operations

GUI procedures

When you use the New Filesystem Wizard to create a file system, you can specify a name for the mountpoint and indicate whether the file system should be mounted after it is created. The wizard creates the mountpoint if necessary. The Filesystems panel shows the file systems created on the cluster. To view the mountpoint information for a file system, select the file system on the Filesystems panel, and click Mountpoints in the lower Navigator. The Mountpoints panel shows the hosts that have

mounted the file system, the name of the mountpoint, the access (RW or RO) allowed to the host, and whether the file system is mounted.

To mount or remount a file system, select it on the Filesystems panel and click Mount. You can select several mount options on the Mount Filesystem dialog box. To remount the file system, click Remount.

The available mount options are: • atime: Update the inode access time when a file is accessed

NOTE: If you do not explicitly set atime as an option, noatime is set instead and the inode access time is not updated when the file is accessed. There is no separate option for setting noatime explicitly.

• nodiratime: Do not update the directory inode access time when the directory is accessed

• nodquotstatfs: Disable file system reporting based on directory tree quota limits
• path: For IBRIX 9000 clients only, mount on the specified subdirectory path of the file system instead of the root.
• remount: Remount a file system without taking it offline. Use this option to change the current mount options on a file system.
You can also view mountpoint information for a particular server. Select that server on the Servers panel, and select Mountpoints from the lower Navigator. To delete a mountpoint, select that mountpoint and click Delete.

CLI procedures

The CLI commands are executed immediately on file serving nodes. For IBRIX 9000 clients, the command intention is stored in the active Fusion Manager. When IBRIX software services start on a client, the client queries the active Fusion Manager for any stored commands. If the services are already running, you can force the client to query the Fusion Manager by executing either ibrix_client or ibrix_lwmount -a on the client, or by rebooting the client.
If you have configured hostgroups for your IBRIX 9000 clients, you can apply a command to a specific hostgroup. For information about creating hostgroups, see the administration guide for your system.

Creating mountpoints

Mountpoints must exist before a file system can be mounted. To create a mountpoint on file serving nodes and IBRIX 9000 clients, enter the following command:
ibrix_mountpoint -c [-h HOSTLIST] -m MOUNTPOINT
To create a mountpoint on a hostgroup, enter the following command:
ibrix_mountpoint -c -g GROUPLIST -m MOUNTPOINT
For information about mountpoint options, see the "ibrix_mountpoint" section in the HP IBRIX 9000 CLI Reference Guide.
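As a concrete illustration of the two syntaxes above, the sketch below assembles (but does not execute) an ibrix_mountpoint command for a pair of named nodes and for a hostgroup. The node names (node1, node2), the hostgroup name (linuxclients), and the mountpoint path are illustrative values, not defaults:

```shell
# Dry-run helpers: print the ibrix_mountpoint command that would be run.
mountpoint_hosts_cmd() { echo "ibrix_mountpoint -c -h $1 -m $2"; }
mountpoint_group_cmd() { echo "ibrix_mountpoint -c -g $1 -m $2"; }

mountpoint_hosts_cmd node1,node2 /mnt/ifs1       # two named file serving nodes
mountpoint_group_cmd linuxclients /mnt/ifs1      # an existing hostgroup
```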

Deleting mountpoints

Before deleting mountpoints, verify that no file systems are mounted on them. To delete a mountpoint from file serving nodes and IBRIX 9000 clients, use the following command:
ibrix_mountpoint -d [-h HOSTLIST] -m MOUNTPOINT

To delete a mountpoint from specific hostgroups, use the following command:
ibrix_mountpoint -d -g GROUPLIST -m MOUNTPOINT

Viewing mountpoint information

To view mounted file systems and their mountpoints on all nodes, use the following command:
ibrix_mountpoint -l

Mounting a file system

File system mounts are managed with the ibrix_mount command. The command options and the default file system access allowed for IBRIX 9000 clients depend on whether the optional Export Control feature has been enabled on the file system (see “Using Export Control” (page 24) for more information). This section assumes that Export Control is not enabled, which is the default.

NOTE: A file system must be mounted on the file serving node that owns the root segment (that is, segment 1) before it can be mounted on any other host. IBRIX software automatically mounts a file system on the root segment when you mount it on all file serving nodes in the cluster. The mountpoints must already exist.
Mount a file system on file serving nodes and IBRIX 9000 clients:
ibrix_mount -f FSNAME [-o {RW|RO}] [-O MOUNTOPTIONS] -h HOSTLIST -m MOUNTPOINT
Mount a file system on a hostgroup:
ibrix_mount -f FSNAME [-o {RW|RO}] -g GROUP -m MOUNTPOINT
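The mount syntax can be filled in as follows. The sketch prints the command it would run rather than executing it; the file system name (ifs1), node names (node1, node2), and mountpoint are illustrative:

```shell
# Dry-run helper: print the ibrix_mount command that would be run.
mount_cmd() { echo "ibrix_mount -f $1 -o $2 -h $3 -m $4"; }

# Mount ifs1 read-write on two file serving nodes.
mount_cmd ifs1 RW node1,node2 /mnt/ifs1
```

Passing RO instead of RW in the second argument gives read-only access; omitting -o entirely on a real system defaults to Read Write, as the note below states.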

NOTE: If you do not include the -o parameter, the default access option for the mounted file system is Read Write.

Unmounting a file system

Use the following commands to unmount a file system.

NOTE: Be sure to unmount the root segment last. Attempting to unmount it while other segments are still mounted will fail. If the file system was exported using NFS, you must unexport it before you can unmount it (see “Exporting a file system” (page 51)).
To unmount a file system from one or more file serving nodes, IBRIX 9000 clients, or hostgroups:
ibrix_umount -f FSNAME [-h HOSTLIST | -g GROUPLIST]
To unmount a file system from a specific mountpoint on a file serving node, IBRIX 9000 client, or hostgroup:
ibrix_umount -m MOUNTPOINT [-h HOSTLIST | -g GROUPLIST]
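The ordering constraint in the note above (root segment last) can be sketched as two separate unmount commands. The helper prints the commands rather than running them, and the node names are illustrative; node1 is assumed here to own segment 1:

```shell
# Dry-run helper: print the ibrix_umount command that would be run.
umount_cmd() { echo "ibrix_umount -f $1 -h $2"; }

umount_cmd ifs1 node2,node3   # unmount the non-root-segment nodes first
umount_cmd ifs1 node1         # unmount the root-segment owner last
```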

Enabling or disabling 32-bit compatibility mode

If clients are running 32-bit applications, you can enable 32-bit compatibility mode. This mode is applied to the file system at mount time. You can enable or disable 32-bit compatibility on the Modify Filesystems Properties dialog box, and can also use the following commands.
Enable 32-bit compatibility mode:
ibrix_fs_tune -c -e -f FSNAME
Disable 32-bit compatibility mode:
ibrix_fs_tune -c -d -f FSNAME

Mounting and unmounting file systems locally on IBRIX 9000 clients

On both Linux and Windows IBRIX 9000 clients, you can locally override a mount. For example, if the Fusion Manager configuration database has a file system marked as mounted for a particular client, that client can locally unmount the file system.

Linux IBRIX 9000 clients

To mount a file system locally, use the following command on the Linux 9000 client. A Fusion Manager name (fmname) is required only if the 9000 client is registered with multiple Fusion Managers:
ibrix_lwmount -f [fmname:]FSNAME -m MOUNTPOINT [-o OPTIONS]
To unmount a file system locally, use one of the following commands on the Linux 9000 client. The first command detaches the specified file system from the client; the second detaches the file system mounted on the specified mountpoint:
ibrix_lwumount -f [fmname:]FSNAME
ibrix_lwumount -m MOUNTPOINT
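A filled-in local mount/unmount pair might look like the following. The sketch prints the commands rather than running them; the Fusion Manager name (fm1), file system (ifs1), and mountpoint are illustrative, and the fm1: prefix is only needed when the client is registered with multiple Fusion Managers:

```shell
# Dry-run helpers: print the client-local mount/unmount commands.
lwmount_cmd()  { echo "ibrix_lwmount -f $1 -m $2"; }
lwumount_cmd() { echo "ibrix_lwumount -m $1"; }

lwmount_cmd fm1:ifs1 /mnt/ifs1   # local mount, Fusion Manager qualified
lwumount_cmd /mnt/ifs1           # local unmount by mountpoint
```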

Windows IBRIX 9000 clients

Use the Windows 9000 client GUI to mount file systems locally. Click the Mount tab on the GUI and select the cluster name from the list (the cluster name is the Fusion Manager name). Then, enter the name of the file system, select a drive, and click Mount.
If you are using Remote Desktop to access the client and the drive letter is not displayed, log out and log back in. This is a known limitation of Windows Terminal Services when exposing new drives.
To unmount a file system on the Windows 9000 client GUI, click the Umount tab, select the file system, and then click Umount.

Limiting file system access for IBRIX 9000 clients

By default, all IBRIX 9000 clients can mount a file system after a mountpoint has been created. To limit access to specific IBRIX 9000 clients, create an access entry. When an access entry is in place for a file system (or a subdirectory of the file system), the file system enters secure mode, and mount access is restricted to the clients specified in the access entry. All other clients are denied mount access.
Select the file system on the Filesystems top panel, and then select Client Exports in the lower Navigator. On the Create Client Export(s) dialog box, select the clients or hostgroups that will be allowed access to the file system or a subdirectory of the file system.

To remove a client access entry, select the affected file system on the GUI, and then select Client Exports from the lower Navigator. Select the access entry from the Client Exports display, and click Delete.
On the CLI, use the ibrix_exportfs command to create an access entry:
ibrix_exportfs -c -f FSNAME -p CLIENT1:/PATHNAME,CLIENT2:/PATHNAME,...

To see all access entries that have been created, use the following command:
ibrix_exportfs -c -l
To remove an access entry, use the following command:
ibrix_exportfs -c -U -f FSNAME -p CLIENT1:/PATHNAME,CLIENT2:/PATHNAME,...

Using Export Control

When Export Control is enabled on a file system, IBRIX 9000 clients have no access to the file system by default. Instead, the system administrator grants clients access by executing the ibrix_mount command. Export Control affects only NFS access for IBRIX 9000 clients. Enabling Export Control does not affect access from a file serving node to a file system; file serving nodes always have RW access.
To determine whether Export Control is enabled, run ibrix_fs -i or ibrix_fs -l. The output indicates whether Export Control is enabled.
To enable Export Control, include the -C option in the ibrix_fs command:
ibrix_fs -C -E -f FSNAME
To disable Export Control, execute the ibrix_fs command with the -C and -D options:
ibrix_fs -C -D -f FSNAME
To mount a file system that has Export Control enabled, include the ibrix_mount -o {RW|RO} option to specify whether clients have RO or RW access to the file system. The default is RO. In addition, when specifying a hostgroup, the root user can be limited to RO access by adding the root_ro parameter.
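Putting the Export Control workflow together: enable the feature on the file system, then grant access explicitly at mount time. The sketch prints the commands rather than running them; the file system (ifs1), client name (client1), and mountpoint are illustrative:

```shell
# Dry-run helpers: print the Export Control and mount commands.
export_control_cmd() { echo "ibrix_fs -C -E -f $1"; }
ec_mount_cmd()       { echo "ibrix_mount -f $1 -o $2 -h $3 -m $4"; }

export_control_cmd ifs1                      # enable Export Control on ifs1
ec_mount_cmd ifs1 RW client1 /mnt/ifs1       # grant this client RW (default would be RO)
```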

3 Configuring quotas

Quotas can be assigned to individual users or groups, or to a directory tree. Individual quotas limit the amount of storage or the number of files that a user or group can use in a file system. Directory tree quotas limit the amount of storage and the number of files that can be created in a specific directory tree of a file system. Note the following:
• You can assign quotas to a user, group, or directory on the GUI or from the CLI. You can also import quota information from a file.
• If a user has a user quota and a group quota for the same file system, the first quota reached takes precedence.
• Nested directory quotas are not supported. You cannot configure quotas on a subdirectory differently than on the parent directory.
• The existing quota configuration can be exported to a file at any time.

NOTE: HP recommends that you export the quota configuration and save the resulting file whenever you update quotas on your cluster.

How quotas work Quotas can be set for users, groups, or directories in a file system. A quota is specified by hard and soft storage limits for both the megabytes of storage and the number of files allotted to the user, group, or directory. The hard limit is the maximum storage (in terms of file size and number of files) allotted to the user, group, or directory. The soft limit specifies the number of megabytes or files that, when reached, starts a countdown timer. If the megabytes of storage or number of files are not reduced below the soft limit, the timer runs until either the hard storage limit is reached or the grace period for the timer elapses. (The default grace period is seven days.) When the timer stops, the user, group, or directory for which the quota was set cannot store any more data, and the system issues quota exceeded messages at each write attempt.

NOTE: Quota statistics are updated at one-minute intervals. At each update, the file and storage usage for each quota-enabled user, group, or directory tree is queried, and the result is distributed to all file serving nodes. Users or groups can temporarily exceed their quota if the allocation policy in effect for a file system causes their data to be written to different file serving nodes during the statistics update interval. In this situation, the storage usage visible to each file serving node can be at or below the quota limit while the aggregate usage exceeds the limit. There is a delay of several minutes between the time a command to update quotas is executed and when the results are displayed by the ibrix_edquota -l command. This is normal behavior.

Enabling quotas on a file system and setting grace periods

Before you can set quota limits, quotas must be enabled on the file system. You can enable quotas on a file system at any time. To view the current quotas configuration on the GUI, select the file system and then select Quotas from the lower Navigator. The Quotas Summary panel specifies whether quotas are enabled and lists the grace periods for blocks and inodes.

To change the quotas configuration, click Modify on the Quota Summary panel.

On the CLI, run the following command to enable quotas on an existing file system:
ibrix_fs -q -E -f FSNAME

Setting quotas for users, groups, and directories

Before configuring quotas, the quota feature must be enabled on the file system and the file system must be mounted.

NOTE: For the purpose of setting quotas, no UID or GID can exceed 2,147,483,647. Setting user quotas to zero removes the quotas.

The Quota Management Wizard can be used to create, modify, or delete quotas for users, groups, and directories in the selected file system. Click Quotas Wizard on the Quota Summary panel to open the wizard. The Welcome dialog box describes the options available in the wizard.

The User Quotas dialog box is used to create, modify, or delete quotas for users. To add a user quota, enter the required information and click Add. Users having quotas are listed in the table at the bottom of the dialog box. To modify quotas for a user, check the box preceding that user. You can then adjust the quotas as needed. To delete quotas for a user, check the box and click Delete.

The Group Quotas dialog box is used to create, modify, or delete quotas for groups. To add a group quota, enter the required information and click Add. The new quota applies to all users in the group. Groups having quotas are listed in the table at the bottom of the dialog box. To modify

quotas for a group, check the box preceding that group. You can then adjust the quotas as needed. To delete quotas for a group, check the box and click Delete.

The Directory Quotas dialog box is used to create, modify, or delete quotas for directories. To add a directory quota, enter the required information and click Add. The Name (Alias) is a unique identifier for the quota, and cannot include commas. The new quota applies to all users and groups storing data in the directory. Directories having quotas are listed in the table at the bottom of the dialog box. To modify quotas for a directory, check the box preceding that directory. You can then adjust the quotas as needed. To delete quotas for a directory, check the box and click Delete.

Configuring quotas from the CLI

In the commands, use -M SOFT_MEGABYTES and -m HARD_MEGABYTES to specify soft and hard limits for the megabytes of storage. Use -I SOFT_FILES and -i HARD_FILES to specify soft and hard limits for the number of files allowed.
Create a user or group quota:
User quota:
ibrix_edquota -s -u "USER" -f FSNAME [-M SOFT_MEGABYTES] [-m HARD_MEGABYTES] [-I SOFT_FILES] [-i HARD_FILES]
Group quota:
ibrix_edquota -s -g "GROUP" -f FSNAME [-M SOFT_MEGABYTES] [-m HARD_MEGABYTES] [-I SOFT_FILES] [-i HARD_FILES]
Enclose the user or group name in single or double quotation marks.
Create a directory quota:
ibrix_edquota -s -d NAME -p PATH -f FSNAME -M SOFT_MEGABYTES -m HARD_MEGABYTES -I SOFT_FILES -i HARD_FILES
The -p PATH option specifies the pathname of the directory tree. If the pathname includes a space, enclose the portion of the pathname that includes the space in single quotation marks, and enclose the entire pathname in double quotation marks. For example:
-p "/fs48/data/'QUOTA 4'"
The -d NAME option specifies a unique name for the directory tree quota. The name cannot contain a comma (,) or colon (:) character.
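Note that the CLI takes limits in megabytes while the quotas file (described under "Format of the quotas file") records them in 1K blocks, so 1 MB on the CLI corresponds to 1024 blocks in an exported file. The sketch below shows that conversion and a filled-in user quota command; the user name (smith), file system (ifs1), and the 800 MB soft / 1000 MB hard limits are illustrative:

```shell
# CLI limits are in MB; the quotas file stores 1K blocks (1 MB = 1024 blocks).
mb_to_blocks() { echo $(( $1 * 1024 )); }

# Dry-run helper: print a user quota command with soft/hard MB limits.
user_quota_cmd() { echo "ibrix_edquota -s -u \"$1\" -f $2 -M $3 -m $4"; }

mb_to_blocks 2                        # a 2 MB limit appears as 2048 blocks
user_quota_cmd smith ifs1 800 1000    # 800 MB soft, 1000 MB hard
```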

NOTE: When you create a directory quota, the system also runs the ibrix_onlinequotacheck command in DTREE_CREATE mode. If you are creating multiple directory quotas, you can import the quotas from a file; the system then uses batch processing to create them. If you add the quotas individually, you must wait for ibrix_onlinequotacheck to finish after creating each quota.

Using a quotas file

Quota limits can be imported into the cluster from the quotas file, and existing quotas can be exported to the file. See “Format of the quotas file” (page 30) for the format of the file.

Importing quotas from a file

From the GUI, select the file system, select Quotas from the lower Navigator, and then click Import on the Quota Summary panel.

From the CLI, use the following command to import quotas from a file, where PATH is the path to the quotas file:
ibrix_edquota -t -p PATH -f FSNAME
See “Format of the quotas file” (page 30) for information about the format of the quotas file.

Exporting quotas to a file

From the GUI, select the file system, select Quotas from the lower Navigator, and then click Export on the Quota Summary panel.

From the CLI, use the following command to export the existing quotas information to a file, where PATH is the pathname of the quotas file:
ibrix_edquota -e -p PATH -f FSNAME

Format of the quotas file

The quotas file contains a line for each user, group, or directory tree assigned a quota. When you add quota entries, each line must use one of the following formats. The “A” format specifies a user or group ID. The “B” format specifies a user or group name, or a directory tree that has already been assigned an identifier name. The “C” format specifies a directory tree where the path exists, but the identifier name for the directory tree is not created until the quotas are imported.
A,{type},{block_hardlimit},{block_softlimit},{inode_hardlimit},{inode_softlimit},{id}
B,{type},{block_hardlimit},{block_softlimit},{inode_hardlimit},{inode_softlimit},"{name}"
C,{type},{block_hardlimit},{block_softlimit},{inode_hardlimit},{inode_softlimit},"{name}","{path}"
The fields in each line are:
{type} 0 for a user quota, 1 for a group quota, or 2 for a directory tree quota.
{block_hardlimit} The maximum number of 1K blocks allowed for the user, group, or directory tree (1 MB = 1024 blocks).
{block_softlimit} The number of 1K blocks that, when reached, starts the countdown timer.
{inode_hardlimit} The maximum number of files allowed for the user, group, or directory tree.
{inode_softlimit} The number of files that, when reached, starts the countdown timer.

{id} The UID for a user quota or the GID for a group quota.
{name} A user name, group name, or directory tree identifier.
{path} The full path to the directory tree. The path must already exist.
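A "C" format line for a new directory tree quota can be generated mechanically from these fields. The sketch below emits one such line; the identifier (ba), path, and the 2048-block hard / 1024-block soft limits (2 MB / 1 MB, with no inode limits) are illustrative:

```shell
# Emit a "C" format quotas-file line: name, path, block hard limit, block soft limit.
# Inode limits are left at 0 (no limit) in this sketch.
dtree_quota_line() { printf 'C,2,%s,%s,0,0,"%s","%s"\n' "$3" "$4" "$1" "$2"; }

dtree_quota_line ba /fs1/a/aa 2048 1024
```

Appending the output of repeated calls to a file produces a batch import file suitable for ibrix_edquota -t.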

NOTE: When a quotas file is imported, the quotas are stored in a different, internal format. When a quotas file is exported, it contains lines in the internal format. However, when adding entries, you must use the A, B, or C format.
The following example shows lines that import directory tree quotas (2048 blocks = 2 MB):
C,2,2048,1024,0,0,"ba","/fs1/a/aa"
C,2,2048,1024,0,0,"bb","/fs1/a/ab"
C,2,2048,1024,0,0,"bc","/fs1/a/ac"

Using online quota check

Online quota checks are used to rescan quota usage, initialize directory tree quotas, and remove directory tree quotas. There are three modes:
• FILESYSTEM_SCAN mode. Use this mode in the following scenarios:
◦ You turned quotas off for a user, the user continued to store data in a file system, and you now want to turn quotas back on for this user.
◦ You are setting up quotas for the first time for a user who has previously stored data in a file system.
◦ You renamed a directory on which quotas are set.
◦ You moved a subdirectory into another parent directory that is outside of the directory having the directory tree quota.
• DTREE_CREATE mode. After setting quotas on a directory tree, use this mode to account for the data already stored under the directory tree.
• DTREE_DELETE mode. After deleting a directory tree quota, use this mode to unset quota IDs on all files and folders in that directory.

CAUTION: When ibrix_onlinequotacheck is started in DTREE_DELETE mode, it removes quotas for the specified directory. Be sure not to use this mode on directories that should retain quota information.

To run an online quota check from the GUI, select the file system and then select Online quota check from the lower Navigator. On the Task Summary panel, select Start to open the Start Online quota check dialog box and select the appropriate mode.

The Task Summary panel displays the progress of the scan. If necessary, select Stop to stop the scan.
To run an online quota check in FILESYSTEM_SCAN mode from the CLI, use the following command:
ibrix_onlinequotacheck -s -S -f FSNAME
To run an online quota check in DTREE_CREATE mode, use this command:
ibrix_onlinequotacheck -s -c -f FSNAME -p PATH
To run an online quota check in DTREE_DELETE mode, use this command:
ibrix_onlinequotacheck -s -d -f FSNAME -p PATH
The command must be run from a file serving node that has the file system mounted.

Configuring email notifications for quota events

To be notified when certain quota events occur, set up email notification for those events. On the GUI, select Email Configuration. On the Events Notified by Email panel, select the appropriate events and specify the email addresses to be notified.

Deleting quotas

To delete quotas from the GUI, select the quota on the appropriate Quota Usage Limits panel and then click Delete. To delete quotas from the CLI, use the following commands.
To delete quotas for a user:
ibrix_edquota -D -u UID [-f FSNAME]
To delete quotas for a group:
ibrix_edquota -D -g GID [-f FSNAME]
To delete the entry and quota limits for a directory tree quota:
ibrix_edquota -D -d NAME -f FSNAME
The -d NAME option specifies the name of the directory tree quota.
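Removing a directory tree quota is typically a two-step sequence: delete the quota entry, then run the online quota check in DTREE_DELETE mode to unset quota IDs under the tree. The sketch prints the commands rather than running them; the quota name (proj), file system (ifs1), and path are illustrative:

```shell
# Dry-run helpers: print the delete-quota and DTREE_DELETE scan commands.
delete_dtree_quota_cmd() { echo "ibrix_edquota -D -d $1 -f $2"; }
dtree_delete_scan_cmd()  { echo "ibrix_onlinequotacheck -s -d -f $1 -p $2"; }

delete_dtree_quota_cmd proj ifs1        # remove the quota entry and limits
dtree_delete_scan_cmd ifs1 /ifs1/proj   # clear residual quota IDs under the tree
```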

Troubleshooting quotas

Recreated directory does not appear in directory tree quota
If you create a directory tree quota on a specific directory, delete the directory (for example, with rmdir or rm -rf), and then recreate it on the same path, the directory does not count as part of the directory tree, even though the path is the same. Consequently, the ibrix_onlinequotacheck command does not report on the directory.

Moving directories
After moving a directory into or out of a directory containing quotas, run the ibrix_onlinequotacheck command as follows:
• After moving a directory from a directory tree with quotas (the source) to a directory without quotas (the destination), take these steps:
1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree to remove the usage information for the moved directory.
2. Run ibrix_onlinequotacheck in DTREE_DELETE mode on the directory that was moved to delete residual quota information.
• After moving a directory from a directory without quotas (the source) to a directory tree with quotas (the destination), take this step:
1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory tree to add the usage for the moved directory.
• After moving a directory from one directory tree with quotas (the source) to another directory tree with quotas (the destination), take these steps:
1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree to remove the usage information for the moved directory.
2. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory tree to add the usage for the moved directory.
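For the last case above (moving between two quota trees), the same DTREE_CREATE scan is run once per tree. The sketch prints the commands rather than running them; the file system (ifs1) and the source/destination tree paths are illustrative:

```shell
# Dry-run helper: print a DTREE_CREATE mode rescan command.
dtree_create_scan_cmd() { echo "ibrix_onlinequotacheck -s -c -f $1 -p $2"; }

dtree_create_scan_cmd ifs1 /ifs1/src   # source tree: drop the moved directory's usage
dtree_create_scan_cmd ifs1 /ifs1/dst   # destination tree: add the moved directory's usage
```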

4 Maintaining file systems

This chapter describes how to extend a file system, rebalance segments, delete a file system or file system component, and check or repair a file system. The chapter also includes file system troubleshooting information.

Best practices for file system performance

It is important to monitor the space used in the segments making up the file system. If segments are filled to 90% or more and are actively being used under the file system allocation policy, performance is likely to degrade because of the extra housekeeping tasks incurred in the file system. At this point, automatic write-behavior changes can also cause all new creates to go to the segment with the most available capacity, causing a slowdown. To maintain file system performance, follow these recommendations:
• If segments are approaching 85% full, either expand the file system with new segments or clean up the file system.
• If only a few segments are between 85% and 90% full and the other segments are much lower, run a rebalance task. However, if those few segments are at 90% or higher, it is best to adjust the file system allocation policy to exclude the full segments from use. Then initiate a rebalance task to move data from the full segments onto segments with more available space. When the rebalance task is complete and all segments are below the 85% threshold, you can reapply the original file system allocation policy.
The GUI displays the space used in each segment. Select the file system, and then select Segments from the lower Navigator.
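The 85%/90% guidance above can be expressed as a simple threshold check. This is only a toy decision helper, not an IBRIX tool; the usage percentage would come from the Segments panel or file system reports, and the 88 used in the example is illustrative:

```shell
# Toy helper mirroring the 85%/90% guidance: takes a segment's used percentage.
segment_advice() {
  if   [ "$1" -ge 90 ]; then echo "exclude from allocation policy, then rebalance"
  elif [ "$1" -ge 85 ]; then echo "expand file system or clean up"
  else echo "no action needed"; fi
}

segment_advice 88   # a segment in the 85-89% band
```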

Viewing information about file systems and components The Filesystems top panel on the GUI displays comprehensive information about a file system and its components. This section describes how to view the same information from the command line.

Viewing physical volume information

The following command lists detailed information about physical volumes:
ibrix_pv -i
For each physical volume, the output includes the following information:
# ibrix_pv -i
PV_NAME  SIZE(MB)  VG_NAME  LUN_GROUP  LV_NAME  FILESYSTEM  SEGNUM  USED%  SEGOWNER  DEVICE ON SEGOWNER
-------  --------  -------  ---------  -------  ----------  ------  -----  --------  ------------------
d1       3,072     ivg1                ilv1     ifs1        1       99     vm3       /dev/sdb
d2       3,072     ivg2                ilv2     ifs1        2       99     vm2       /dev/sdc
The following command provides host-specific information about physical volumes:
ibrix_pv -l [-h HOSTLIST]
The following table lists the fields included in the output of the -i and -l commands.

Field Description

PV_NAME Physical volume name. Regular physical volume names begin with the letter d. The names of physical volumes that are part of a mirror device begin with the letter m. Both are numbered sequentially.

SIZE (MB) Physical volume size, in MB.

VG_NAME Name of volume group created on this physical volume, if any.

LUN_GROUP The LUN group, if any.

LV_NAME Logical volume name.

FILESYSTEM File system to which the logical volume belongs.

SEGNUM Number of this segment (logical volume) in the file system.

USED% Percentage of total space in the volume group allocated to logical volumes.

SEGOWNER The owner of the segment.

DEVICE ON The device on which this physical volume is located.

RAID type Not applicable for this release.

RAID host Not applicable for this release.

RAID device Not applicable for this release.

Network host Not applicable for this release.

Network port Not applicable for this release.

Viewing volume group information

To display summary information about all volume groups, use the ibrix_vg -l command:
ibrix_vg -l
The VG_FREE field indicates the amount of group space that is not allocated to any logical volume. The VG_USED field reports the percentage of available space that is allocated to a logical volume.
To display detailed information about volume groups, use the ibrix_vg -i command. The -g VGLIST option restricts the output to the specified volume groups:
ibrix_vg -i [-g VGLIST]
The following table lists the output fields for ibrix_vg -i.

Field Description

VG_NAME Volume group name.

SIZE(MB) Volume group size in MB.


FREE(MB) Free (unallocated) space, in MB, available on this volume group.

USED% Percentage of total space in the volume group allocated to logical volumes.

FS_NAME File system to which this logical volume belongs.

PV_NAME Name of the physical volume used to create this volume group.

SIZE (MB) Size, in MB, of the physical volume used to create this volume group.

LV_NAME Names of logical volumes created from this volume group.

LV_SIZE Size, in MB, of each logical volume created from this volume group.

GEN Number of times the structure of the file system has changed (for example, new segments were added).

SEGNUM Number of this segment (logical volume) in the file system.

HOSTNAME File serving node that owns this logical volume.

STATE Operational state of the file serving node. See the administration guide for your system for a list of the states.

Viewing logical volume information

To view information about logical volumes, use the ibrix_lv -l command. The following table lists the output fields for this command.

Field Description

LV_NAME Logical volume name.

LV_SIZE Logical volume size, in MB.

FS_NAME File system to which this logical volume belongs.

SEG_NUM Number of this segment (logical volume) in the file system.

VG_NAME Name of the volume group created on this physical volume, if any.

OPTIONS Linux lvcreate options that have been set on the volume group.

Viewing file system information

To view information about all file systems, use the ibrix_fs -l command. This command also displays information about any file system snapshots. The following table lists the output fields for ibrix_fs -l.

Field Description

FS_NAME File system name.

STATE State of the file system (for example, Mounted).

CAPACITY (GB) Total space available in the file system, in GB.

USED% Amount of space used in the file system.

Files Number of files that can be created in this file system.

FilesUsed% Percentage of total storage used by files and directories.


GEN Number of times the structure of the file system has changed (for example, new segments were added).

NUM_SEGS Number of file system segments.

To view detailed information about file systems, use the ibrix_fs -i command. To view information for all file systems, omit the -f FSLIST argument:
ibrix_fs -i [-f FSLIST]
The following table lists the file system output fields reported by ibrix_fs -i.

Field Description

Total Segments Number of segments.

STATE State of the file system (for example, Mounted).

Mirrored? Not applicable for this release.

Compatible? Yes indicates that the file system is 32-bit compatible; the maximum number of segments (maxsegs) allowed in the file system is also specified. No indicates a 64-bit file system.

Generation Number of times the structure of the file system has changed (for example, new segments were added).

FS_ID File system ID for NFS access.

FS_NUM Unique IBRIX software internal file system identifier.

EXPORT_CONTROL_ENABLED Yes if enabled; No if not.

QUOTA_ENABLED Yes if enabled; No if not.

RETENTION If data retention is enabled, the retention policy is displayed.

DEFAULT_BLOCKSIZE Default block size, in KB.

CAPACITY Capacity of the file system.

FREE Amount of free space on the file system.

AVAIL Space available for user files.

USED PERCENT Percentage of total storage occupied by user files.

FILES Number of files that can be created in this file system.

FFREE Number of unused file inodes available in this file system.

Prealloc Number of KB a file system preallocates to a file; default: 1,024 KB.

Readahead Number of KB that IBRIX software will pre-fetch; default: 512 KB.

NFS Readahead Number of KB that IBRIX software pre-fetches under NFS; default: 256 KB.

Default policy Allocation policy assigned on this file system. Defined policies are: ROUNDROBIN, STICKY, DIRECTORY, LOCAL, RANDOM, and NONE. See “File allocation policies” (page 240) for information on these policies.

Default start segment The first segment to which an allocation policy is applied in a file system. If a segment is not specified, allocation starts on the segment with the most storage space available.

File replicas NA.

Dir replicas NA.

Mount Options Possible root segment inodes. This value is used internally.

Root Segment Hint Current root segment number, if known. This value is used internally.


Root Segment Replica(s) Hint Possible segment numbers for root segment replicas. This value is used internally.

Snap FileSystem Policy Snapshot strategy, if defined.

The following table lists the per-segment output fields reported by ibrix_fs -i.

Field Description

SEGMENT Segment number.

OWNER The host that owns the segment.

LV_NAME Logical volume name.

STATE The current state of the segment (for example, OK or UsageStale).

BLOCK_SIZE Default block size, in KB.

CAPACITY (GB) Size of the segment, in GB.

FREE (GB) Free space on this segment, in GB.

AVAIL (GB) Space available for user files, in GB.

FILES Inodes available on this segment.

FFREE Free inodes available on this segment.

USED% Percentage of total storage occupied by user files.

BACKUP Backup host name.

TYPE Segment type. MIXED means the segment can contain both files and directories.

TIER Tier to which the segment was assigned.

LAST_REPORTED Last time the segment state was reported.

HOST_NAME Host on which the file system is mounted.

MOUNTPOINT Host mountpoint.

PERMISSION File system access privileges: RO or RW.

Root_RO Specifies whether the root user is limited to read-only access, regardless of the access setting.

Lost+found directory
When browsing the contents of IBRIX software file systems, you will see a directory named lost+found. This directory is required for file system integrity and should not be deleted. The lost+found directory exists only at the top-level directory of a file system, which is also the mountpoint. Additionally, there are several directories that you can see at the top level (mountpoint) of a file system that are for internal use only and should not be deleted or edited:
• lost+found
• .archiving
• .audit
• .webdav
There are a few exceptions in the .archiving directory. Some files in certain subdirectories of .archiving are created for user consumption (described in various places in this user guide, for example, validation summary outputs 1–0.sum and audit log reports), and those specific files can be deleted if desired, but other files should not be deleted.

Viewing disk space information from a Linux 9000 client
Because file systems are distributed among segments on many file serving nodes, disk space utilities such as df must be provided with collated disk space information about those nodes. The Fusion Manager collects this information periodically and collates it for df. IBRIX software includes a disk space utility, ibrix_df, that enables Linux IBRIX 9000 clients to obtain utilization data for a file system. Execute the following command on any Linux 9000 client:
ibrix_df
The following table lists the output fields for ibrix_df.

Field Description

Name File system name.

CAPACITY Number of blocks in the file system.

FREE Number of unused blocks of storage.

AVAIL Number of blocks available for user files.

USED PERCENT Percentage of total storage occupied by user files.

FILES Number of files that can be created in the file system.

FFREE Number of unused file inodes in the file system.

Extending a file system
You can extend a file system from the GUI or the CLI.

NOTE: If a continuous remote replication (CRR) task is running on a file system, the file system cannot be extended until the CRR task is complete. If the file system uses tiers, verify that no tiering task is running before executing the file system expansion commands. If a tiering task is running, the expansion takes priority and the tiering task is terminated.

Select the file system on the Filesystems top panel, and then select Extend on the Summary bottom panel. The Extend Filesystem dialog box allows you to select the storage to be added to the file system. If data tiering is used on the file system, you can also enter the name of the appropriate tier.

On the CLI, use the ibrix_fs command to extend a file system. Segments are added to the file serving nodes in a round-robin manner. If tiering rules are defined for the file system, the -t option is required. Avoid expanding a file system while a tiering job is running; the expansion takes priority and the tiering job is terminated.
Extend a file system with the logical volumes (segments) specified in LVLIST:
ibrix_fs -e -f FSNAME -s LVLIST [-t TIERNAME]
Extend a file system with segments created from the physical volumes in PVLIST:
ibrix_fs -e -f FSNAME -p PVLIST [-t TIERNAME]
Extend a file system with specific logical volumes on specific file serving nodes:
ibrix_fs -e -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,...
Extend a file system with the listed tiered segment/owner pairs:
ibrix_fs -e -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,... -t TIERNAME
Rebalancing segments in a file system
Segment rebalancing redistributes files among the segments in a file system to balance segment utilization and server workload. For example, after adding new segments to a file system, you can rebalance all segments to redistribute files evenly among them. Usually, you will want to rebalance all segments, possibly as a cron job. In special situations, you might want to rebalance specific segments. Segments marked as bad (that is, segments that cannot be activated for some reason) are not candidates for rebalancing. A file system must be mounted when you rebalance its segments.
If necessary, you can evacuate segments (or logical volumes) located on storage that will be removed from the cluster, moving the data on the segments to other segments in the file system. You can evacuate a segment with the GUI or the ibrix_evacuate command. For more information, see the HP IBRIX Storage CLI Reference Guide or the administrator guide for your system.
How rebalancing works
During a rebalance operation on a file system, files are moved from source segments to destination segments. IBRIX software calculates the average aggregate utilization of the selected source

segments, and then moves files from sources to destinations to bring each candidate source segment as close as possible to the calculated utilization threshold. The final absolute percent usage in the segments depends on the average file size for the target file system.
If you do not specify any sources or destinations for a rebalance task, candidate segments are sorted into sources and destinations and then rebalanced as evenly as possible. If you specify sources, all other candidate segments in the file system are tagged as destinations, and vice versa if you specify destinations. Following the general rule, IBRIX software calculates the utilization threshold from the sources, and then brings the sources as close as possible to this value by evenly distributing their excess files among all destinations. If you specify sources, only those segments are rebalanced, and the overflow is distributed among all remaining candidate segments. If you specify destinations, all segments except the specified destinations are rebalanced, and the overflow is distributed only to the destinations. If you specify both sources and destinations, only the specified sources are rebalanced, and the overflow is distributed only among the specified destinations.
If there is not enough aggregate room in the destination segments to hold the files that must be moved from the source segments to balance the sources, IBRIX software issues an error message and does not move any files. The more restricted the number of destinations, the higher the likelihood of this error.
When rebalancing segments, note the following:
• To move files out of certain overused segments, specify source segments.
• To move files into certain underused segments, specify destination segments.
• To move files out of certain segments and place them in certain destinations, specify both source and destination segments.
Rebalancing segments on the GUI
Select the file system on the GUI and then select Segments from the lower Navigator. The Segments panel shows information for all segments in the file system.

Click Rebalance/Evacuate on the Segments panel to open the Segment Rebalance and Evacuation Wizard. The wizard can rebalance all files in the selected tier or in the file system, or you can select the segments for the operation. Choose the appropriate rebalance option on the Select Mode dialog box.

The Rebalance All dialog box allows you to rebalance all segments in the file system or in the selected tier.

The Rebalance Advanced dialog box allows you to select the source and destination segments for the rebalance operation.

Rebalancing segments from the CLI
To rebalance all segments, use the following command. Include the -a option to run the rebalance operation in analytical mode.
ibrix_rebalance -r -f FSNAME
To rebalance specific source segments, use the following command:
ibrix_rebalance -r -f FSNAME [[-s SRCSEGMENTLIST] [-S SRCLVLIST]]
For example, to rebalance segments 2 and 3 only, specifying them by segment number:
ibrix_rebalance -r -f ifs1 -s 2,3
To rebalance segments 1 and 2 only, specifying them by their logical volume names:
ibrix_rebalance -r -f ifs1 -S ilv1,ilv2
To rebalance specific destination segments, use the following command:
ibrix_rebalance -r -f FSNAME [[-d DESTSEGMENTLIST] [-D DESTLVLIST]]
For example, to rebalance segments 3 and 4 only, specifying them by segment number:
ibrix_rebalance -r -f ifs1 -d 3,4
To rebalance segments 3 and 4 only, specifying them by their logical volume names:
ibrix_rebalance -r -f ifs1 -D ilv3,ilv4
Tracking the progress of a rebalance task
You can use the GUI or CLI to track the progress of a rebalance task. As a rebalance task progresses, usage approaches an average value across segments, excluding bad segments that are not candidates for rebalancing and segments containing files that are in heavy use during the operation.
To track the progress of a rebalance task on the GUI, select the file system, and then select Rebalancer from the lower Navigator. The Task Summary displays details about the rebalance task. Also examine Used (%) on the Segments panel for the file system.
To track rebalance job progress from the CLI, use the ibrix_fs -i command. The output lists detailed information about the file system. The USED% field shows usage per segment.
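As a sketch, a typical CLI session combining these steps might look like the following. The file system name ifs1 is hypothetical, and the exact option order should be verified against the HP IBRIX Storage CLI Reference Guide:

```shell
# Hypothetical rebalance workflow for file system ifs1.

# 1. Preview the rebalance in analytical mode (no files are moved):
ibrix_rebalance -r -a -f ifs1

# 2. Run the actual rebalance across all candidate segments:
ibrix_rebalance -r -f ifs1

# 3. While the task runs, watch per-segment usage converge
#    toward the average (USED% field in the per-segment output):
ibrix_fs -i -f ifs1
```

Because the file system must be mounted during a rebalance, this sequence can be run at any time; scheduling step 2 as a cron job is the usual approach for routine rebalancing.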

Viewing the status of rebalance tasks
Use the following commands to view status for jobs on all file systems or only on the file systems specified in FSLIST:
ibrix_rebalance -l [-f FSLIST]

ibrix_rebalance -i [-f FSLIST]
The first command reports summary information. The second command lists jobs by task ID and file system and indicates whether the job is running or stopped. Jobs that are in the analysis (Coordinator) phase are listed separately from those in the implementation (Worker) phase.
Stopping rebalance tasks
You can stop running or stalled rebalance tasks. If the Fusion Manager cannot stop the task for some reason, you can force the task to stop. Stopping a task poses no risks for the file system. The system completes any file migrations that are in process when you issue the stop command. Depending on when you stop a task, segments might contain more or fewer files than before the operation started.
To stop a rebalance task on the GUI, select the file system, and then select Rebalancer from the lower Navigator. Click Stop on the Task Summary to stop the task.
To stop a task from the CLI, first execute ibrix_rebalance -i to obtain the TASKID, and then execute the following command:
ibrix_rebalance -k -t TASKID [-F]
To force the task to stop, include the -F option.
Deleting file systems and file system components
Deleting a file system
Before deleting a file system, unmount it from all file serving nodes and clients. (See “Unmounting a file system” (page 22).) Also delete any exports.

CAUTION: When a file system is deleted from the configuration database, its data becomes inaccessible. To avoid unintended service interruptions, be sure you have specified the correct file system.
To delete a file system, use the following command:
ibrix_fs -d [-R] -f FSLIST
For example, to delete file systems ifs1 and ifs2:
ibrix_fs -d -f ifs1,ifs2
If data retention is enabled on the file system, include the -R option in the command. For example:
ibrix_fs -d -R -f ifs2
Deleting segments, volume groups, and physical volumes
When deleting segments, volume groups, or physical volumes, be aware of the following:
• A segment cannot be deleted until the file system to which it belongs is deleted.
• A volume group cannot be deleted until all segments that were created on it are deleted.
• A physical volume cannot be deleted until all volume groups created on it are deleted.
If you delete physical volumes but do not remove the physical storage from the network, the volumes might be rediscovered when you next perform a discovery scan on the cluster.
To delete segments:
ibrix_lv -d -s LVLIST

For example, to delete segments ilv1 and ilv2:
ibrix_lv -d -s ilv1,ilv2
To delete volume groups:
ibrix_vg -d -g VGLIST
For example, to delete volume groups ivg1 and ivg2:
ibrix_vg -d -g ivg1,ivg2
To delete physical volumes:
ibrix_pv -d -p PVLIST [-h HOSTLIST]
For example, to delete physical volumes d1, d2, and d3:
ibrix_pv -d -p d[1-3]
Deleting file serving nodes and IBRIX 9000 clients
Before deleting a file serving node, unmount all file systems from it and migrate any segments that it owns to a different server. Ensure that the file serving node is not serving as a failover standby and is not involved in network interface monitoring.
To delete a file serving node, use the following command:
ibrix_server -d -h HOSTLIST
For example, to delete file serving nodes s1.hp.com and s2.hp.com:
ibrix_server -d -h s1.hp.com,s2.hp.com
To delete IBRIX 9000 clients, use the following command:
ibrix_client -d -h HOSTLIST
Checking and repairing file systems
The ibrix_fsck command analyzes inconsistencies in a file system.

CAUTION: Do not run ibrix_fsck in corrective mode without the direct guidance of HP Support. If run improperly, the command can cause data loss and file system damage.
CAUTION: Do not run e2fsck (or any other off-the-shelf fsck program) on any part of a file system. Doing so can damage the file system.
The ibrix_fsck command can detect and repair file system inconsistencies. File system inconsistencies can occur for many reasons, including hardware failure, power failure, switching off the system without proper shutdown, and failed migration.
The command runs in four phases and has two running modes: analytical and corrective. You must run all of the phases, in order:
• Phase 0 checks host connectivity and the consistency of segment byte blocks, and repairs them in corrective mode.
• Phase 1 checks segments and repairs them in corrective mode. Results are stored locally.
• Phase 2 checks the file system and repairs it in corrective mode. Results are stored locally.
• Phase 3 moves files from lost+found on each segment to the global lost+found directory on the root segment of the file system.
If a file system shows evidence of inconsistencies, contact HP Support. A representative will ask you to run ibrix_fsck in analytical mode and, based on the output, will recommend a course of action and assist in running the command in corrective mode. HP strongly recommends that you use corrective mode only with the direct guidance of HP Support. Corrective mode is complex and difficult to run safely; using it improperly can damage both data and the file system. By contrast, analytical mode is completely safe.

NOTE: During an ibrix_fsck run, an INFSCK flag is set on the file system to protect it. If an error occurs during the job, you must explicitly clear the INFSCK flag (see “Clearing the INFSCK flag on a file system” (page 46)), or you will be unable to mount the file system.

Analyzing the integrity of a file system on all segments
Observe the following requirements when executing ibrix_fsck:
• Unmount the file system for phases 0 and 1, and mount the file system for phases 2 and 3.
• Turn off automated failover by executing ibrix_host -m -U -h SERVERNAME.
• Unmount all NFS clients and stop NFS on the servers.
Use the following procedure to analyze file system integrity. Run phase 0 in analytic mode:
ibrix_fsck -p 0 -f FSNAME [-s LVNAME] [-c]
The command can be run on the specified file system or optionally only on the specified segment LVNAME.
Run phase 1 in analytic mode:
ibrix_fsck -p 1 -f FSNAME [-s LVNAME] [-c] [-B BLOCKSIZE] [-b ALTSUPERBLOCK]
The command can be run on file system FSNAME or optionally only on segment LVNAME. This phase can be run with a specified block size and an alternate superblock number. For example:
ibrix_fsck -p 1 -f ifs1 -B 4096 -b 12250
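Putting the four phases together, an analytic-mode run on a hypothetical file system ifs1 might look like the following sketch. The mount and unmount commands shown here assume the ibrix_mount/ibrix_umount syntax used elsewhere in this guide; verify against the CLI Reference Guide before use:

```shell
# Hypothetical analytic-mode integrity check of file system ifs1.
# Phases 0 and 1 require the file system to be unmounted:
ibrix_umount -f ifs1
ibrix_fsck -p 0 -f ifs1
ibrix_fsck -p 1 -f ifs1

# Phases 2 and 3 require the file system to be mounted:
ibrix_mount -f ifs1 -m /ifs1
ibrix_fsck -p 2 -f ifs1
ibrix_fsck -p 3 -f ifs1
```

Remember that automated failover must be disabled and NFS stopped for the duration of the run, as listed in the requirements above.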

NOTE: If phase 1 is run in analytic mode on a mounted file system, false errors can be reported.
Run phase 2:
ibrix_fsck -p 2 -f FSNAME [-s LVNAME] [-c] [-o "options"]
The command can be run on the specified file system or optionally only on segment LVNAME. Use -o to specify any options.
Run phase 3:
ibrix_fsck -p 3 -f FSNAME [-c]
Clearing the INFSCK flag on a file system
To clear the INFSCK flag, use the following command:
ibrix_fsck -f FSNAME -C
Troubleshooting file systems
Segment of a file system is accidentally deleted
When a segment of a file system is accidentally deleted without being evacuated first, all files on that segment are gone, including the Express Query metadata database files on that segment, even though they are unrelated to the files on that segment. You must recreate the metadata database. To recreate the metadata database:

1. Disable the Express Query and auditing feature for the file system, including the removal of any StoreAll REST API (also known as IBRIX Object API) shares. Disable the auditing feature before you disable the Express Query feature.
a. To disable auditing, enter the following command:
ibrix_fs -A [-f FSNAME] -oa audit_mode=off
b. Remove all StoreAll REST API shares created in the file system by entering the following command:
ibrix_httpshare -d -f
c. To disable the Express Query settings on a file system, enter the following command:
ibrix_fs -T -D -f FSNAME
2. To re-enable the Express Query settings on a file system, enter the following command:
ibrix_fs -T -E -f FSNAME
3. (Optional) To re-enable auditing, enter the following command:
ibrix_fs -A [-f FSNAME] -oa audit_mode=on
4. To recreate your REST API HTTP shares, enter the ibrix_httpshare -a command with the appropriate parameters. See “Using HTTP” (page 108). Express Query re-synchronizes the file system and the database by using the restored database information. This process might take some time.
5. Wait for the metadata resync process to finish. Enter the following command to monitor the resync process for a file system:
ibrix_archiving -l
The status should be OK for the file system before you proceed. Refer to the ibrix_archiving section in the HP IBRIX 9000 Storage CLI Reference Guide for information about the other states.
6. Import your previously exported custom metadata and audit logs according to “Importing metadata to a file system” (page 165).
ibrix_pv -a discovers too many or too few devices
This situation occurs when file serving nodes see devices multiple times. To prevent this, modify the LVM2 filter in /etc/lvm/lvm.conf to filter only on devices used by IBRIX software. This also changes the output of lvmdiskscan.
By default, the following filter finds all devices:
filter = [ "a/.*/" ]
The following filter finds all sd devices:
filter = [ "a|^/dev/sd.*|", "r|^.*|" ]
Contact HP Support if you need assistance.
Cannot mount on an IBRIX 9000 client
Verify the following:
• The file system is mounted and functioning on the file serving nodes.
• The mountpoint exists on the 9000 client. If not, create the mountpoint locally on the client.
• Software management services have been started on the 9000 client (see “Starting and stopping processes” in the administrator guide for your system).

NFS clients cannot access an exported file system
An exported file system has been unmounted from one or more file serving nodes, causing IBRIX software to automatically disable NFS on those servers. Fix the issue causing the unmount and then remount the file system.
User quota usage data is not being updated
Restart the quota monitor service to force a read of all quota usage data and update usage counts on the file serving nodes in your cluster. Use the following command:
ibrix_qm restart
File system alert is displayed after a segment is evacuated
When a segment is successfully evacuated, a segment unavailable alert is displayed in the GUI and attempts to mount the file system will fail. There are several options at this point:
• Mark the evacuated segment as bad (retired), using the following command. The file system state changes to okay and the file system can then be mounted. However, marking the segment as bad cannot be reversed.
ibrix_fs -B -f FSNAME {-n RETIRED_SEGNUMLIST | -s RETIRED_LVLIST}

• Keep the evacuated segment in the file system. Take one of the following steps to enable mounting the file system:
◦ Use the force option (-X) when mounting the file system:
ibrix_mount -f myFilesystem -m /myMountpoint -X

◦ Clear the “unavailable segment” flag on the file system with the ibrix_fsck command and then mount the file system normally: ibrix_fsck -f FSNAME -C -s LVNAME_OF_EVACUATED_SEG
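As a sketch, the second option might look like the following for a hypothetical file system myFilesystem whose evacuated segment is ilv7 (both names are examples, not defaults):

```shell
# Hypothetical: clear the "unavailable segment" flag for the
# evacuated segment ilv7, then mount the file system normally.
ibrix_fsck -f myFilesystem -C -s ilv7
ibrix_mount -f myFilesystem -m /myMountpoint
```

Unlike marking the segment as bad, this approach keeps the evacuated segment in the file system and is reversible in the sense that no segment is permanently retired.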

SegmentNotAvailable is reported
When writes to a segment do not succeed, the segment status may change to SegmentNotAvailable on the GUI and an alert message may be generated. To correct this situation, take the following steps:
1. Identify the file serving node that owns the segment. This information is reported on the Filesystem Segments panel on the GUI.
2. Fail over the file serving node to its standby. See the administration guide for your system for more information about this procedure.
3. Reboot the file serving node.
4. When the file serving node is up, verify that the segment, or LUN, is available. If the segment is still not available, contact HP Support.
SegmentRejected is reported
This alert is generated by a client call for a segment that is no longer accessible by the segment owner or file serving node specified in the client's segment map. The alert is logged to the Iad.log and messages files. It is usually an indication of an out-of-date or stale segment map for the affected file system and is caused by a network condition. Other possible causes are rebooting the node, unmounting the file system on the node, segment migrations, and, in a failover scenario, a stale IAD, an unresponsive kernel, or a network RPC condition.
To troubleshoot this alert, check network connectivity among the nodes, ensuring that the network is optimal and any recent network conditions have been resolved. From the file system perspective,

verify segment maps by comparing the file system generation numbers and the ownership for those segments being rejected by the clients. Use the following commands to compare the file system generation number on the local file serving nodes and the clients logging the error:
/usr/local/ibrix/bin/rtool enumseg
For example:
rtool enumseg ibfs1 3
segnum=3 of 4
fsid ...... 7b3ea891-5518-4a5e-9b08-daf9f9f4c027
fsname ...... ibfs1
device_name ...... /dev/ivg3/ilv3
host_id ...... 1e9e3a6e-74e4-4509-a843-c0abb6fec3a6
host_name ...... ib50-87 <-- Verify owner of segment
ref_counter ...... 1038
state_flags ...... SEGMENT_LOCAL SEGMENT_PREFERED SEGMENT_DHB
For example:
rtool enumfs ibfs1
fsname ...... ibfs1
fsid ...... 7b3ea891-5518-4a5e-9b08-daf9f9f4c027
fsnum ...... 1
fs_flags ...... operational
total_number_of_segments ...... 4
mounted ...... TRUE
ref_counter ...... 6
generation ...... 26 <-- FS generation number for comparison
alloc_policy ...... RANDOM
dir_alloc_policy ...... NONE
cur_segment ...... 0
sup_ap_on ...... NONE
local_segments ...... 3
quota ...... usr,grp,dir
f_blocks ...... 0047582040 4K-blocks (==0190328160 1K-blocks)
f_bfree ...... 0044000311 4K-blocks (==0176001244 1K-blocks)
f_bused ...... 0003581729 4K-blocks (==0014326916 1K-blocks)
f_bavail ...... 0043872867 4K-blocks (==0175491468 1K-blocks)
f_files ...... 26214400
f_ffree ...... 26212193
used files (f_files - f_ffree)... 2207
FS statistics for 0.0 seconds : n_reads=0, kb_read=0, n_writes=0, kb_written=0, n_creates=0, n_removes=0
Use the output to determine whether the FS generation number is in sync and whether the file serving nodes agree on the ownership of the rejected segments. In the rtool enumseg output, check the state_flags field for SEGMENT_IN_MIGRATION, which indicates that the segment is stuck in migration because of a failover.
Typically, if the segment has a healthy state flag on the file serving node that owns the segment and all file serving nodes agree on the owner of the segment, this is not a file system or file serving

node issue. If a state flag is stale or indicates that a segment is in migration, call HP Support for a recovery procedure. Otherwise, the alert indicates a file system generation mismatch. Take the following steps to resolve this situation:
1. From the active Fusion Manager, run the following command to propagate a new file system segment map throughout the cluster. This step takes a few minutes.
ibrix_dbck -I -f
2. If problems persist, try restarting the client's IAD:
/usr/local/ibrix/init/ibrix_iad restart
ibrix_fs -c failed with "Bad magic number in super-block"
If a file system creation command fails with an error such as the following, the command may have failed to preformat the LUN.
# ibrix_fs -c -f fs1 -s seg1_4
Calculated owner for seg1_4 : glory22
failed command (/usr/local/ibrix/bin/tuneibfs -F 3e2a9657-fc8b-46b2-96b0-1dc27e8002f3 -H glory2 -G 1 -N 1 -S fs1 -R 1 /dev/vg1_4/seg1_4 2>&1) status (1) output:
(/usr/local/ibrix/bin/tuneibfs: Bad magic number in super-block while trying to open /dev/vg1_4/seg1_4
Couldn't find valid filesystem superblock.
/usr/local/ibrix/bin/tuneibfs 5.3.461
Rpc Version:5 Rpc Ports base=IBRIX_PORTS_BASE
(Using EXT2FS Library version 1.32.1)
[ipfs1_open] reading superblock from blk 1 )

Iad error on host glory2
To work around the problem, recreate the segment on the failing LUN. To identify the LUN associated with the failure, run a command such as the following on the first server in the system:
# ibrix_pv -l -h glory2
PV_NAME SIZE(MB) VG_NAME DEVICE RAIDTYPE RAIDHOST RAIDDEVICE
d1 131070 vg1_1 /dev/mxso/dev4a
d2 131070 vg1_2 /dev/mxso/dev5a
d3 131070 vg1_3 /dev/mxso/dev6a
d5 23551 vg1_5 /dev/mxso/dev8a
d6 131070 vg1_4 /dev/mxso/dev7a
The Device column identifies the LUN number. In this example, the volume group vg1_4 is created from LUN 7. Recreate the segment and then run the file system creation command again.

5 Using NFS

To allow NFS clients to access an IBRIX file system, the file system must be exported. You can export a file system using the GUI or CLI.
By default, IBRIX file systems and directories follow POSIX semantics, and file names are case-sensitive for Linux/NFS users. If you prefer to use Windows semantics for Linux/NFS users, you can make a file system or subdirectory case-insensitive.
Exporting a file system
Exporting a file system makes local directories available for NFS clients to mount. The Fusion Manager manages the table of exported file systems and distributes the information to the /etc/exports files on the file serving nodes. All entries are automatically re-exported to NFS clients and to the file serving node standbys unless you specify otherwise.
On the exporting file serving node, configure the number of NFS server threads based on the expected workload. The default is 8 threads. If the node will service many clients, you can increase the value to 16 or 64. To configure server threads, use the following command to change the default value of RPCNFSDCOUNT in the /etc/sysconfig/nfs file from 8 to 16 or 64:
ibrix_host_tune -C -h HOSTS -o nfsdCount=64
A file system must be mounted before it can be exported.
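For reference, the tuning above corresponds to the following setting in /etc/sysconfig/nfs on each file serving node. This is a sketch of the resulting configuration; the ibrix_host_tune command normally applies it for you, so there is no need to edit the file directly:

```shell
# /etc/sysconfig/nfs (fragment) -- number of NFS server (nfsd) threads.
# The default is 8; raise to 16 or 64 when many clients are expected.
RPCNFSDCOUNT=64
```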

NOTE: When configuring options for an NFS export, do not use the no_subtree_check option. This option is not compatible with the IBRIX software.

Export a file system using the GUI
Use the Add a New File Share Wizard to export a file system. Select File Shares from the Navigator, and click Add on the File Shares panel to open the wizard. (You can also open the wizard by first selecting a file system on the Filesystems panel, selecting NFS Exports from the lower Navigator, and then clicking Add on the NFS Exports panel.) On the File Share window, select the file system to be exported, select NFS as the file sharing protocol, and enter the export path.

Use the Settings window to specify the clients allowed to access the share. Also select the permission and privilege levels for the clients, and specify whether the export should be available from a backup server.

The Advanced Settings window allows you to set NFS options on the share.

On the Host Servers window, select the servers that will host the NFS share. By default, the share is hosted by all servers that have mounted the file system.

The Summary window shows the configuration of the share. You can go back and revise the configuration if necessary. When you click Finish, the export is created and appears on the File Shares panel.

Export a file system using the CLI
To export a file system from the CLI, use the ibrix_exportfs command:
ibrix_exportfs -f FSNAME -h HOSTNAME -p CLIENT1:PATHNAME1,CLIENT2:PATHNAME2,... [-o "OPTIONS"] [-b]
The options are as follows:

• -f FSNAME: The file system to be exported.
• -h HOSTNAME: The file serving node containing the file system to be exported.
• -p CLIENT1:PATHNAME1,CLIENT2:PATHNAME2,...: The clients that will access the file system. A client can be a single file serving node, file serving nodes represented by a wildcard, or the world (:/PATHNAME). Note that world access omits the client specification but not the colon (for example, :/usr/src).
• -o "OPTIONS": The default Linux exportfs mount options are used unless specific options are provided. The standard NFS export options are supported. Options must be enclosed in double quotation marks (for example, -o "ro"). Do not enter an FSID= or sync option; they are provided automatically.
• -b: By default, the file system is exported to the file serving node's standby. This option excludes the standby from the export.

For example, to provide NFS clients *.hp.com with read-only access to file system ifs1 at the directory /usr/src on file serving node s1.hp.com:

ibrix_exportfs -f ifs1 -h s1.hp.com -p *.hp.com:/usr/src -o "ro"

To provide world read-only access to file system ifs1 located at /usr/src on file serving node s1.hp.com:

ibrix_exportfs -f ifs1 -h s1.hp.com -p :/usr/src -o "ro"

Unexporting a file system

A file system should be unexported before it is unmounted. To unexport a file system:
• On the GUI, select the file system, select NFS Exports from the lower Navigator, and then select Unexport.
• On the CLI, enter the following command:

ibrix_exportfs -U -h HOSTNAME -p CLIENT:PATHNAME [-b]
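As a sketch, removing an export like the read-only one shown above before unmounting might look like this (the hostname, client pattern, and path are illustrative):

```shell
# Remove the NFS export for clients *.hp.com before unmounting the
# file system. Hostname, client pattern, and path are illustrative.
ibrix_exportfs -U -h s1.hp.com -p '*.hp.com:/usr/src'
```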

Using case-insensitive file systems

By default, IBRIX file systems and directories follow POSIX semantics and file names are case-sensitive for Linux/NFS users. (File names are always case-insensitive for Windows clients.) If you prefer to use Windows semantics for Linux/NFS users, you can make a file system or subdirectory case-insensitive. Doing this prevents a Linux/NFS user from creating two files that differ only in case (such as foo and FOO). If Windows users are accessing the directory, two files with the same name but different case might be confusing, and the Windows users may be able to access only one of the files.

CAUTION: Caution is advised when using this feature. It breaks POSIX semantics and can cause problems for Linux utilities and applications.

Before enabling the case-insensitive feature, be sure the following requirements are met:
• The file system or directory must be created under the IBRIX File Serving Software 6.0 or later release.
• The file system must be mounted.

Setting case insensitivity for all users (NFS/Linux/Windows)

The case-insensitive setting applies to all users of the file system or directory. Select the file system on the GUI, expand Active Tasks in the lower Navigator, and select Case Insensitivity. On the Task Summary bottom panel, click New to open the New Case Insensitivity Task dialog box. Select the appropriate action to change case insensitivity.

NOTE: When specifying a directory path, the best practice is to change case insensitivity at the root of an SMB share and to avoid mixed case insensitivity in a given share.

To set case insensitivity from the CLI, use the following command:

ibrix_caseinsensitive -s -f FSNAME -c [ON|OFF] -p PATH

Viewing the current setting for case insensitivity

Select Report Current Case Insensitivity Setting on the New Case Insensitivity Task dialog box to view the current setting for a file system or directory. Click Perform Recursively to see the status for all descendent directories of the specified file system or directory. From the CLI, use the following command to determine whether case insensitivity is set on a file system or directory:

ibrix_caseinsensitive -i -f FSNAME -p PATH [-r]

The -r option includes all descendent directories of the specified path.

Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows)

When you set case insensitivity to OFF for a directory tree, the directory and all recursive subdirectories are again case sensitive, restoring POSIX semantics for Linux users.
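For example, following the best practice of changing case insensitivity at the root of an SMB share, a sketch might look like this (the file system name and path are illustrative):

```shell
# Turn on case insensitivity at the root of an SMB share, then query
# the setting recursively. File system name and path are illustrative.
ibrix_caseinsensitive -s -f ifs1 -c ON -p /ifs1/smbshare
ibrix_caseinsensitive -i -f ifs1 -p /ifs1/smbshare -r
```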

Log files

A new task is created when you change case insensitivity or query its status recursively. A log file is created for each task and an ID is assigned to the task. The log file is placed in the directory /usr/local/ibrix/log/case_insensitive on the server specified as the coordinating server for the task. Check that server for the log file.

NOTE: To verify the coordinating server, select File System > Inactive Tasks. Then select the task ID from the display and select Details.

The log file names have the format IDtask.log, such as ID26.log. The following sample log file is for a query reporting case insensitivity:

0:0:26275:Reporting Case Insensitive status for the following directories
1:0:/fs_test1/samename-T: TRUE
2:0:/fs_test1/samename-T/samename: TRUE
2:0:DONE

The next sample log file is for a change in case insensitivity:

0:0:31849:Case Insensitivity is turned ON for the following directories
1:0:/fs_test2/samename-true
2:0:/fs_test2/samename-true/samename
3:0:/fs_test2/samename-true/samename/samename-snap
3:0:DONE

The first line of the output contains the PID for the process and reports the action taken. In the remaining lines, the first column specifies the number of directories visited, the second column specifies the number of errors found, and the third column reports either the results of the query or the directories where case insensitivity was turned on or off.
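Because the columns are colon-separated, the logs are easy to summarize with standard tools. A minimal sketch using the sample log above (the scratch file stands in for a real /usr/local/ibrix/log/case_insensitive log):

```shell
# Summarize a case-insensitivity task log: the last line's first two
# colon-separated fields hold the totals of directories visited and errors.
log=$(mktemp)
cat > "$log" <<'EOF'
0:0:26275:Reporting Case Insensitive status for the following directories
1:0:/fs_test1/samename-T: TRUE
2:0:/fs_test1/samename-T/samename: TRUE
2:0:DONE
EOF
awk -F: 'END { print "dirs visited: " $1 ", errors: " $2 }' "$log"
# prints: dirs visited: 2, errors: 0
rm -f "$log"
```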

Displaying and terminating a case insensitivity task

To display a task, use the following command:

# ibrix_task -l

For example:

# ibrix_task -l
TASK ID      TYPE     FILE SYSTEM  SUBMITTED BY          TASK STATUS  IS COMPLETED?  EXIT STATUS  STARTED AT             ENDED AT
caseins_237  caseins  fs_test1     root from Local Host  STARTING     No                          Jun 17, 2011 11:31:38

To terminate a task, run the following command and specify the task ID:

# ibrix_task -k -n TASKID

For example:

# ibrix_task -k -n caseins_237

Case insensitivity and operations affecting directories

A newly created directory retains the case-insensitive setting of its parent directory. When you use commands and utilities that create a new directory, that directory has the case-insensitive setting of its parent. This situation applies to the following:
• Windows or Mac copy and paste
• tar/untar
• compress/uncompress
• cp -R
• rsync

• Remote replication
• xcopy
• robocopy
• Restoring directories and folders from snapshots

The case-insensitive setting of the source directories is not retained on the destination directories. Instead, the setting for the destination file system is applied. However, if you use a command such as the Linux mv command, a Windows drag-and-drop operation, or a Mac uncompress operation, a new directory is not created, and the affected directory retains its original case-insensitive setting.

6 Configuring authentication for SMB, FTP, and HTTP

IBRIX software supports several services for authenticating users accessing shares on IBRIX file systems:
• Active Directory (supported for SMB, FTP, and HTTP)
• Active Directory with LDAP ID mapping as a secondary lookup source (supported for SMB)
• LDAP (supported for SMB)
• Local Users and Groups (supported for SMB, FTP, and HTTP)

Local Users and Groups can be used with Active Directory or LDAP.

NOTE: Active Directory and LDAP cannot be used together.

You can configure authentication from the GUI or CLI. When you configure authentication with the GUI, the selected authentication services are configured on all servers. The CLI commands allow you to configure authentication differently on different servers.

Using Active Directory with LDAP ID mapping

When LDAP ID mapping is a secondary lookup method, the system reads SMB client UIDs and GIDs from LDAP if it cannot locate the needed ID in an AD entry. The name in LDAP must match the name in AD, regardless of case or a prepended domain. If the user configuration differs in LDAP and Windows AD, the LDAP ID mapping feature uses the AD configuration. For example, the following AD configuration specifies that the primary group for user1 is Domain Users, but in LDAP, the primary group is group1.

AD configuration:
  user: user1
  primary group: Domain Users
  uid: not specified
  UNIX gid: not specified

LDAP configuration:
  uid: user1
  uidNumber: 1010
  gidNumber: 1001 (group1)
  cn: Domain Users
  gidNumber: 1111

The Linux id command returns the primary group specified in LDAP:

user: user1
primary group: group1 (1001)

LDAP ID mapping uses AD as the primary source for identifying the primary group and all supplemental groups. If AD does not specify a UNIX GID for a user, LDAP ID mapping looks up the GID for the primary group assigned in AD. In the example, the primary group assigned in AD is Domain Users, and LDAP ID mapping looks up the GID of that group in LDAP. The lookup operation returns:

user: user1
primary group: Domain Users (1111)

AD does not force the supplied primary group to match the supplied UNIX GID. The supplemental groups assigned in AD do not need to match the members assigned in LDAP. LDAP ID mapping uses the members list assigned in AD and ignores the members list configured in LDAP.

Using LDAP as the primary authentication method

Requirements for LDAP users and groups

IBRIX supports only OpenLDAP.

Configuring LDAP for IBRIX software

To configure LDAP, complete the following steps:
1. Update the appropriate configuration file template, which ships as part of the IBRIX LDAP software, with information specific to the OpenLDAP server being configured.
2. Pass the updated configuration file to a configuration utility, which uses LDAP commands to modify the remote enterprise's OpenLDAP server.
3. Configure LDAP authentication on all the cluster nodes by using Fusion Manager.

Update the template on the remote LDAP server

The IBRIX LDAP client ships with three configuration templates, each corresponding to a supported OpenLDAP server schema:
• customized-schema-template.conf
• samba-schema-template.conf
• posix-schema-template.conf

Pick the schema your server supports. If your server supports both Posix and Samba schemas, pick the schema most appropriate for your environment. Make a copy of the template corresponding to the schema your LDAP server supports, and update the copy with your configuration information.

Customized template. If the OpenLDAP server has a customized or special schema, you must provide information to map the standard schema attribute and class names to the new names in use on the OpenLDAP server. This situation is not common. Use this template only if your OpenLDAP server has overridden the standardized Posix or Samba schema with customized extensions. Provide values (equivalent names) for all virtual attributes in the configuration. For example:

mandatory; virtual; uid; your-schema-equivalent-of-uid
optional; virtual; homeDirectory; your-schema-equivalent-of-homeDirectory

Samba template. Enter the required attributes for Samba/POSIX templates. You can use the default values specified in the “Map (mandatory) variables” and “Map (Optional) variables” sections of the template.

POSIX template. Enter the required attributes for Samba/POSIX templates. Also remove or comment out the following virtual attributes:

# mandatory; virtual; SID;sambaSID
# mandatory; virtual; PrimaryGroupSID;sambaPrimaryGroupSID
# mandatory; virtual; sambaGroupMapping;sambaGroupMapping

Required attributes for Samba/POSIX templates

• VERSION (any arbitrary string): Helps identify the configuration version uploaded. Potentially used for reports, audit history, and troubleshooting.
• LDAPServerHost (IP address string): An FQDN or IP address. Typically, it is a front-ended switch or an LDAP proxy/balancer name or address for multiple backend high-availability LDAP servers.
• LdapConfigurationOU (writable OU name string): The LDAP OU (organizational unit) to which configuration entries can be written. This OU must exist on the server and must be readable and writable using LdapWriteDN.
• LdapWriteDN (DN name string): Limited write DN credentials. HP recommends that you do not use cn=Manager credentials. Instead, use an account DN with very restricted write permissions to the LdapConfigurationOU and beneath.
• LDAPWritePassword (unencrypted password string): Password for the LdapWriteDN account. LDAP encrypts the string on storage.
• schematype (samba, posix, or user-defined schema): Supported schema for the OpenLDAP server.

Run the configuration script on the remote LDAP server

The IBRIX gen_ldap-lwtools.sh script performs the configuration based on the copy of the chosen schema template (UserConf.conf in the examples). Run the following command to validate your changes:

sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf -v

If the configuration looks okay, run the command with added security by removing all temporary files:

sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf -rm

If you need to troubleshoot the configuration, run the command as follows:

sh /opt/likewise/bin/gen_ldap-lwtools.sh UserConf.conf

Configure LDAP authentication on the cluster nodes

You can configure LDAP authentication from the GUI, as described in “Configuring authentication from the GUI” (page 60) (recommended), or by using the ibrix_ldapconfig command (see “Configuring LDAP” (page 69)).

Configuring authentication from the GUI

Use the Authentication Wizard to perform the initial configuration or to modify it at a later time. Select Cluster Configuration > File Sharing Authentication from the Navigator to open the File Sharing Authentication Settings panel. This panel shows the current authentication configuration on each server.

Click Authentication Wizard to start the wizard. On the Configure Options page, select the authentication service to be applied to the servers in the cluster.

NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for SMB.

The wizard displays the configuration pages corresponding to the option you selected.
• Active Directory. See “Active Directory” (page 62).
• LDAP. See “LDAP” (page 63).
• LDAP ID Mapping. See “LDAP ID mapping” (page 62).
• Local Groups. See “Local Groups” (page 65).

• Local Users. See “Local Users” (page 66).
• Share Administrators. See “Windows Share Administrators” (page 68).
• Summary. See “Summary” (page 68).

Active Directory

Enter your domain name, the Auth Proxy username (an AD domain user with privileges to join the specified domain; typically a Domain Administrator), and the password for that user. These credentials are used only to join the domain and do not persist on the cluster nodes. Optionally, you can enable Linux static user mapping; for more information, see “Linux static user mapping with Active Directory” (page 87).

NOTE: When you successfully configure Active Directory authentication, the machine is part of the domain until you remove it from the domain, either with the ibrix_auth -n command or with Windows tools. Because Active Directory authentication is a one-time event, it is not necessary to update authentication if you change the proxy user information.

If you want to use LDAP ID mapping as a secondary lookup for Active Directory, select Enabled with LDAP ID Mapping and AD in the Linux Static User Mapping field. When you click Next, the LDAP ID Mapping dialog box appears.

LDAP ID mapping

If the system cannot locate a UID/GID in Active Directory, it searches for the UID/GID in LDAP. On the LDAP ID Mapping dialog box, specify the appropriate search parameters.

Enter the following information on the dialog box:

• LDAP Server Host: Enter the server name or IP address of the LDAP server host.
• Port: Enter the LDAP server port (TCP port 389 for unencrypted or TLS encrypted; 636 for SSL encrypted).
• Base of Search: Enter the LDAP base for searches. This is normally the root suffix of the directory, but you can provide a base lower down the tree for business rules enforcement, ACLs, or performance reasons. For example, ou=people,dc=entx,dc=net.
• Bind DN: Enter the LDAP user account used to authenticate to the LDAP server to read data. This account must have privileges to read the entire directory. Write credentials are not required. For example, cn=hp9000-readonly-user,dc=entx,dc=net.
• Password: Enter the password for the LDAP user account.
• Max Entries: Enter the maximum number of entries to return from the search (the default is 10). Enter 0 (zero) for no limit.
• Max Wait Time: Enter the local maximum search time-out value in seconds. This value determines how long the client will wait for search results.
• LDAP Scope: Select the level of entries to search:
  ◦ base: search the base level entry only
  ◦ sub: search the base level entry and all entries in sub-levels below the base entry
  ◦ one: search all entries in the first level below the base entry, excluding the base entry
• Namesearch Case Sensitivity: If LDAP searches should be case sensitive, check this box.

LDAP

Enter the server name or IP address of the LDAP server host and the password for the LDAP user account.

NOTE: LDAP cannot be used with Active Directory.

Enter the following information in the remaining fields:

• Bind DN: Enter the LDAP user account used to authenticate to the LDAP server to read data, such as cn=hp9000-readonly-user,dc=entx,dc=net. This account must have privileges to read the entire directory. Write credentials are not required.
• Write OU: Enter the OU (organizational unit) on the LDAP server to which configuration entries can be written. This OU must be pre-provisioned on the remote LDAP server. The previous schema configuration step would have seeded this OU with values that will now be read. The LDAPBindDN credentials must be able to read (but not write) from the LDAPWriteOU. For example, ou=9000Config,ou=configuration,dc=entx,dc=net.
• Base of Search: This is normally the root suffix of the directory, but you can provide a base lower down the tree for business rules enforcement, ACLs, or performance reasons. For example, ou=people,dc=entx,dc=net.
• NetBIOS Name: Enter any string that identifies the IBRIX host, such as IBRIX.

If your LDAP configuration requires a certificate for secure access, click Edit to open the LDAP dialog box. You can enter a TLS or SSL certificate. When no certificate is used, the Enable SSL field shows Neither TLS or SSL.

NOTE: If LDAP is the primary authentication service, Windows clients such as Explorer or MMC plug-ins cannot be used to add new users.

Local Groups

Specify local groups allowed to access shares. On the Local Groups page, enter the group name and, optionally, the GID and RID. If you do not assign a GID and RID, they are generated automatically. Click Add to add the group to the list of local groups. Repeat this process to add other local groups. When naming local groups, you should be aware of the following:
• Group names must be unique. The new name cannot already be used by another user or group.
• The following names cannot be used: administrator, guest, root.

NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as Explorer or MMC plug-ins cannot be used to add new users.

Local Users

Specify local users allowed to access shares. On the Local Users page, enter a user name and password. Click Add to add the user to the Local Users list. When naming local users, you should be aware of the following:
• User names must be unique. The new name cannot already be used by another user or group.
• The following names cannot be used: administrator, guest, root.

To provide account information for the user, click Advanced. The default home directory is /home/ and the default shell program is /bin/false.

NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as Explorer or MMC plug-ins cannot be used to add new users.

Windows Share Administrators

If you will be using the Windows Share Management MMC plug-in to manage SMB shares, enter your share administrators on this page. You can skip this page if you will be managing shares entirely from the IBRIX Management Console.

To add an Active Directory or LDAP share administrator, enter the administrator name (such as domain\user1 or domain\group1) and click Add to add the administrator to the Windows Share Administrators list. To add an existing Local User as a share administrator, select the user and click Add.

Summary

The Summary page shows the authentication configuration. You can go back and revise the configuration if necessary. When you click Finish, authentication is configured, and the details appear on the File Sharing Authentication panel.

Viewing or changing authentication settings

Expand File Sharing Authentication in the lower Navigator, and then select an authentication service to display the current configuration for that service. On each panel, you can start the Authentication Wizard and modify the configuration if necessary.

You cannot change the UID or RID for a Local User account. If it is necessary to change a UID or RID, first delete the account and then recreate it with the new UID or RID. The Local Users and Local Groups panels allow you to delete the selected user or group.

Configuring authentication from the CLI

You can configure Active Directory, LDAP, LDAP ID mapping, or Local Users and Groups.

Configuring Active Directory

To configure Active Directory authentication, use the following command:

ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name [-P AUTH_PROXY_PASSWORD] [-S SETTINGLIST] [-h HOSTLIST]

RFC2307 is the protocol that enables Linux static user mapping with Active Directory. To enable RFC2307 support, use the following command:

ibrix_cifsconfig -t [-S SETTINGLIST] [-h HOSTLIST]

Enable RFC2307 in the SETTINGLIST as follows:

rfc2307_support=rfc2307

For example:

ibrix_cifsconfig -t -S "rfc2307_support=rfc2307"

To disable RFC2307, set rfc2307_support to unprovisioned. For example:

ibrix_cifsconfig -t -S "rfc2307_support=unprovisioned"

IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S command, use the following command to restart the SMB services on all nodes affected by the change:

ibrix_server -s -t cifs -c restart [-h SERVERLIST]

Clients will experience a temporary interruption in service during the restart.

Configuring LDAP Use the ibrix_ldapconfig command to configure LDAP as the authentication service for SMB shares.

IMPORTANT: Before using ibrix_ldapconfig to configure LDAP on the cluster nodes, you must configure the remote LDAP server. For more information, see “Configuring LDAP for IBRIX software” (page 59).

Add an LDAP configuration and enable LDAP:

ibrix_ldapconfig -a -h LDAPSERVERHOST [-P LDAPSERVERPORT] -b LDAPBINDDN -p LDAPBINDDNPASSWORD -w LDAPWRITEOU -B LDAPBASEOFSEARCH -n NETBIOS -E ENABLESSL [-f CERTFILEPATH] [-c CERTFILECONTENTS]

The options are:

• -h LDAPSERVERHOST: The LDAP server host (server name or IP address).
• -P LDAPSERVERPORT: The LDAP server port.
• -b LDAPBINDDN: The LDAP bind Distinguished Name. For example: cn=hp9000-readonly-user,dc=entx,dc=net.
• -p LDAPBINDDNPASSWORD: The LDAP bind password.
• -w LDAPWRITEOU: The LDAP write Organizational Unit, or OU (for example, ou=9000Config,ou=configuration,dc=entx,dc=net).
• -B LDAPBASEOFSEARCH: The LDAP base for searches (for example, ou=people,dc=entx,dc=net).
• -n NETBIOS: The NetBIOS name, such as IBRIX.
• -E ENABLESSL: The type of certificate required. Enter 0 for no certificate, 1 for TLS, or 2 for SSL.
• -f CERTFILEPATH: The path to the TLS or SSL certificate file, such as /usr/local/ibrix/ldap/key.pem.
• -c CERTFILECONTENTS: The contents of the certificate file. Copy the contents and paste them between quotes.
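Putting the options together, a hypothetical configuration that adds LDAP with a TLS certificate might look like this (the server name, DNs, password, and certificate path are placeholders patterned on the examples above):

```shell
# Hypothetical values throughout; adjust for your LDAP environment.
ibrix_ldapconfig -a -h ldap.entx.net -P 389 \
  -b cn=hp9000-readonly-user,dc=entx,dc=net -p 'ReadOnlyPass' \
  -w ou=9000Config,ou=configuration,dc=entx,dc=net \
  -B ou=people,dc=entx,dc=net -n IBRIX \
  -E 1 -f /usr/local/ibrix/ldap/key.pem
```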

Modify an LDAP configuration:

ibrix_ldapconfig -m -h LDAPSERVERHOST [-P LDAPSERVERPORT] [-e|-D] [-b LDAPBINDDN] [-p LDAPBINDDNPASSWORD] [-w LDAPWRITEOU] [-B LDAPBASEOFSEARCH] [-n NETBIOS] [-E ENABLESSL] [-f CERTFILEPATH]|[-c CERTFILECONTENTS]

The -f and -c arguments are mutually exclusive. Provide one or the other but not both.

View the LDAP configuration:

ibrix_ldapconfig -i

Delete LDAP settings for an LDAP server host:

ibrix_ldapconfig -d -h LDAPSERVERHOST

Enable LDAP:

ibrix_ldapconfig -e -h LDAPSERVERHOST

Disable LDAP:

ibrix_ldapconfig -D -h LDAPSERVERHOST

Configuring LDAP ID mapping

Use the ibrix_ldapidmapping command to configure LDAP ID mapping as a secondary lookup source for Active Directory. LDAP ID mapping can be used only for SMB shares. Add an LDAP ID mapping:

ibrix_ldapidmapping -a -h LDAPSERVERHOST -B LDAPBASEOFSEARCH [-P LDAPSERVERPORT] [-b LDAPBINDDN] [-p LDAPBINDDNPASSWORD] [-m MAXWAITTIME] [-M MAXENTRIES] [-n] [-s] [-o] [-u]

This command automatically enables LDAP RFC 2307 ID Mapping. The options are:

• -h LDAPSERVERHOST: The LDAP server host (server name or IP address).
• -B LDAPBASEOFSEARCH: The LDAP base for searches (for example, ou=people,dc=entx,dc=net).
• -P LDAPSERVERPORT: The LDAP server port (TCP port 389).
• -b LDAPBINDDN: The LDAP bind Distinguished Name (the default is anonymous). For example: cn=hp9000-readonly-user,dc=entx,dc=net.
• -p LDAPBINDDNPASSWORD: The LDAP bind password.
• -m MAXWAITTIME: The maximum amount of time to allow the search to run.
• -M MAXENTRIES: The maximum number of entries (the default is 10).
• -n: Case sensitivity for name searches (the default is false, or case-insensitive).
• -s: LDAP scope base (search the base level entry only).
• -o: LDAP scope one (search all entries in the first level below the base entry, excluding the base entry).
• -u: LDAP scope sub (search the base-level entries and all entries below the base level).

Display information for LDAP ID mapping:

ibrix_ldapidmapping -i

Enable an existing LDAP ID mapping:

ibrix_ldapidmapping -e -h LDAPSERVERHOST

Disable an existing LDAP ID mapping:

ibrix_ldapidmapping -d -h LDAPSERVERHOST

Configuring Local Users and Groups authentication

Use ibrix_auth to configure Local Users authentication. Use ibrix_localusers and ibrix_localgroups to manage user and group accounts. Configure Local Users authentication:

ibrix_auth -N [-h HOSTLIST]

Be sure to create a local user account for each user that will be accessing SMB, FTP, or HTTP shares, and create at least one local group account for the users. The account information is stored internally in the cluster.

Add a Local User account:

ibrix_localusers -a -u USERNAME -g DEFAULTGROUP -p PASSWORD [-h HOMEDIR] [-s SHELL] [-i USERINFO] [-U USERID] [-S RID] [-G GROUPLIST]

Modify a Local User account:

ibrix_localusers -m -u USERNAME [-g DEFAULTGROUP] [-p PASSWORD] [-h HOMEDIR] [-s SHELL] [-i USERINFO] [-G GROUPLIST]

View information for all Local User accounts:

ibrix_localusers -L

View information for a specific Local User account:

ibrix_localusers -l -u USERNAME

Delete a Local User account:

ibrix_localusers -d -u USERNAME

Add a Local Group account:

ibrix_localgroups -a -g GROUPNAME [-G GROUPID] [-S RID]

Modify a Local Group account:

ibrix_localgroups -m -g GROUPNAME [-G GROUPID] [-S RID]

View information about all Local Group accounts:

ibrix_localgroups -L

View information for a specific Local Group account:

ibrix_localgroups -l -g GROUPNAME

Delete a Local Group account:

ibrix_localgroups -d -g GROUPNAME
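As a sketch, a hypothetical sequence that enables Local Users authentication and creates a group and a user might look like this (the group name, user name, and password are placeholders):

```shell
# Hypothetical names and password; adjust for your environment.
ibrix_auth -N                                   # enable Local Users authentication
ibrix_localgroups -a -g smbusers                # create a local group
ibrix_localusers -a -u jdoe -g smbusers \
  -p 'S3cretPwd' -s /bin/false                  # create a user in that group
ibrix_localusers -l -u jdoe                     # verify the new account
```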

7 Using SMB

The SMB server implementation allows you to create file shares for data stored on the cluster. The SMB server provides a true Windows experience for Windows clients. A user accessing a file share on an IBRIX system will see the same behavior as on a Windows server.

IMPORTANT: SMB and IBRIX Windows clients cannot be used together because of incompatible AD user to UID mapping. You can use either SMB or IBRIX Windows clients, but not both at the same time.

IMPORTANT: Before configuring SMB, select an authentication method. See “Configuring authentication for SMB, FTP, and HTTP” (page 58) for more information.

Configuring file serving nodes for SMB

To enable file serving nodes to provide SMB services, you will need to configure the resolv.conf file. On each node, the /etc/resolv.conf file must include a DNS server that can resolve SRV records for your domain. For example:

# cat /etc/resolv.conf

search mycompany.com
nameserver 192.168.100.132

To verify that a file serving node can resolve SRV records for your AD domain, run the Linux dig command. (In the following example, the Active Directory domain name is mydomain.com.)

% dig SRV _ldap._tcp.mydomain.com

In the output, verify that the ANSWER SECTION contains a line with the name of a domain controller in the Active Directory domain. Following is some sample output:

; <<>> DiG 9.3.4-P1 <<>> SRV _ldap._tcp.mydomain.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56968
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2

;; QUESTION SECTION:
;_ldap._tcp.mydomain.com.    IN    SRV

;; ANSWER SECTION:
_ldap._tcp.mydomain.com. 600 IN SRV 0 100 389 adctrlr.mydomain.com.

;; ADDITIONAL SECTION:
adctrlr.mydomain.com. 3600 IN A 192.168.11.11

;; Query time: 0 msec
;; SERVER: 192.168.100.132#53(192.168.100.132)
;; WHEN: Tue Mar 16 09:56:02 2010
;; MSG SIZE rcvd: 113

For more information, see the Linux resolv.conf(5) man page.

Starting or stopping the SMB service and viewing SMB statistics

IMPORTANT: You will need to start the SMB service initially on the file serving nodes. Subsequently, the service is started automatically when a node is rebooted.

NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for SMB.

Use the SMB panel on the GUI to start, stop, or restart the SMB service on a particular server, or to view SMB activity statistics for the server. Select Servers from the Navigator and then select the appropriate server. Select CIFS in the lower Navigator to display the CIFS panel, which shows SMB activity statistics on the server. You can start, stop, or restart the SMB service by clicking the appropriate button.

NOTE: Click CIFS Settings to configure SMB signing on this server. See “Configuring SMB signing” (page 80) for more information.

To start, stop, or restart the SMB service from the CLI, use the following command:

ibrix_server -s -t cifs -c {start|stop|restart}

Monitoring SMB services

The ibrix_cifsmonitor command configures monitoring for the following SMB services:
• lwreg
• dcerpc
• eventlog
• lsass
• lwio
• netlogon
• srvsvc

If the monitor finds that a service is not running, it attempts to restart the service. If the service cannot be restarted, that particular service is not monitored. The command can be used for the following tasks.

Start the SMB monitoring daemon and enable monitoring:

ibrix_cifsmonitor -m [-h HOSTLIST]

Display the health status of the SMB services:

ibrix_cifsmonitor -l

The command output reports status as follows:

• Up: All monitored SMB services are up and running.
• Degraded: The lwio service is running but one or more of the other services are down.
• Down: The lwio service is down and one or more of the other services are down.
• Not Monitored: Monitoring is disabled.
• N/A: The active Fusion Manager could not communicate with other file serving nodes in the cluster.

Disable monitoring and stop the SMB monitoring daemon:

ibrix_cifsmonitor -u [-h HOSTLIST]

Restart SMB service monitoring:

ibrix_cifsmonitor -c [-h HOSTLIST]

SMB shares

Windows clients access file systems through SMB shares. You can use the IBRIX GUI or CLI to manage shares, or you can use the Microsoft Management Console interface. The SMB service must be running when you add shares. When working with SMB shares, be aware of the following:
• The permissions on the directory exporting an SMB share govern the access rights given to the Everyone user, as well as to the owner and group of the share. Consequently, the Everyone user may have more access rights than necessary. The administrator should set ACLs on the SMB share to ensure that users have only the appropriate access rights. Alternatively, permissions can be set more restrictively on the directory exporting the SMB share.
• When the cluster and Windows clients are not joined in a domain, local users are not visible when you attempt to add ACLs on files and folders in an SMB share.
• A directory tree on an SMB share cannot be copied if there are more than 50 ACLs on the share. Also, because of technical constraints in the SMB service, you cannot create subfolders in a directory on an SMB share that has more than 50 ACLs.
• When configuring an SMB share, you can specify IP addresses or ranges that should be allowed or denied access to the share. However, if your network includes packet filters, a NAT gateway, or routers, this feature cannot be used because the client IP addresses are modified in transit.
• You can use an SMB share as a DFS target. However, the SMB share does not support DFS load balancing or DFS replication.
• With the release of version 6.2, SMB shares support Large MTU, which provides a 1 MB buffer for reads and writes. On the client, you must enable Large MTU in the registry to enable support for Large MTU on the SMB server.
• SMB shares support Alternate Data Streams.
Files containing Alternate Data Streams of type '$DATA' can be written to SMB shares by SMB clients. The files are stored on the X9000 file system in a special format and should be handled only by SMB clients. If the files are handled over a different protocol, or directly on the X9000 server, the Alternate Data Streams could be lost.
• HP-SMB supports the following subset of Windows LSASS Local Authentication Provider Privileges:
◦ SE_BACKUP_PRIVILEGE
◦ SE_CHANGE_NOTIFY_PRIVILEGE (Bypass traverse checking)

◦ SE_MACHINE_ACCOUNT_PRIVILEGE
◦ SE_MACHINE_VOLUME_PRIVILEGE
◦ SE_RESTORE_PRIVILEGE
◦ SE_TAKE_OWNERSHIP_PRIVILEGE

See the Microsoft documentation for more information about these privileges.
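As noted earlier, the Linux permissions on the directory exporting a share govern what the Everyone user receives. One precaution is to tighten the directory mode before creating the share. A minimal sketch, assuming an example path under /tmp (the real directory would be on the mounted IBRIX file system):

```shell
# Restrict the exported directory so that only the owning user and
# group have access through the Linux mode bits; Everyone then gets
# no implicit rights from the directory permissions.
mkdir -p /tmp/ibrix_demo/share1
chmod 770 /tmp/ibrix_demo/share1
stat -c '%a' /tmp/ibrix_demo/share1   # prints: 770
```

Share-level ACLs should still be set as described above; this only narrows the base Linux permissions on the exported directory.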

Configuring SMB shares with the GUI

Use the Add New File Share Wizard to configure SMB shares. You can then view or modify the configuration as necessary. On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click Add to start the Add New File Share Wizard. On the File Share page, select CIFS as the File Sharing Protocol. Select the file system, which must be mounted, and enter a name, directory path, and description for the share. Note the following:
• Do not include any of the following special characters in a share name. If the name contains any of these special characters, the share might not be set up properly on all nodes in the cluster.
' & ( [ { $ ` , / \

• Do not include any of the following special characters in the share description. If a description contains any of these special characters, the description might not propagate correctly to all nodes in the cluster. * % + & `

On the Permissions page, specify permissions for users and groups allowed to access the share.

Click Add to open the New User/Group Permission Entry dialog box, where you can configure permissions for a specific user or group. The completed entries appear in the User/Group Entries list on the Permissions page.

On the Client Filtering page, specify IP addresses or ranges that should be allowed or denied access to the share.

NOTE: This feature cannot be used if your network includes packet filters, a NAT gateway, or routers.

Click Add to open the New Client IP Address Entry dialog box, where you can allow or deny access to a specific IP address or a range of addresses. Enter a single IP address, or include a bitmask to specify an entire subnet of IP addresses, such as 10.10.3.2/25. The valid range for the bitmask is 1–32. The completed entry appears in the Client IP Filters list on the Client Filtering page.
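An entry such as 10.10.3.2/25 covers every address whose masked network portion matches. The membership test can be sketched in plain POSIX shell arithmetic; this is illustrative only (the share performs the real filtering), and the addresses shown are examples:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    oldIFS=$IFS; IFS=.
    set -- $1
    IFS=$oldIFS
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Report whether ADDRESS is inside NETWORK/BITS.
in_subnet() {  # usage: in_subnet ADDRESS NETWORK BITS
    addr=$(ip_to_int "$1"); net=$(ip_to_int "$2"); bits=$3
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    if [ $(( addr & mask )) -eq $(( net & mask )) ]; then
        echo yes
    else
        echo no
    fi
}

in_subnet 10.10.3.100 10.10.3.2 25   # prints: yes (same /25 half)
in_subnet 10.10.3.200 10.10.3.2 25   # prints: no (upper /25 half)
```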

On the Advanced Settings page, enable or disable Access Based Enumeration and specify the default create mode for files and directories created in the share. The Access Based Enumeration option allows users to see only the files and folders to which they have access on the file share.

On the Host Servers page, select the servers that will host the share.

Configuring SMB signing

The SMB signing feature specifies whether clients must support SMB signing to access SMB shares. You can apply the setting to all servers or to a specific server. To apply the same setting to all servers, select File Shares from the Navigator and click Settings on the File Shares panel. To apply a setting to a specific server, select that server on the GUI, select CIFS from the lower Navigator, and click Settings. The dialog box is the same for both selection methods.

When configuring SMB signing, note the following:
• SMB2 is always enabled.
• Use the Required check box to specify whether SMB signing (with either SMB1 or SMB2) is required.
• The Disabled check box applies only to SMB1. Use this check box to enable or disable SMB signing with SMB1.

You should also be aware of the following:
• The File Share Settings dialog box does not display whether SMB signing is currently enabled or disabled. Use the following command to view the current setting for SMB signing:
ibrix_cifsconfig -i

• SMB signing must not be required in order to support connections from Mac OS X 10.5 and 10.6 clients.
• It is possible to configure SMB signing differently on individual servers. However, backup SMB servers should have the same settings to ensure that clients can connect after a failover.
• The SMB signing settings specified here are not affected by Windows domain group policy settings when the cluster is joined to a Windows domain.

Configuring SMB signing from the CLI

To configure SMB signing from the command line, use the following command:

ibrix_cifsconfig -t -S SETTINGLIST

You can specify the following values in the SETTINGLIST:

smb signing enabled
smb signing required

Use commas to separate the settings, and enclose the list in quotation marks. For example, the following command sets SMB signing to enabled and required:

ibrix_cifsconfig -t -S "smb signing enabled=1,smb signing required=1"

To disable SMB signing, enter settingname= with no value. For example:

ibrix_cifsconfig -t -S "smb signing enabled=,smb signing required="

IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S command, use the following command to restart the SMB services on all nodes affected by the change:

ibrix_server -s -t cifs -c restart [-h SERVERLIST]

Clients will experience a temporary interruption in service during the restart.
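Because the SETTINGLIST is just a quoted, comma-separated string, a small helper can assemble it from flag values. This is a hypothetical convenience function, not part of the IBRIX CLI:

```shell
# Build the SETTINGLIST for ibrix_cifsconfig -t -S. Pass 1 to set a
# value, or an empty string to clear it (clearing disables the setting).
build_settinglist() {  # usage: build_settinglist ENABLED REQUIRED
    printf '%s\n' "smb signing enabled=$1,smb signing required=$2"
}

build_settinglist 1 1    # prints: smb signing enabled=1,smb signing required=1
build_settinglist "" ""  # prints: smb signing enabled=,smb signing required=
```

A real invocation would then be, for example, ibrix_cifsconfig -t -S "$(build_settinglist 1 1)", followed by the service restart described above.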

Managing SMB shares with the GUI

To view existing SMB shares on the GUI, select File Shares > CIFS from the Navigator. The CIFS Shares panel shows the file system being shared, the hosts (or servers) providing access, the name of the share, the export path, and the options applied to the share.

NOTE: When externally managed appears in the option list for a share, that share is being managed with the Microsoft Management Console interface. The GUI or CLI cannot be used to change the permissions for the share.

On the CIFS Shares panel, click Add or Modify to open the File Shares wizard, where you can create a new share or modify the selected share. Click Delete to remove the selected share. Click CIFS Settings to configure global file share settings; see “Configuring SMB signing” (page 80) for more information. You can also view SMB shares for a specific file system. Select that file system on the GUI, and then select CIFS Shares from the lower Navigator.

Configuring and managing SMB shares with the CLI

Adding, modifying, or deleting shares

Use the ibrix_cifs command to add, modify, or delete shares. For detailed information, see the HP IBRIX 9000 Storage CLI Reference Guide.

NOTE: Be sure to use the ibrix_cifs command located in /bin. The ibrix_cifs command located in /usr/local/bin/init is used internally by IBRIX software and should not be run directly.

Add a share:

ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH [-D SHAREDESCRIPTION] [-S SETTINGLIST] [-A ALLOWCLIENTIPSLIST] [-E DENYCLIENTIPSLIST] [-F FILEMODE] [-M DIRMODE] [-h HOSTLIST]

Use the -A ALLOWCLIENTIPSLIST or -E DENYCLIENTIPSLIST options to list client IP addresses allowed or denied access to the share. Use commas to separate the IP addresses, and enclose the list in quotes. You can include an optional bitmask to specify entire subnets of IP addresses (for example, ibrix_cifs -A "192.186.0.1,102.186.0.2/16"). The default is "", which allows (or denies) all IP addresses.

The -F FILEMODE and -M DIRMODE options specify the default mode for newly created files or directories, in the same manner as the Linux chmod command. The range of values is 0000–0777. The default is 0700.

To see the valid settings for the -S option, use the following command:

ibrix_cifs -L

View share information:

ibrix_cifs -i [-h HOSTLIST]

Modify a share:

ibrix_cifs -m -s SHARENAME [-D SHAREDESCRIPTION] [-S SETTINGLIST] [-A ALLOWCLIENTIPSLIST] [-E DENYCLIENTIPSLIST] [-F FILEMODE] [-M DIRMODE] [-h HOSTLIST]

Delete a share:

ibrix_cifs -d -s SHARENAME [-h HOSTLIST]
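The -F and -M values follow chmod octal semantics. A quick demonstration of the default 0700, which grants the owner full access and everyone else none; the path is an example only:

```shell
# Create a file and apply the default share create mode, 0700.
mkdir -p /tmp/mode_demo
touch /tmp/mode_demo/f
chmod 0700 /tmp/mode_demo/f
stat -c '%A' /tmp/mode_demo/f   # prints: -rwx------
```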

Managing user and group permissions

Use the ibrix_cifsperms command to manage share-level permissions for users and groups.

Add a user or group to a share and assign share-level permissions:

ibrix_cifsperms -a -s SHARENAME -u USERNAME -t TYPE -p PERMISSION [-h HOSTLIST]

For -t TYPE, specify either allow or deny. For -p PERMISSION, specify one of the following:
• fullcontrol
• change
• read

For example, the following command gives everyone read permission on share1:

ibrix_cifsperms -a -s share1 -u Everyone -t allow -p read

Modify share-level permissions for a user or group:

ibrix_cifsperms -m -s SHARENAME -u USERNAME -t TYPE -p PERMISSION [-h HOSTLIST]

Delete share-level permissions for a user or group:

ibrix_cifsperms -d -s SHARENAME [-u USERNAME] [-t TYPE] [-h HOSTLIST]

Display share-level permissions:

ibrix_cifsperms -i -s SHARENAME [-t TYPE] [-h HOSTLIST]

Managing SMB shares with Microsoft Management Console

The Microsoft Management Console (MMC) can be used to add, view, or delete SMB shares. Administrators running MMC must have IBRIX software share management privileges.

NOTE: To use MMC to manage SMB shares, you must be authenticated as a user with share modification permissions.

NOTE: If you will be adding users with the MMC, the primary authentication method must be Active Directory.

NOTE: The permissions for SMB shares managed with the MMC cannot be changed with the IBRIX Management Console GUI or CLI.

Connecting to cluster nodes

When connecting to cluster nodes, use the procedure corresponding to the Windows operating system on your machine.

Windows XP, Windows 2003 R2: Complete the following steps:
1. Open the Start menu, select Run, and specify mmc as the program to open.
2. On the Console Root window, select File > Add/Remove Snap-in.
3. On the Add/Remove Snap-in window, click Add.
4. On the Add Standalone Snap-in window, select Shared Folders and click Add.
5. On the Shared Folders window, select Another computer as the computer to be managed, enter or browse to the computer name, and click Finish.

6. Click Close > OK to exit the dialog boxes.
7. Expand Shared Folders (\\computer_name).
8. Select Shares and manage the shares as needed.

Windows Vista, Windows 2008, Windows 7: Complete the following steps:
1. Open the Start menu and enter mmc in the Start Search box. You can also enter mmc in a DOS cmd window.
2. On the User Account Control window, click Continue.
3. On the Console 1 window, select File > Add/Remove Snap-in.
4. On the Add or Remove Snap-ins window, select Shared Folders and click Add.
5. On the Shared Folders window, select Another computer as the computer to be managed, enter or browse to the computer name, and click Finish.

6. Click OK to exit the Add or Remove Snap-ins window.
7. Expand Shared Folders (\\computer_name).
8. Select Shares and manage the shares as needed.

Saving MMC settings

You can save your MMC settings for use when managing shares on this server in later sessions. Complete these steps:
1. On the MMC, select File > Save As.
2. Enter a name for the file. The name must have the suffix .msc.
3. Select Desktop as the location to save the file, and click Save.
4. Select File > Exit.

Granting share management privileges

Use the following command to grant administrators IBRIX software share management privileges. The users you specify must already exist. Be sure to enclose the user names in square brackets.

ibrix_auth -t -S 'share admins=[domainname\username,domainname\username]'

The following example gives share management privileges to a single user:

ibrix_auth -t -S 'share admins=[domain\user1]'

If you specify multiple administrators, use commas to separate the users. For example:

ibrix_auth -t -S 'share admins=[domain\user1, domain\user2, domain\user3]'
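The bracketed admin list is easy to get wrong by hand, so a small helper can assemble it from individual DOMAIN\user names. This is a hypothetical helper, not an IBRIX command, and the user names shown are placeholders:

```shell
# Join DOMAIN\user names into the 'share admins=[...]' setting string
# expected by ibrix_auth -t -S.
build_share_admins() {
    list=""
    for u in "$@"; do
        if [ -z "$list" ]; then list=$u; else list="$list,$u"; fi
    done
    printf '%s\n' "share admins=[$list]"
}

build_share_admins 'domain\user1' 'domain\user2'
# prints: share admins=[domain\user1,domain\user2]
```

Quoting each name in single quotes keeps the backslashes in the domain names literal.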

Adding SMB shares

SMB shares can be added with the MMC, using the share management plug-in. When adding shares, be aware of the following:
• The share path must include the IBRIX file system name. For example, if the file system is named data, you could specify C:\data\folder1.

NOTE: The Browse button cannot be used to locate the file system.

• The directory to be shared will be created if it does not already exist.
• The permissions on the shared directory will be set to 777. It is not possible to change the permissions on the share.
• Do not include any of the following special characters in a share name. If the name contains any of these special characters, the share might not be set up properly on all nodes in the cluster.
' & ( [ { $ ` , / \

• Do not include any of the following special characters in the share description. If a description contains any of these special characters, the description might not propagate correctly to all nodes in the cluster. * % + & `

• The management console GUI or CLI cannot be used to alter the permissions for shares created or managed with Windows Share Management. The permissions for these shares are marked as “externally managed” on the GUI and CLI.

Open the MMC with the Shared Folders snap-in that you created earlier. On the Select Computer dialog box, enter the IP address of a server that will host the share.

The Computer Management window shows the shares currently available from the server.

To add a new share, select Shares > New Share and run the Create A Shared Folder Wizard. On the Folder Path panel, enter the path to the share, being sure to include the file system name.

When you complete the wizard, the new share appears on the Computer Management window.

Deleting SMB shares

To delete an SMB share, select the share on the Computer Management window, right-click, and select Delete.

Linux static user mapping with Active Directory

Linux static user mapping (also called UID/GID mapping or RFC 2307 support) allows you to use LDAP as a Network Information Service. Linux static user mapping must be enabled when you configure Active Directory for user authentication (see “Configuring authentication for SMB, FTP, and HTTP” (page 58)). If you configure LDAP ID mapping as the secondary authentication service, authentication uses the IDs assigned in AD if they exist. If an ID is not found in an AD entry, authentication looks in LDAP for a user or group of the same name and uses the corresponding ID assigned in LDAP. The primary group and all supplemental groups are still determined by the AD configuration.

You can also assign UIDs, GIDs, and other POSIX attributes, such as the home directory, primary group, and shell, to users and groups in Active Directory. To add static entries to Active Directory, complete these steps:
• Configure Active Directory.
• Assign POSIX attributes to users and groups in Active Directory.

NOTE: Mapping UID 0 and GID 0 to any AD user or group is not compatible with SMB static mapping.

Configuring Active Directory

Your Windows Domain Controller machines must be running Windows Server 2003 R2 or Windows Server 2008 R2. Configure the Active Directory domain as follows:
• Install Identity Management for UNIX.
• Activate the Active Directory Schema MMC snap-in.
• Add the uidNumber and gidNumber attributes to the partial-attribute-set of the AD global catalog.

You can perform these procedures from any domain controller. However, the account used to add attributes to the partial-attribute-set must be a member of the Schema Admins group.

Installing Identity Management for UNIX

To install Identity Management for UNIX on a domain controller running Windows Server 2003 R2, see the following Microsoft TechNet article:

http://technet.microsoft.com/en-us/library/cc778455(WS.10).aspx

To install Identity Management for UNIX on a domain controller running Windows Server 2008 R2, see the following Microsoft TechNet article:

http://technet.microsoft.com/en-us/library/cc731178.aspx

Activating the Active Directory Schema MMC snap-in

Use the Active Directory Schema MMC snap-in to add the attributes. To activate the snap-in, complete the following steps:
1. Click Start, click Run, type mmc, and then click OK.
2. On the MMC Console menu, click Add/Remove Snap-in.
3. Click Add, and then click Active Directory Schema.
4. Click Add, click Close, and then click OK.

Adding uidNumber and gidNumber attributes to the partial-attribute-set

To make modifications using the Active Directory Schema MMC snap-in, complete these steps:
1. Click the Attributes folder in the snap-in.
2. In the right panel, scroll to the desired attribute, right-click the attribute, and then click Properties. Select Replicate this attribute to the Global Catalog, and click OK.

The following dialog box shows the properties for the uidNumber attribute:

The next dialog box shows the properties for the gidNumber attribute.

The following article provides more information about modifying attributes in the Active Directory global catalog:

http://support.microsoft.com/kb/248717

Assigning attributes

To set POSIX attributes for users and groups, start the Active Directory Users and Computers GUI on the Domain Controller. Open the Administrator Properties dialog box, and go to the UNIX Attributes tab. For users, you can set the UID, login shell, home directory, and primary group. For groups, set the GID.

Synchronizing Active Directory 2008 with the NTP server used by the cluster

It is important to synchronize Active Directory with the NTP server used by the IBRIX cluster. Run the following commands on the PDC:

net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:""
w32tm /config /reliable:yes
net start w32time

To check the configuration, run the following command:

w32tm /query /configuration

Consolidating SMB servers with common share names

If your SMB servers previously used the same share names, you can consolidate the servers without changing the share name requested on the client side. For example, you might have three SMB servers, SRV1, SRV2, and SRV3, that each have a share named DATA. SRV3 points to a shared drive that has the same path as \\SRV1\DATA; however, users accessing SRV3 have different permissions on the share. To consolidate the three servers, take these steps:
1. Assign Vhost names SRV1, SRV2, and SRV3.
2. Create virtual interfaces (VIFs) for the IP addresses used by the servers. For example, Vhost SRV1 has VIF 99.10.10.101 and Vhost SRV2 has VIF 99.10.10.102.
3. Map the old share names to new share names. For example, map \\SRV1\DATA to new share srv1-DATA, map \\SRV2\DATA to new share srv2-DATA, and map \\SRV3\DATA to srv3-DATA.

4. Create the new shares on the cluster storage and assign each share the appropriate path. For example, assign srv1-DATA to /srv1/data, and assign srv2-DATA to /srv2/data. Because SRV3 originally pointed to the same share as SRV1, assign the share srv3-DATA the same path as srv1-DATA, but set the permissions differently.
5. Optionally, create a share having the original share name (DATA in our example). Assign a path such as /ERROR/DATA and place a file in it named SHARE_MAP_FAILED. Doing this ensures that if a user configuration error occurs or the map fails, clients will not gain access to the wrong shares. The file name notifies users that their access has failed.

When this configuration is in place, a client request to access share \\srv1\data will be translated to share srv1-DATA at /srv1/data on the file system. Client requests for \\srv3\data will also be translated to /srv1/data, but the clients will have different permissions. Client requests for \\srv2\data will be translated to share srv2-DATA at /srv2/data. Client utilities such as net use will report the requested share name, not the new share name.

Mapping old share names to new share names

Mappings are defined in the /etc/likewise/vhostmap file. Use a text editor to create and update the file. Each line in the file contains a mapping in the following format:

VIF (or VhostName)|oldShareName|newShareName

If you enter a VhostName, it is changed to a VIF internally. The oldShareName is the user-requested share name from the client that needs to be translated into a unique name. This unique name (the newShareName) is used when establishing a mount point for the share. Following are some entries from a vhostmap file:

99.30.8.23|salesd|q1salesd
99.30.8.24|salesd|q2salesd
salesSrv|salesq|q3salesd

When editing the /etc/likewise/vhostmap file, note the following:
• All VIF|oldShareName pairs must be unique.
• The following characters cannot be used in a share name: " / \ | [ ] < > + : ; , ? * =
• Share names are case insensitive and must be unique with respect to case.
• The oldShareName and newShareName do not need to exist when you create the file; however, they must exist for a connection to be established to the share.
• If a client specifies a share name that is not in the file, the share name will not be translated.
• Use care when assigning share names longer than 12 characters. Some clients impose a limit of 12 characters for a share name.
• Verify that the IP addresses specified in the file are valid and that Vhost names can be resolved to an IP address. IP addresses must be in IPv4 format, which limits an address to 15 characters.

IMPORTANT: When you update the vhostmap file, the changes take effect a few minutes after the map is saved. If a client attempts a connection before the changes are in effect, the previous map settings will be used. To avoid any delays, make your changes to the file when the SMB service is down. After creating or updating the vhostmap file, copy the file manually to the other servers in the cluster.
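A vhostmap line can be sanity-checked before the file is copied to the other servers. The following is a hypothetical validation sketch; the IBRIX software performs no such check itself, and the field handling here is simplified:

```shell
# Check one vhostmap entry of the form VIF|oldShareName|newShareName:
# verify the three-field shape and warn when the new share name is
# longer than 12 characters (a limit some clients impose).
check_vhostmap_line() {
    line=$1
    case $line in
        *\|*\|*) : ;;   # three fields present
        *) echo "bad: need VIF|old|new"; return 1 ;;
    esac
    vif=${line%%|*}
    rest=${line#*|}
    old=${rest%%|*}
    new=${rest#*|}
    if [ ${#new} -gt 12 ]; then
        echo "warn: '$new' exceeds 12 characters"
    else
        echo "ok: $vif maps $old -> $new"
    fi
}

check_vhostmap_line '99.30.8.23|salesd|q1salesd'   # prints: ok: ...
check_vhostmap_line '99.30.8.24|salesd'            # prints: bad: ...
```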

SMB clients

SMB clients access shares on the IBRIX software cluster in the same way they access shares on a Windows server.

Viewing quota information

When user or group quotas are set on a file system exported as an SMB share, users accessing the share can see the quota information on the Quotas tab of the Properties dialog box. Users cannot modify quota settings from the client end.

SMB users cannot view directory tree quotas.

Differences in locking behavior

When SMB clients access a share from different servers, as in the IBRIX software environment, the behavior of byte-range locks differs from the standard Windows behavior, where clients access a share from the same server. Be aware of the following:
• Zero-length byte-range locks acquired on one file serving node are not observed on other file serving nodes.
• Byte-range locks acquired on one file serving node are not enforced as mandatory on other file serving nodes.
• If a shared byte-range lock is acquired on a file opened with write-only access on one file serving node, that byte-range lock is not observed on other file serving nodes. ("Write-only access" means the file was opened with GENERIC_WRITE but not GENERIC_READ access.)
• If an exclusive byte-range lock is acquired on a file opened with read-only access on one file serving node, that byte-range lock is not observed on other file serving nodes. ("Read-only access" means the file was opened with GENERIC_READ but not GENERIC_WRITE access.)

SMB shadow copy

Users who have accidentally lost or changed a file can use the SMB shadow copy feature to retrieve or copy the previous version of the file from a file system snapshot. IBRIX software supports SMB shadow copy operations as follows.

Access Control Lists (ACLs)

IBRIX SMB shadow copy behaves in the same manner as Windows shadow copy with respect to ACL restoration. When a user restores a deleted file or folder using SMB shadow copy, the ACLs applied on the individual files or folders are not restored. Instead, the files and folders inherit the permissions from the root of the share or from the parent directory where they were restored. When a user overwrites an existing file or folder by restoring it with SMB shadow copy, the ACLs applied on the individual file or folder are likewise not restored; they remain as they were before the restore.

Restore operations

If a file has been deleted from a directory that has Previous Versions, the user can recover a previous version of the file by performing a Restore of the parent directory. However, the Properties of the restored file will no longer list those Previous Versions. This condition is due to the IBRIX snapshot infrastructure: after a file is deleted, a new file in the same location is a new inode and will not have snapshots until a new snapshot is subsequently created. However, all pre-existing previous versions of the file continue to be available from the Previous Versions of the parent directory.

For example, folder Fold1 contains files f1 and f2. There are two snapshots of the folder, at timestamps T1 and T2, and the Properties of Fold1 show Previous Versions T1 and T2. The Properties of files f1 and f2 also show Previous Versions T1 and T2 as long as these files have never been deleted. If the file f1 is now deleted, you can restore its latest saved version from Previous Version T2 on Fold1. From that point on, the Previous Versions of \Fold1\f1 no longer show timestamps T1 and T2. However, the Previous Versions of \Fold1 continue to show T1 and T2, and the T1 and T2 versions of file f1 continue to be available from the folder.

Windows Clients Behavior

Users must have full access on files and folders to restore them with SMB shadow copy. If a user does not have adequate permission, an error appears and the user is prompted to skip that file or folder.

After the user skips the file or folder, the restore operation may or may not continue, depending on the Windows client being used. For Windows Vista, the restore operation continues by skipping the folder or file. For other Windows clients (Windows 2003, XP, and 2008), the operation stops abruptly or gives an error message. Testing has shown that Windows Vista is an ideal client for SMB shadow copy support. IBRIX software does not have any control over the behavior of other clients.

NOTE: HP recommends that the share root is not at the same level as the file system root, and is instead a subdirectory of the file system root. This configuration reduces access and other permissions-related issues, as there are many system files (such as lost+found, quota subsystem files, and so on) at the root of the file system.

SMB shadow copy restore during node failover

If a node fails over while an SMB shadow copy restore is in progress, the user may see a disruption in the restore operation.

After the failover is complete, the user must skip the file that could not be accessed. The restore operation then proceeds. The file will not be restored and can be manually copied later, or the user can cancel the restore operation and then restart it.

Permissions in a cross-protocol SMB environment

The manner in which the SMB server handles permissions affects the use of files by both Windows and Linux clients. Following are some considerations.

How the SMB server handles UIDs and GIDs

The SMB server provides a true Windows experience for Windows users. Consequently, it must be closely aligned with Windows in the way it handles permissions and ownership on files. Windows uses ACLs to control permissions on files. The SMB server puts a bit-for-bit copy of the ACLs on the Linux server (in the files on the IBRIX file system) and validates file access through these permissions. ACLs are tied to Security Identifiers (SIDs) that uniquely identify users in the Windows environment; the SIDs are also stored on the file in the Linux server as part of the ACLs. SIDs are obtained from the authenticating authority for the Windows client (in IBRIX software, an Active Directory server). However, Linux does not understand Windows-style SIDs; instead, it has its own permissions control scheme based on UID/GID and permission bits (mode bits, sticky bits). Because this is the native permissions scheme for Linux, the SMB server must use it to access files on behalf of a Windows client; it does this by mapping the SID to a UID/GID and impersonating that UID/GID when accessing files on the Linux file system.

From a Windows standpoint, all of the security for the IBRIX software-resident files is self-consistent; Windows clients understand ACLs and SIDs, and understand how they work together to control

access to and security for Windows clients. The SMB server maintains the ACLs as requested by the Windows clients, and emulates the inheritance of ACLs identically to the way Windows servers maintain inheritance. This creates a true Windows experience around accessing files from a Windows client.

This mechanism works well for pure Windows environments; however, Linux applications do not understand any permissions mechanisms other than their own. Note that a Linux application can also use POSIX ACLs to control access to a file; POSIX ACLs are honored by the SMB server, but are not inherited or propagated. The SMB server also does not map POSIX ACLs to be compatible with Windows ACLs on a file.

These permission mechanisms have some ramifications for setting up shares, and for cross-protocol access to files on an IBRIX system. The details of these ramifications follow.

Permissions, UIDs/GIDs, and ACLs

The SMB server does not attempt to maintain two permission/access schemes on the same file. The SMB server is concerned with maintaining ACLs, so it performs ACL inheritance and honors ACLs. The UIDs/GIDs and permission bits for files in a directory tree are peripheral to this activity, and are used only as much as necessary to obtain access to files on behalf of a Windows client. The various cases the SMB server can encounter while accessing files and directories, and what it does with UIDs/GIDs and permission bits in each case, are described in the following sections.

Pre-existing directories and files

A pre-existing Linux directory does not have ACLs associated with it. In this case, the SMB server uses the permission bits and the mapped UID/GID of the SMB user to determine whether it has access to the directory contents. If the directory is written to by the SMB server, the ACLs inherited from the directory tree above that directory (if there are any) are written into the directory so that future SMB access has ACLs to guide it.

Pre-existing files are treated like pre-existing directories. The SMB server uses the UID/GID of the SMB user and the permission bits to determine access to the file. If the file is written to, the ACLs inherited from the file's containing directory are applied to the file using the standard Windows ACL inheritance rules.

Working with pre-existing files and directories

Pre-existing file treatment has ramifications for cross-protocol environments. If, for example, files are deposited into a directory tree using NFS and then accessed using SMB clients, the directory tree will not have ACLs associated with it, and access to the files will be moderated by the NFS UID/GID and permission bits. If those files are then modified by an SMB client, they will take on the UID/GID of the SMB client (the new owner), and the NFS clients may lose access to those files.

New directories and files

New directories created in a tree by a Windows client inherit the ACLs of the parent directory. The directories are created with the UID/GID of the Windows user (the UID/GID that the SID for the Windows user is mapped to), and they have a Linux permission bit mask of 700. To Linux applications (which do not understand Windows ACLs), this means that the owner has read, write, and execute permissions, while the group and everyone else have no access.

New files are handled the same way as directories. The files inherit the ACLs of the parent directory according to the Windows rules for ACL inheritance, they are created with the UID/GID of the Windows user as mapped from the SID, and they are assigned a permission mask of 700.
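To Linux tools, a 700 mask looks as follows; a quick local illustration (the path is hypothetical and simply stands in for a file-system object the SMB server created):

```shell
# Inspect what a 700 permission mask means to Linux applications.
# /tmp/demo_smb_dir is a hypothetical stand-in, not an actual IBRIX share.
d=/tmp/demo_smb_dir
mkdir -p "$d"
chmod 700 "$d"
stat -c '%a %A' "$d"   # owner has rwx; group and others have no access
```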

Working with new files and directories

The inheritance rules of Windows assume that all directories are created on a Windows machine, where they inherit ACLs from their parent; the top level of a directory tree (the root of the file system) is assigned ACLs by the file system formatting process from the defaults for the system. This process is not in place on file serving nodes. Instead, when you create a share on a node, the share does not have any inherited ACLs from the root of the file system in which it is created. This leads to strange behavior when a Windows client attempts to use permissions to control access to a file in such a directory. The usual CREATOR/OWNER and EVERYBODY ACLs (which are part of the typical Windows inheritance ACL set) do not exist on the containing directory for the share, and are not inherited downward into the share directory tree.

For true Windows-like behavior, the creator of a share must access the root of the share and set the desired ACLs on it manually (using Windows Explorer or a command-line tool such as ICACLS). This process is somewhat unnatural for Linux administrators, but should be fairly normal for Windows administrators. Generally, the administrator will need to create a CREATOR/OWNER ACL that is inheritable on the share directory, and then create an inheritable ACL that controls default access to the files in the directory tree.

Changing the way SMB inherits permissions on files accessed from Linux applications

To prevent the SMB server from modifying file permissions on directory trees that a user wants to access from Linux applications (that is, to keep permissions other than 700 on files in the directory tree), a user can set the setgid bit in the Linux permissions mask on the directory tree.
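For example, the setgid bit can be set with standard Linux commands; a sketch using a hypothetical local directory in place of the top-level share directory:

```shell
# Set group permissions plus the setgid bit on a directory tree.
# /tmp/demo_share is a hypothetical stand-in for the share's top-level directory.
share=/tmp/demo_share
mkdir -p "$share"
chmod 2770 "$share"       # 770 group access, plus the setgid bit (the leading 2)
stat -c '%a' "$share"     # the mode now reports as 2770
touch "$share/newfile"    # new files keep the directory's group
```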
When the setgid bit is set, the SMB server honors it, and any new files in the directory inherit the parent directory's permission bits and group. This maintains group access for new files created in that directory tree until setgid is turned off in the tree. That is, Linux-style permissions semantics are kept on the files in that tree, allowing SMB users to modify files in the directory while NFS users maintain their access through their normal group permissions.

For example, if a user wants all files in a particular tree to be accessible by a set of Linux users (say, through NFS), the user should set the setgid bit (through local Linux mechanisms) on the top-level directory for a share, in addition to setting the desired group permissions (for example, 770). Once that is done, new files in the directory are accessible to the directory's group, and the permission bits on files in that directory tree are not modified by the SMB server. Files that existed in the directory before the setgid bit was set are not affected by the change in the containing directory; the user must manually set the group and permissions on files that already existed in the directory tree. This capability can be used to facilitate cross-protocol sharing of files.

Note that this does not affect the permissions inheritance and settings on the SMB client side. Using this mechanism, a Windows user can set files to be inaccessible to the SMB users of the directory tree while opening them up to the Linux users of the directory tree.

Troubleshooting SMB

Changes to user permissions do not take effect immediately

The SMB implementation maintains an authentication cache with a four-hour lifetime. If a user is authenticated to a share, and the user's permissions are then changed, the old permissions remain in effect until the cache entry expires, four hours after the authentication.
The next time the user is authenticated, the new, correct permissions are read and written to the cache for the next four hours. This situation is not common. However, to avoid it, use the following guidelines when changing user permissions:
• After a user is authenticated to a share, wait four hours before modifying the user's permissions.
• Conversely, it is safe to modify the permissions of a user who has not been authenticated in the previous four hours.
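The expiry rule amounts to simple timestamp arithmetic; a minimal illustrative model (the timestamps are hypothetical, and this is not IBRIX code):

```shell
# Model of the four-hour (14400-second) authentication cache window.
TTL=14400
auth_time=1000000              # hypothetical moment the user authenticated
now=$(( auth_time + 10000 ))   # 10000 s later: still inside the window
if [ $(( now - auth_time )) -lt "$TTL" ]; then
  echo "cached (old) permissions still in effect"
else
  echo "cache expired; permissions re-read"
fi
```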

Robocopy errors occur during node failover or failback

If Robocopy is in use on a client while a file serving node is failed over or failed back, the application repeatedly retries to access the file and reports the error "The process cannot access the file because it is being used by another process." These errors occur for 15 to 20 minutes. The client's copy operation then continues without error if the retry timeout has not expired. To work around this situation, take one of these steps:
• Stop and restart processes on the affected file serving node:
# /opt/likewise/bin/lwsm stop lwreg && /etc/init.d/lwsmd stop
# /etc/init.d/lwsmd start && /opt/likewise/bin/lwsm start srvsvc

• Power down the file serving node before failing it over, and perform failback operations only during off hours.

The following xcopy and robocopy options are recommended for copying files from a client to a highly available SMB server:
xcopy: include the option /C; in general, /S /I /Y /C are good baseline options.
robocopy: include the option /ZB; in general, /S /E /COPYALL /ZB are good baseline options.

Copy operations interrupted by node failback

If a node failback occurs while xcopy or robocopy is copying files to an SMB share, the copy operation might be interrupted and need to be restarted.

Active Directory users cannot access SMB shares

If any AD user is set to UID 0 in Active Directory, you will not be able to connect to SMB shares, and errors will be reported. Be sure to assign a UID other than 0 to your AD users.

UID for SMB Guest account conflicts with another user

If the UID for the Guest account conflicts with another user, you can delete the Guest account and recreate it with another UID. Use the following command to delete the Guest account, and enter yes when you are prompted to confirm the operation:
/opt/likewise/bin/lw-del-user Guest
Recreate the Guest account, specifying a new UID:
/opt/likewise/bin/lw-add-user --force --uid UID Guest
To have the system generate the UID, omit the --uid option.

8 Using FTP

The FTP feature allows you to create FTP file shares for data stored on the cluster. Clients access the FTP shares using standard FTP and FTPS protocol services.

IMPORTANT: Before configuring FTP, select an authentication method (either Local Users or Active Directory). See "Configuring authentication for SMB, FTP, and HTTP" (page 58) for more information.

An FTP configuration consists of one or more configuration profiles and one or more FTP shares. A configuration profile defines global FTP parameters and specifies the file serving nodes on which the parameters are applied. The vsftpd service starts on these nodes when the cluster services start. Only one configuration profile can be in effect on a particular node.

An FTP share defines parameters such as access permissions and lists the file system to be accessed through the share. Each share is associated with a specific configuration profile. The share parameters are added to the profile's global parameters on the file serving nodes specified in the configuration profile. You can create multiple shares having the same physical path, but with different sets of properties, and then assign users to the appropriate share. Be sure to use a different IP address or port for each share.

You can configure and manage FTP from the GUI or CLI.

Best practices for configuring FTP

When configuring FTP, follow these best practices:
• If an SSL certificate will be required for FTPS access, add the SSL certificate to the cluster before creating the shares. See "Managing SSL certificates" (page 123) for information about creating certificates in the format required by IBRIX software and then adding them to the cluster.
• When configuring a share on a file system, the file system must be mounted.
• If the directory path to the share includes a subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute permissions to it. (IBRIX software does not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the share path.)
• For High Availability, when specifying IP addresses for accessing a share, use IP addresses for VIFs having VIF backups. See the administrator guide for your system for information about creating VIFs.
• The allowed ports are 21 (FTP) and 990 (FTPS).
• Uploads and downloads to an anonymous share (anonymous=true) can only be done by nonusers or by the ftp user, which is the default user of an anonymous share. For information about commands for anonymous shares, see "FTP and FTPS commands for anonymous shares" (page 105).

Managing FTP from the GUI

Use the Add New File Share Wizard to configure FTP. You can then view or modify the configuration as necessary.

Configuring FTP

On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click Add to start the Add New File Share Wizard.

On the File Share page, select FTP as the File Sharing Protocol. Select the file system, which must be mounted, and enter the default directory path for the share. If the directory path includes a subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute permissions to it.

NOTE: IBRIX software does not create the subdirectory if it does not exist, and for anonymous shares only, adds a /pub/ directory to the share path instead. All files uploaded through the anonymous user will then be placed in that directory. The /pub/ directory is not created for a non-anonymous share.

On the Config Profile page, select an existing configuration profile or create a new profile, specifying a name and defining the appropriate parameters.

On the Host Servers page, select the servers that will host the configuration profile.

On the Settings page, configure the FTP parameters that apply to the share. The parameters are added to the file serving nodes hosting the configuration profile. Also enter the IP addresses and ports that clients will use to access the share. For High Availability, specify the IP address of a VIF having a VIF backup.

NOTE: The allowed ports are 21 (FTP) and 990 (FTPS).

NOTE: If you need to allow NAT connections to the share, use the Modify FTP Share dialog box after the share is created.

On the Users page, specify the users to be given access to the share.

IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient access permissions at the file system level for the directories exposed as shares.

To define permissions for a user, click Add to open the Add User to Share dialog box.

Managing the FTP configuration Select File Shares > FTP from the Navigator to display the current FTP configuration. The FTP Config Profiles panel lists the profiles that have been created. The Shares panel shows the FTP shares associated with the selected profile.

Use the buttons on the panels to modify or delete the selected configuration profile or share. You can also add another FTP share to the selected configuration profile. Use the Modify FTP Share dialog box if you need to allow NAT connections on the share.

Managing FTP from the CLI

FTP is managed with the ibrix_ftpconfig and ibrix_ftpshare commands. For detailed information, see the HP IBRIX 9000 Storage CLI Reference Guide.

Configuring FTP

To configure FTP, first add a configuration profile, and then add an FTP share.

Add a configuration profile:
ibrix_ftpconfig -a PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as "passive_enable=TRUE,maxclients=200". To see a list of available settings for the profile, use the following command:
ibrix_ftpconfig -L

Add an FTP share:
ibrix_ftpshare -a SHARENAME -c PROFILENAME -f FSNAME -p dirpath -I IP-Address:Port [-u USERLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as "browseable=true,readonly=true". For the -I option, use a semicolon to separate the IP address:port settings and enclose the settings in quotation marks, such as "ip1:port1;ip2:port2;...". To list the available settings for the share, use the following command:
ibrix_ftpshare -L

Managing the FTP configuration

Use the following commands to view, modify, or delete the FTP configuration. In the commands, use -v 1 to display detailed information.

View configuration profiles:
ibrix_ftpconfig -i -h HOSTLIST [-v level]

Modify a configuration profile:
ibrix_ftpconfig -m PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]

Delete a configuration profile:
ibrix_ftpconfig -d PROFILENAME

View an FTP share:
ibrix_ftpshare -i SHARENAME -c PROFILENAME [-v level]

List FTP shares associated with a specific profile:
ibrix_ftpshare -l -c PROFILENAME [-v level]

List FTP shares associated with a specific file system:
ibrix_ftpshare -l -f FSNAME [-v level]

Modify an FTP share:
ibrix_ftpshare -m SHARENAME -c PROFILENAME [-f FSNAME -p dirpath] -I IP-Address:Port [-u USERLIST] [-S SETTINGLIST]

Delete an FTP share:
ibrix_ftpshare -d SHARENAME -c PROFILENAME

The vsftpd service

When the cluster services are started on a file serving node, the vsftpd service starts automatically if the node is included in a configuration profile. Similarly, when the cluster services are stopped, the vsftpd service also stops. If necessary, use the Linux command ps -ef | grep vsftpd to determine whether the service is running. If you do not want vsftpd to run on a particular node, remove the node from the configuration profile.

IMPORTANT: For FTP share access to work properly, the vsftpd service must be started by IBRIX software. Ensure that the chkconfig setting for vsftpd is OFF (chkconfig vsftpd off).

Starting or stopping the FTP service manually

Start the FTP service:
/usr/local/ibrix/ftpd/etc/vsftpd start /usr/local/ibrix/ftpd/hpconf/
Stop the FTP service:
/usr/local/ibrix/ftpd/etc/vsftpd stop /usr/local/ibrix/ftpd/hpconf/
Restart the FTP service:
/usr/local/ibrix/ftpd/etc/vsftpd restart /usr/local/ibrix/ftpd/hpconf/

NOTE: When the FTP configuration is changed with the GUI or CLI, the FTP daemon is restarted automatically.

Accessing shares

Clients can access an FTP share by specifying a URL in their browser (Internet Explorer or Mozilla Firefox). In the following URLs, IP_address:port is the IP (or virtual IP) address and port configured for the share.
• For a share configured with an IP-based virtual host and with the anonymous parameter set to true, use the following URL:
ftp://IP_address:port/

• For a share configured with a userlist and having the anonymous parameter set to false, use the following URL:
ftp://USER@IP_address:port/

NOTE: When a file is uploaded into an FTP share, the file is owned by the user who uploaded it. If a user uploads a file to an FTP share and specifies a subdirectory that does not already exist, the subdirectory is not created automatically. Instead, the user must explicitly use the mkdir ftp command to create the subdirectory. The permissions on the new directory are set to 755. If the anonymous user created the directory, it is owned by ftp:ftp. If a non-anonymous user created the directory, it is owned by user:group.
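For reference, 755 is also the mode that Linux's common default umask of 022 yields for a new directory; an illustrative check using a hypothetical local path (this is not the FTP server's code path):

```shell
# With umask 022, a newly created directory gets mode 755:
# owner rwx, group r-x, others r-x.
umask 022
mkdir -p /tmp/demo_ftp_subdir
stat -c '%a' /tmp/demo_ftp_subdir
```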

You can also use curl commands to access an FTP share. (The default SSL port is 990.) You can access the shares as follows:
• As an anonymous share. See "FTP and FTPS commands for anonymous shares" (page 105).
• As a non-anonymous share. See "FTP and FTPS commands for non-anonymous shares" (page 106).
• From any Fusion Manager that has FTP clients. See "FTP and FTPS commands for Fusion Manager" (page 107).

FTP and FTPS commands for anonymous shares

This section provides FTP and FTPS commands for anonymous shares. All commands should be entered on one line. In the commands, FILE is the file to upload or download.

Table 1 Upload a file by using the FTP protocol for anonymous shares
• Files can be uploaded by any user:
curl -T FILE ftp://IP_address/pub/
• You must provide the default user name and password ("ftp" for the user name and "ftp" for the password):
curl -T FILE ftp://IP_address/pub/ -u ftp:ftp

Table 2 Upload a file by using the FTPS protocol for anonymous shares
• You do not need to specify the user name and password:
curl --ftp-ssl-reqd --cacert CERTFILE -T FILE ftp://IP_address:990/pub/
• You must provide the default user name and password:
curl --ftp-ssl-reqd --cacert CERTFILE -T FILE ftp://IP_address:990/pub/ -u ftp:ftp

Table 3 Download a file by using the FTP protocol
• You do not need to specify the user name and password:
curl ftp://IP_address/pub/server.pem -o FILE
• You must provide the default user name and password ("ftp" for the user name and "ftp" for the password):
curl ftp://IP_address/pub/server.pem -o FILE -u ftp:ftp

Table 4 Download a file by using the FTPS protocol
• You do not need to specify the user name and password:
curl --ftp-ssl-reqd --cacert CERTFILE ftp://IP_address:990/pub/FILE -o FILE
• You must provide the default user name and password:
curl --ftp-ssl-reqd --cacert CERTFILE ftp://IP_address:990/pub/FILE -o FILE -u ftp:ftp

The following example shows a web browser accessing an anonymous share.

FTP and FTPS commands for non-anonymous shares

This section provides FTP and FTPS commands for non-anonymous shares. All commands should be entered on one line. In the commands, FILE is the file to upload or download.

Table 5 Upload a file by using the FTP protocol for a domain user
• You do not need to specify the domain:
curl -T FILE ftp://IP_address/ -u USER:PASSWORD
• You must specify the domain:
curl -T FILE ftp://IP_address/ -u DOMAIN\\USER:PASSWORD

Table 6 Upload a file by using the FTPS protocol for a local user
• You must supply the user name and password but do not need to specify the domain:
curl --ftp-ssl-reqd --cacert CERTFILE -T FILE ftp://IP_address:990/pub/ -u USER:PASSWORD
• You must specify the domain, such as for an Active Directory user:
curl --ftp-ssl-reqd --cacert CERTFILE -T FILE ftp://IP_address/ -u DOMAIN\\USER:PASSWORD

Table 7 Download a file by using the FTP protocol for a domain user
• You do not need to specify the domain:
curl ftp://IP_address/FILE -o FILE -u USER:PASSWORD
• You must specify the domain, such as for an Active Directory user:
curl ftp://IP_address/FILE -o FILE -u DOMAIN\\USER:PASSWORD

Table 8 Download a file by using the FTPS protocol for a local user
• You do not need to specify the domain:
curl --ftp-ssl-reqd --cacert CERTFILE ftp://IP_address:990/pub/FILE -o FILE -u USER:PASSWORD
• You must specify the domain:
curl --ftp-ssl-reqd --cacert CERTFILE ftp://IP_address:990/pub/FILE -o FILE -u DOMAIN\\USER:PASSWORD

FTP and FTPS commands for Fusion Manager

Shares can be accessed from any Fusion Manager that has FTP clients:
ftp IP_address
For FTPS, use the following command from the active Fusion Manager:
lftp -u USER,PASSWORD -p PORT -e 'set ftp:ssl-force true' IP_address

9 Using HTTP

The HTTP feature allows you to create HTTP file shares for data stored on the cluster. Clients access the HTTP shares using standard HTTP and HTTPS protocol services.

IMPORTANT: Before configuring HTTP, select an authentication method (either Local Users or Active Directory). See "Configuring authentication for SMB, FTP, and HTTP" (page 58) for more information.

The HTTP configuration consists of a configuration profile, a virtual host, and an HTTP share. A profile defines global HTTP parameters that apply to all shares associated with the profile. The virtual host identifies the IP addresses and ports that clients will use to access shares associated with the profile. A share defines parameters such as access permissions and lists the file system to be accessed through the share.

HTTP is administered from the GUI or CLI. On the GUI, select HTTP from the File Shares list in the Navigator. The HTTP Config Profiles panel lists the existing configuration profiles and the virtual hosts configured on the selected profile.

HTTP share types

IBRIX software provides two types of HTTP shares:
• Standard HTTP shares. These shares are used to access file system data.
• HTTP-StoreAll REST API (also known as IBRIX Object API) shares. The StoreAll REST API provides programmatic access to user-stored files and their metadata. The metadata is stored in the HP StoreAll Express Query database in the IBRIX cluster, which provides fast query access to metadata without scanning the file system. See "StoreAll REST API" (page 170) for more information.

See the following section for information on how to create HTTP shares and HTTP-StoreAll REST API shares.

Process checklist for creating HTTP shares

Use the following checklist for creating HTTP shares.

NOTE: Some of the steps in the following checklist apply only to HTTP-StoreAll REST API shares. If you are creating standard HTTP shares, you can skip steps 1 through 3.

Table 9 Process checklist for creating HTTP shares
1. (REST API shares only) Make sure the file system is mounted. See "Creating and mounting file systems" (page 13).
2. (REST API shares only) Enable data retention on the mounted file system.¹ See "Enabling file systems for data retention" (page 145).
3. (REST API shares only) Enable Express Query on the mounted file system. See "Using the New Filesystem Wizard" (page 13).
4. (All HTTP share types) Create or select an existing HTTP config profile through the GUI or through the CLI (ibrix_httpconfig). See "HTTP share types" (page 108).
5. (All HTTP share types) Create or select an existing HTTP Vhost through the GUI or through the CLI (ibrix_httpvhost). See the HP IBRIX 9000 Storage CLI Reference Guide.
6. (All HTTP share types) Create the HTTP share through the GUI or through the CLI (ibrix_httpshare). See the HP IBRIX 9000 Storage CLI Reference Guide.
¹ Enabling data retention is also possible at the time the file system is created.

Best practices for configuring HTTP

When configuring HTTP, follow these best practices:
• If an SSL certificate will be required for HTTPS access, add the SSL certificate to the cluster before creating the shares. See "Managing SSL certificates" (page 123) for information about creating certificates in the format required by IBRIX software and then adding them to the cluster.
• When configuring a share on a file system, the file system must be mounted.
• If the directory path to the share includes a subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute permissions to it. (IBRIX software does not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the share path.)
• Ensure that all users who are given read or write access to HTTP shares have sufficient access permissions at the file system level for the directories exposed as shares.
• For High Availability, when specifying IP addresses for accessing a share, use IP addresses for VIFs having VIF backups. See the administrator guide for your system for information about creating VIFs.

Managing HTTP from the GUI

Configuring HTTP shares

Use the Add New File Share Wizard to configure HTTP. You can then view or modify the configuration as necessary.

On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click Add to start the Add New File Share Wizard. On the File Share page, select HTTP or HTTP-IBRIX Object API as the File Sharing Protocol. Select the file system, which must be mounted, and enter a share name and the default directory path for the share.

Select an existing profile or configure a new profile on the Config Profile dialog box, specifying a name and the appropriate parameters for the profile.

The Host Servers dialog box displays differently depending on whether you selected an existing profile or are creating a new one. If you selected the option Create a new HTTP Profile, you are prompted to select the file serving nodes on which the HTTP service will be active. Only one configuration profile can be in effect on a particular server.

If you selected an existing profile on the Config Profile dialog box, you are shown the hosts defined for that profile, as shown in the following figure.

The Virtual Host dialog box displays differently depending on whether you selected an existing profile or are creating a new one. If you are creating a new profile, the Virtual Host dialog box prompts you to enter additional information, as shown in the following figure. Enter a name for the virtual host and specify an SSL certificate and domain name if used. Also add one or more IP address:port pairs for the virtual host. For High Availability, specify a VIF having a VIF backup.

If you selected an existing profile, the Virtual Host dialog box prompts you to select a pre-existing Vhost or create a new HTTP Vhost.

If you already have Vhosts defined, you can select an existing Vhost from the list.

On the Settings page, set the appropriate parameters for the share. Note the following:
• When specifying the URL Path, do not include http:// or any variation of it in the URL path. For example, /reports/ is a valid URL path. Do not specify just a slash (/) as a URL path; it must contain a text string. The beginning and ending slashes of the path are optional. For example, /reports/, reports, and /reports are valid entries and will be stored as /reports/. For REST (Object) API shares, do not define a URL path of more than one directory level, such as reports/sales; however, your single-directory URL path can correspond to any arbitrarily deep directory path on the IBRIX file system.
• The WebDAV option is greyed out for HTTP-StoreAll REST API shares, and it is always selected because every StoreAll REST API share is also a WebDAV-enabled share.
• When the WebDAV feature is enabled for a standard HTTP share, the share becomes a readable and writable medium with locking capability. The primary user can make edits, while other users can only view the resource in read-only mode. The primary user must unlock the resource before another user can make changes.
• Set the Anonymous field to false only if you want to restrict access to specific users.
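The slash handling described above amounts to a simple normalization; a sketch of the rule (the normalize function is hypothetical, not an IBRIX utility):

```shell
# Sketch of the URL-path normalization: leading and trailing slashes are
# optional, and the stored form is /name/.
normalize() {
  local p=${1#/}   # drop a leading slash if present
  p=${p%/}         # drop a trailing slash if present
  printf '/%s/' "$p"
}
normalize reports     # prints /reports/
echo
normalize /reports/   # prints /reports/
```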

On the Users page, specify the users to be given access to the share.

IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient access permissions at the file system level for the directories exposed as shares.

To allow specific users read access, write access, or both, click Add. On the Add Users to Share dialog box, assign the appropriate permissions to the user. When you complete the dialog box, the user is added to the list on the Users page.

The Summary panel presents an overview of the HTTP configuration. You can go back and modify any part of the configuration if necessary. When the wizard is complete, users can access the share from a browser. For example, if you configured the share with the anonymous user, specified 99.226.50.92 as the IP address on the Create Vhost dialog box, and specified /reports/ as the URL path on the Add HTTP Share dialog box, users can access the share using the following URL: http://99.226.50.92/reports/

The users will see an index of the share (if the browsable property of the share is set to true), and can open and save files. For more information about accessing shares and uploading files, see "Accessing shares" (page 119). For StoreAll REST API shares, you can perform Express Query operations, such as querying and modifying file properties, as described in "Express Query" (page 162).

Managing the HTTP configuration

Select File Shares > HTTP from the Navigator to display the current HTTP configuration. The HTTP Config Profiles panel lists the profiles that have been created. The Vhosts panel shows the virtual hosts associated with the selected profile.

Use the buttons on the panels to modify or delete the selected configuration profile or virtual host. To view HTTP shares on the GUI, select the appropriate profile on the HTTP Config Profiles top panel, and then select the appropriate virtual host from the lower navigator. The Shares bottom panel shows the shares configured on that virtual host. Click Add Share to add another share to the virtual host. For example, you could create multiple shares having the same physical path, but with different sets of properties, and then assign users to the appropriate share. You can also have any number of REST API and regular HTTP shares attached to the same Vhost.

Tuning the socket read block size and file write block size

By default, the socket read block size and file write block size used by Apache are set to 8192 bytes. If necessary, you can adjust the values with the ibrix_httpconfig command. The values must be between 8 KB and 2 GB.

ibrix_httpconfig -a profile1 -h node1,node2 -S "wblocksize=SIZE,rblocksize=SIZE"
You can also set the values on the Modify HTTP Profile dialog box:

Managing HTTP from the CLI

On the command line, HTTP is managed with the ibrix_httpconfig, ibrix_httpvhost, and ibrix_httpshare commands. The ibrix_httpshare command is also used for creating a StoreAll REST API-enabled HTTP share. For detailed information, see the HP IBRIX 9000 Storage CLI Reference Guide.

Configuring HTTP

Add a configuration profile:
ibrix_httpconfig -a PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as "keepalive=true,maxclients=200,...". To see a list of available settings for the profile, use ibrix_httpconfig -L.

Add a virtual host:
ibrix_httpvhost -a VHOSTNAME -c PROFILENAME -I IP-Address:Port [-S SETTINGLIST]

Add an HTTP share:
ibrix_httpshare -a SHARENAME -c PROFILENAME -t VHOSTNAME -f FSNAME -p dirpath -P urlpath [-u USERLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation marks, such as "davmethods=true,browseable=true,readonly=true".

For example, to create a new HTTP share and enable the WebDAV property on that share:
# ibrix_httpshare -a share3 -c cprofile1 -t dav1vhost1 -f ifs1 -p /ifs1/dir1 -P url3 -S "davmethods=true"

To see all of the valid settings for an HTTP share, use the following command:
ibrix_httpshare -L

Managing the HTTP configuration

View a configuration profile:
ibrix_httpconfig -i PROFILENAME [-v level]
Modify a configuration profile:
ibrix_httpconfig -m PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
Delete a configuration profile:
ibrix_httpconfig -d PROFILENAME
View a virtual host:
ibrix_httpvhost -i VHOSTNAME -c PROFILENAME [-v level]
Modify a virtual host:
ibrix_httpvhost -m VHOSTNAME -c PROFILENAME -I IP-Address:Port [-S SETTINGLIST]
Delete a virtual host:
ibrix_httpvhost -d VHOSTNAME -c PROFILENAME
View an HTTP share:
ibrix_httpshare -i SHARENAME -c PROFILENAME -t VHOSTNAME [-v level]
Modify an HTTP share:
ibrix_httpshare -m SHARENAME -c PROFILENAME -t VHOSTNAME [-f FSNAME -p dirpath] [-P urlpath] [-u USERLIST] [-S SETTINGLIST]
The following example modifies an HTTP share, enabling WebDAV:
# ibrix_httpshare -m share1 -c cprofile1 -t dav1vhost1 -S "davmethods=true"
Delete an HTTP share:
ibrix_httpshare -d SHARENAME -c PROFILENAME -t VHOSTNAME

Starting or stopping the HTTP service manually

Start the HTTP service:
/usr/local/ibrix/httpd/bin/apachectl -k start -f /usr/local/ibrix/httpd/conf/httpd.conf
Stop the HTTP service:
/usr/local/ibrix/httpd/bin/apachectl -k stop -f /usr/local/ibrix/httpd/conf/httpd.conf
Restart the HTTP service:
/usr/local/ibrix/httpd/bin/apachectl -k restart -f /usr/local/ibrix/httpd/conf/httpd.conf

NOTE: When the HTTP configuration is changed with the GUI or CLI, the HTTP daemon is restarted automatically.

Accessing shares

Clients access an HTTP share by specifying a URL in their browser (Internet Explorer or Mozilla Firefox). In the following URLs, IP_address:port is the IP address (or virtual IP address) and port configured for the share.
• For a share configured with an IP-based virtual host and the anonymous parameter set to true, use the following URL:
http://IP_address:port/urlpath/

• For a share configured with a userlist and the anonymous parameter set to false, use the following URL:
http://IP_address:port/urlpath/
Enter your user name and password when prompted.

NOTE: When a file is uploaded into an HTTP share, the file is owned by the user who uploaded it. If a user uploads a file to an HTTP share and specifies a subdirectory that does not already exist, the subdirectory is created. For example, you could have a share mapped to the directory /ifs/http/ and using the URL path http_url. A user could upload a file into the share:
curl -T file http://IP_address:port/http_url/new_dir/file
If the directory new_dir does not exist under http_url, the HTTP service automatically creates the directory /ifs/http/new_dir/ and sets the permissions to 777. If the anonymous user performed the upload, the new_dir directory is owned by daemon:daemon. If a non-anonymous user performed the upload, the new_dir directory is owned by user:group.

You can also use curl commands to access an HTTP share. For anonymous users:
• Upload a file using the HTTP protocol:
curl -T filename http://IP_address:port/urlpath/

• Upload a file using the HTTPS protocol:
curl --cacert cert_file -T filename https://IP_address:port/urlpath/

• Download a file using the HTTP protocol:
curl http://IP_address:port/urlpath/filename -o download_path/filename

• Download a file using the HTTPS protocol:
curl --cacert cert_file https://IP_address:port/urlpath/filename -o download_path/filename
For Active Directory users (specify the user as in this example: mycompany.com\\User1):
• Upload a file using the HTTP protocol:
curl -T filename -u username http://IP_address:port/urlpath/

• Upload a file using the HTTPS protocol:
curl --cacert cert_file -T filename -u username https://IP_address:port/urlpath/

• Download a file using the HTTP protocol:
curl -u username http://IP_address:port/urlpath/filename -o download_path/filename

• Download a file using the HTTPS protocol:
curl --cacert cert_file -u username https://IP_address:port/urlpath/filename -o download_path/filename
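As a worked instance of the patterns above, the following sketch composes an authenticated HTTPS upload for an Active Directory user. The CA file, user, file name, address, port, and URL path are all illustrative placeholders, and the command is printed for review rather than run.

```shell
# Compose (but do not execute) an HTTPS upload for an AD user.
# Every value below is a placeholder used for illustration only.
CA_FILE=ca.crt                      # CA certificate passed to --cacert
AD_USER='mycompany.com\User1'       # domain\user form described in the text
FILE=report.txt
SHARE_URL="https://192.0.2.10:443/url1"

UPLOAD_CMD="curl --cacert $CA_FILE -T $FILE -u $AD_USER $SHARE_URL/$FILE"
echo "$UPLOAD_CMD"
```

Building the command in a variable first makes it easy to inspect the exact quoting of the domain\user credential before sending it.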

Configuring Windows clients to access HTTP WebDAV shares

Complete the following steps to set up and access WebDAV-enabled shares:
• Verify the entry in the Windows hosts file. Before mapping a network drive in Windows, verify that an entry exists in the c:\Windows\System32\drivers\etc\hosts file. For example, if IP address 10.2.4.200 is assigned to a Vhost named vhost1 and the Vhost name is not being used to map the network drive, the client should be able to resolve a domain name such as www.storage.hp.com (in reference to domain name-based virtual hosts).
• Verify the characters in the Windows hosts file. The Windows c:\Windows\System32\drivers\etc\hosts file specifies IP-address-to-hostname mappings. Verify that the hostname in the file includes alphanumeric characters only.
• Verify that the WebClient service is started. The WebClient service must be started on Windows-based clients attempting to access the WebDAV share. The WebClient service is missing by default on Windows 2008. To install the WebClient service, the Desktop Experience package must be installed. See http://technet.microsoft.com/en-us/library/cc754314.aspx for more information.
• Update the Windows registry. When using WebDAV shares in Windows Explorer, you must edit the Windows registry if the WebDAV shares contain many files or large files. Launch the Windows registry editor using the regedit command and go to:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\WebClient\Parameters
Change the value of FileSizeLimitInBytes from the default value of 50000000 to 2147483648 (2 GB in bytes). Change the value of FileAttributesLimitInBytes from the default value of 1000000 to 10000000.
• Enable debug logging on the server. Edit the /usr/local/ibrix/httpd/conf/httpd.conf file and change the line LogLevel warn to LogLevel debug. Next, restart Apache on the file serving nodes:
Use the following command to stop Apache:
/usr/local/ibrix/httpd/bin/apachectl stop
Use the following command to start Apache:
/usr/local/ibrix/httpd/bin/apachectl start

• Save documents during node failovers. During a failover, MS Office 2010 restores the connection when the connection to the server is lost, but you must wait until you are asked to refresh the document being edited. In MS Office 2003 and 2007, you must save the document locally. After the failover completes, re-map the drive and save the document on the WebDAV share.

• Create an SSL certificate. When using basic authentication to access WebDAV-enabled HTTP shares, SSL-based access is mandatory.
• Verify that the hostname in the certificate matches the Vhost name. When creating a certificate, the hostname should match the Vhost name or the domain name used when mapping a network drive or opening a file directly using a URL such as https://storage.hp.com/share/foo.docx.
• Ensure that the WebDAV URL includes the port number associated with the Vhost.
• Consider the assigned IP address when mapping a network drive on Windows. When mapping a network drive in Windows, if the IP address assigned to the Vhost is in a format such as 10.2.4.200, there should be a corresponding entry in the Windows hosts file. Instead of using the IP address in the mapping, use the name specified in the hosts file. For example, 10.2.4.200 can be mapped as srv1vhost1, and you can issue the URL https://srv1vhost1/share when mapping the network drive.
• Unlock locked files. Use a tool such as BitKinex to unlock locked files if the files do not unlock before closing the application.
• Remove zero-byte files created by Microsoft Excel. Microsoft Excel creates 0-byte files on the WebDAV shares. For example, after editing the file foo.xlsx and saving it more than once, a file such as ~$foo.xlsx is created with a size of 0 bytes. Delete this file using a tool such as BitKinex, or remove the file on the file system. For example, if the file system is mounted at /ifs1 and the share directory is /ifs1/dir1, remove the file /ifs1/dir1/~$foo.xlsx.
• Use the correct URL path when mapping WebDAV shares on Windows 2003. When mapping WebDAV shares on Windows 2003, the URL should not end with a trailing slash (/). For example, http://storage.hp.com/share can be mapped, but http://storage.hp.com/share/ cannot be mapped. Also, you cannot map https:// URLs because of limitations with Windows 2003.
• Delete read-only files through Windows Explorer. If you map a network drive for a share that includes files designated as read-only on the server, and you then attempt to delete one of those files, the file appears to be deleted. However, when you refresh the folder (using the REFRESH command), the deleted file reappears in Windows Explorer. This behavior is expected in Windows Explorer.

NOTE: Symbolic links are not implemented in the current WebDAV implementation (Apache's mod_dav module).

NOTE: After mapping a network drive of a WebDAV share on Windows, Windows Explorer reports an incorrect folder size or available free space for the WebDAV share.

Troubleshooting HTTP

HTTP WebDAV share is inaccessible through Windows Explorer when files greater than 10 KB are created

When files greater than 10 KB are created, the HTTP WebDAV share is inaccessible through Windows Explorer and the following error appears:
Windows cannot access this disc: This disc might be corrupt.
This condition is seen in various Windows clients such as Windows 2008, Windows 7, and Windows Vista. The condition persists even if the share is

disconnected and re-mapped through Windows Explorer. The files are accessible on the file serving node and through BitKinex. Use the following workaround to resolve this condition:
1. Disconnect the network drive.
2. In Windows, select Start > Run and enter regedit.
3. Increase FileAttributesLimitInBytes from the default value of 1000000 to 10000000 (by a factor of 10).
4. Increase FileSizeLimitInBytes by a factor of 10 by adding one extra zero.
5. Save the registry and quit.
6. Reboot the Windows system.
7. Map the network drive to allow you to access the WebDAV share containing large files.

HTTP WebDAV share fails when downloading a large file from a mapped network drive

When downloading or copying a file greater than 800 MB in Windows Explorer, the HTTP WebDAV share fails. Use the following workaround to resolve this condition:
1. In Windows, select Start > Run and type regedit to open the Windows registry editor.
2. Navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters

NOTE: This hierarchy exists only if WebClient is installed on Windows Vista or Windows 7.

3. Change the registry parameter values to allow for the increased file size.
a. Set the value of FileAttributesLimitInBytes to 10000000 in decimal.
b. Set the value of FileSizeLimitInBytes to 2147483648 in decimal, which equals 2 GB.
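The two registry changes above could also be applied with a .reg file such as the following sketch. The key path is the one given in this section, and the dword values are the hexadecimal equivalents of the decimal values in the text (0x80000000 = 2147483648 and 0x00989680 = 10000000); verify the WebClient\Parameters path on your Windows version before importing.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"FileSizeLimitInBytes"=dword:80000000
"FileAttributesLimitInBytes"=dword:00989680
```

Importing the file with regedit applies both values in one step; a reboot or WebClient service restart is still needed for the new limits to take effect.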

Mapping HTTP WebDAV share as AD or local user through Windows Explorer fails if the HTTP Vhost IP address is used

Mapping the HTTP WebDAV share to a network drive as an Active Directory or local user through Windows Explorer fails on Windows 2008 if the HTTP Vhost IP address is used. To resolve this condition, add the Vhost names and IP addresses to the hosts file on the Windows clients.

10 Managing SSL certificates

Servers accepting FTPS and HTTPS connections typically provide an SSL certificate that verifies the identity and owner of the web site being accessed. You can add your existing certificates to the cluster, enabling file serving nodes to present the appropriate certificate to FTPS and HTTPS clients. IBRIX software supports PEM certificates. When you configure the FTP share or the HTTP vhost, select the appropriate certificate. You can manage certificates from the GUI or the CLI. On the GUI, select Certificates from the Navigator to open the Certificates panel. The Certificate Summary shows the parameters for the selected certificate.

Creating an SSL certificate

Before creating a certificate, OpenSSL must be installed and included in your PATH variable (in RHEL 5, the path is /usr/bin/openssl). There are two parts to a certificate: the certificate contents (specified in a .crt file) and a private key (specified in a .key file). Certificates added to the cluster must meet these requirements:
• The certificate contents (the .crt file) and the private key (the .key file) must be concatenated into a single file.
• The concatenated certificate file must include the headers and footers from the .crt and .key files.
• The concatenated certificate file cannot contain any extra spaces.
Before creating a real certificate, you can create a self-signed SSL certificate and test access with it. Complete the following steps to create a test certificate that meets the requirements for use in an IBRIX cluster:

1. Generate a private key:
openssl genrsa -des3 -out server.key 1024
You will be prompted to enter a passphrase. Be sure to remember the passphrase.
2. Remove the passphrase from the private key file (server.key). When you are prompted for a passphrase, enter the passphrase you specified in step 1.
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
rm -f server.key.org
3. Generate a Certificate Signing Request (CSR):
openssl req -new -key server.key -out server.csr
4. Self-sign the CSR:
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
5. Concatenate the signed certificate and the private key:
cat server.crt server.key > server.pem
When adding a certificate to the cluster, use the concatenated file (server.pem in this example) as the input for the GUI or CLI. The following example shows a valid PEM-encoded certificate that includes the certificate contents, the private key, and the headers and footers:
-----BEGIN CERTIFICATE-----
MIICUTCCAboCCQCIHW1FwFn2ADANBgkqhkiG9w0BAQUFADBtMQswCQYDVQQGEwJV
UzESMBAGA1UECBMJQmVya3NoaXJlMRAwDgYDVQQHEwdOZXdidXJ5MQwwCgYDVQQK
EwNhYmMxDDAKBgNVBAMTA2FiYzEcMBoGCSqGSIb3DQEJARYNYWRtaW5AYWJjLmNv
bTAeFw0xMDEyMTEwNDQ0MDdaFw0xMTEyMTEwNDQ0MDdaMG0xCzAJBgNVBAYTAlVT
MRIwEAYDVQQIEwlCZXJrc2hpcmUxEDAOBgNVBAcTB05ld2J1cnkxDDAKBgNVBAoT
A2FiYzEMMAoGA1UEAxMDYWJjMRwwGgYJKoZIhvcNAQkBFg1hZG1pbkBhYmMuY29t
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDdrjHH/W93X7afTIUOrllCHw21
u31tinMDBZzi+R18r9SZ/muuyvG4kJCbOoQnohuir/s4aAEULAOnf4mvqLfZlkBe
25HgT+ImshLzyHqPImuxTEXvjG5H1sEDLNuQkHvl8hF9Wxao1tv4eL8TL5KqK1W6
8juMVAw2cFDHxji2GQIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAKvYJK8RXKMObCKk
ae6oJ36FEkdl/ACHCw0Nxk/VMR4dv9lIk8Dv8sdYUUqHkNAME2yOaRI190c5bWSa
MjhSjOOqUmmgmeDYlAu+ps3/1Fte5yl4ZV8VCu7bHCWx2OSy46Po03MMOu99JXrB
/GCKE8fO8Fhyq/7LjFDR5GeghmSw
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQDdrjHH/W93X7afTIUOrllCHw21u31tinMDBZzi+R18r9SZ/muu
yvG4kJCbOoQnohuir/s4aAEULAOnf4mvqLfZlkBe25HgT+ImshLzyHqPImuxTEXv jG5H1sEDLNuQkHvl8hF9Wxao1tv4eL8TL5KqK1W68juMVAw2cFDHxji2GQIDAQAB AoGBAMXPWryKeZyb2+np7hFbompOK32vAA1vLZHUwFoI0Tch7yQ60vv2PBvlZCQf 4y06ik5xmkqLA+tsGxarx8DnXKUy0PHJ3hu6mTocIJdqqN0n+KO4tG2dvDPdSE7l phX2sY9MVt4X/QN3eNb/F3cHjnM9BYEr0BY3mTkKXz61jzABAkEA+M3PProYwvS6 P8m4DenZh6ehsu4u/ycjmW/ujdp/PcRd5HBAWJasTXTezF5msugHnnNBe8F1i1q4 9PfL0C+kuQJBAOQXjrmPZxDc8YA/V45MUKv4eHHN0E03p84budtblHQ70BCLaO41 n267t3DrZfW+VtsVDVBMja4UhoBasgv3rGECQQCILDR6k2YMBd+OG/xleRD6ww+o G96S/bvpNa7t6qFrj/cHmTxOgCDLv+RVHHG/B2lsGo7Dig2oeL30LU9aoUjZAkBV KSqDw7PyitusS3oQShQQsTufGf385pvDi3yQFxhNcYuUschisCivumyaP3mZEBDz yV9oLLz1UvqI79PsPfPhAkEAxSqebd1Ymqr2wi0RnKTmHfDCb3yWLPi57kc+lgrK LUlxawhTzDwzTWJ9m4gQqRlAaXoIElfk6ITwW0g9Th5Ouw== -----END RSA PRIVATE KEY-----
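The five steps above can also be collapsed into a non-interactive sketch using a single openssl invocation. The -nodes option writes the key without a passphrase (combining steps 1 and 2), and the -subj fields are placeholder values.

```shell
# Generate a self-signed certificate and key in one step, without
# passphrase prompts. Subject fields are illustrative placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/C=US/ST=Berkshire/L=Newbury/O=abc/CN=vhost1.example.com"

# Concatenate certificate and key into the single file the cluster expects.
cat server.crt server.key > server.pem

# Sanity check: the concatenated file must keep both headers and footers.
grep -c "BEGIN" server.pem
```

The final grep should report two BEGIN headers, one for the certificate and one for the private key; any other count means the concatenated file does not meet the requirements listed above.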

NOTE: When you are ready to create a real SSL certificate, consult the following site for a description of the procedure: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcer

Adding a certificate to the cluster

To add an existing certificate to the cluster, click Add on the Certificates panel. On the Add Certificate dialog box, enter a name for the certificate. Use a Linux command such as cat to display your concatenated certificate file. For example:
cat server.pem
Copy the contents of the file to the Certificate Content section of the dialog box. The copied text must include the certificate contents and the private key in PEM encoding. It must also include the proper headers and footers, and cannot contain any extra spaces.

NOTE: You can add only one certificate at a time.

The certificate is saved on all file serving nodes in the directory /usr/local/ibrix/pki. To add a certificate from the CLI, use the following command:
ibrix_certificate -a -c CERTNAME -p CERTPATH
For example:
# ibrix_certificate -a -c mycert -p server.pem
Run the command from the active Fusion Manager. To add a certificate for a different node, copy that certificate to the active Fusion Manager and then add it to the cluster. For example, if node ib87 is hosting the active Fusion Manager and you have generated a certificate for node ib86, copy the certificate to ib87:
scp server.pem ib87:/tmp
Then, on node ib87, add the certificate to the cluster:
ibrix_certificate -a -c cert86 -p /tmp/server.pem

Exporting a certificate

If necessary, you can display a certificate and then copy and save the contents for future use. This operation is called exporting. Select the certificate on the Certificates panel and click Export.

To export a certificate from the CLI, use this command:
ibrix_certificate -e -c CERTNAME

Deleting a certificate

To delete a certificate from the GUI, select the certificate on the Certificates panel, click Delete, and confirm the operation. To delete a certificate from the CLI, use this command:
ibrix_certificate -d -c CERTNAME

11 Using remote replication

This chapter describes how to configure and manage the Continuous Remote Replication (CRR) service.

NOTE: When you set up CRR, the Express Query database is not replicated. You must set up periodic exports as described in “Metadata and continuous remote replication” (page 166).

Overview

The CRR service provides a method to replicate changes in a source file system on one cluster to a target file system on either the same cluster (intra-cluster replication) or a second cluster (inter-cluster replication). Both files and directories are replicated with remote replication, and no special configuration of segments is needed. A remote replication task includes the initial synchronization of the source and target file systems.
When selecting file systems for remote replication, be aware of the following:
• One, multiple, or all file systems in a single cluster can be replicated.
• Remote replication is a one-way process. Bidirectional replication of a single file system is not supported.
• The mountpoint of the source file system can be different from the mountpoint on the target file system.
• The directory path /mnt/ibrix is reserved for use by CRR for internal operations. Do not use the /mnt/ibrix path for mounting any file systems, including ibrix. The CRR feature does not work properly if /mnt/ibrix is occupied by another file system mount.
Remote replication has minimal impact on these cluster operations:
• Cluster expansion (adding a new server) is allowed as usual on both the source and target.
• File systems can be exported over NFS, SMB, FTP, or HTTP.
• Source or target file systems can be rebalanced while a remote replication job is in progress.
• File system policies (ibrix_fs_tune) can be set on both the source and target without any restrictions.
The Fusion Manager initializes remote replication. However, each file serving node runs its own replication and synchronization processes, independent of and in parallel with other file serving nodes.
The individual daemons running on the file serving nodes perform the actual file system replication. The source-side Fusion Manager monitors the replication and reports errors, failures, and so on.

Continuous or run-once replication modes

CRR can be used in two modes: continuous or run-once.
Continuous replication. This method tracks changes on the source file system and continuously replicates these changes to the target file system. The changes are tracked for the entire file system and are replicated in parallel by each file serving node. There is no strict order to replication at either the file system or segment level; the continuous remote replication program tries to replicate on a first-in, first-out basis. When you configure continuous remote replication, you must specify a file system as the source. (A source directory cannot be specified.) File systems specified as the replication source or target must already exist. The replication starts at the root of the source file system (the mount point).

Run-once replication. This method replicates a single directory sub-tree or an entire file system from the source file system to the target file system. Run-once is a single-pass replication of all files and subdirectories within the specified directory or file system. All changes that have occurred since the last replication task are replicated from the source file system to the target file system. File systems specified as the replication source or target must exist. If a directory is specified as the replication source, the directory must exist on the source cluster under the specified source file system.

NOTE: Run-once can also be used to replicate a single software snapshot. This must be done on the GUI.

You can replicate to a remote cluster (an intercluster replication) or to the same cluster (an intracluster replication).

Using intercluster replications

Intercluster configurations can be continuous or run-once:
• Continuous: asynchronously replicates the initial state of a file system and any changes to it. Snapshots cannot be replicated.
• Run-once: replicates the current state of a file system, folder, or file system snapshot.
The examples in the configuration rules use three IBRIX clusters: C1, C2, and C3:
• C1 has two file systems, c1ifs1 and c1ifs2, mounted as /c1ifs1 and /c1ifs2.
• C2 has two file systems, c2ifs1 and c2ifs2, mounted as /c2ifs1 and /c2ifs2.
• C3 has two file systems, c3ifs1 and c3ifs2, mounted as /c3ifs1 and /c3ifs2.
In the examples, the notation cluster:pathname designates a replication target, such as C1:/c1ifs1/target1.
The following rules apply to intercluster replications:
• Remote replication is not supported between 6.1.x and 6.2 clusters in either direction if Express Query is enabled on the 6.2 cluster.
• Only one continuous Remote Replication task can run per file system. It must replicate from the root of the file system; you cannot continuously replicate a subdirectory of a file system.
• A continuous Remote Replication task can replicate to only one target cluster.
• Replication targets are directories in an IBRIX file system and can be:
◦ The root of a file system, such as /c3ifs1.
◦ A subdirectory, such as /c3ifs1/target1.
Targets must be explicitly exported using CRR commands to make them available to CRR replication tasks.
• A subdirectory created beneath a CRR export can be used as a target by a replication task without being explicitly exported in a separate operation. For example, if the exported target is /c3ifs1/target1, you can replicate to the folder /c3ifs1/target1/subtarget1 if the folder already exists.
• Directories exported as targets cannot overlap. For example, if C1 is replicating /c1ifs1 to C2:/c2ifs1/target1, C3 cannot replicate /c3ifs1 to C2:/c2ifs1/target1/target2. • A cluster can be a target for one replication task at the same time that it is replicating data to another cluster. For example, C1 can replicate /c1ifs1 to C2:/c2ifs1/target1 and C2 can replicate /c2ifs2 to C1:/c1ifs2/target2, with both replications occurring at the same time.

• A cluster can be a target for multiple replication tasks. For example, C1 can replicate /c1ifs1 to C3:/c3ifs1/target1 and C2 can replicate /c2ifs1 to C3:/c3ifs1/target2, with both replications occurring at the same time.
• Continuous Remote Replication tasks can be linked. For example:
◦ C1 replicates /c1ifs1 to C2:/c2ifs1/target1.
◦ C2 replicates /c2ifs1/target1 to C3:/c3ifs2/target2.

NOTE: If a different file system is used for the target, the linkage can go back to the original cluster.

• To replicate a directory or snapshot on a file system covered by continuous replication, first pause the continuous task and then initiate a run-once replication task.
For information about configuring intercluster replications, see “Configuring the target export for replication to a remote cluster” (page 129).

Using intracluster replications

There are two forms of intracluster replication:
• The same cluster and a different file system. Configure either continuous or run-once replication. You will need to specify a target file system and optionally a target directory (the default is the root of the file system, or the mount point).
• The same cluster and the same file system. Configure run-once replication. You will need to specify a file system, a source directory, and a target directory. Be sure to specify two different, non-overlapping subdirectories as the source and target. For example, the following replication is not allowed:
From dir1 to dir1/dir2
However, the following replication is allowed:
From dir1 to dir3/dir4
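The non-overlap rule above can be expressed as a small path check. This is an illustrative helper, not an IBRIX command: a target directory is rejected when it lies inside the source subtree.

```shell
# Illustrative overlap check for intracluster run-once replication.
# Returns success (0) when the target is nested under the source,
# which is the disallowed case.
overlaps() {
  source_dir=$1
  target_dir=$2
  case "$target_dir/" in
    "$source_dir"/*) return 0 ;;   # target nested under source
    *)               return 1 ;;
  esac
}

overlaps dir1 dir1/dir2 && echo "dir1 -> dir1/dir2: not allowed"
overlaps dir1 dir3/dir4 || echo "dir1 -> dir3/dir4: allowed"
```

The trailing slash added to the target before matching prevents a false positive for sibling names that merely share a prefix, such as dir1 and dir10.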

File system snapshot replication

You can use the run-once replication mode to replicate a single file system snapshot. If a snapshot replication is not explicitly configured, snapshots and all related metadata are ignored (filtered out) during remote replications. Replication is not supported for block snapshots.

Configuring the target export for replication to a remote cluster

Use the following procedure to configure a target export for remote replication. In this procedure, target export refers to the target file system and directory (the default is the root of the file system) exported for remote replication.

NOTE: These steps are not required when configuring intracluster replication.

• Register the source and destination clusters. The source and target clusters of a remote replication configuration must be registered with each other before remote replication tasks can be created.
• Create a target export. This step identifies the target file system and directory for replication and associates it with the source cluster. Before replication can take place, you must create

a mapping between the source cluster and the target export that receives the replicated data. This mapping ensures that only the specified source cluster can write to the target export.
• Identify server assignments to use for remote replication. Select the servers and corresponding NICs to handle replication requests, or use the default assignments. The default server assignment is to use all servers that have the file system mounted.

NOTE: Do not add or change files on the target system outside of a replication operation. Doing this can prevent replication from working properly.

GUI procedure

This procedure must be run from the target cluster; it is not required or applicable for intracluster replication. Select the file system on the GUI, and then select Remote Replication Exports from the lower Navigator. On the Remote Replication Exports bottom panel, select Add. The Create Remote Replication Export dialog box allows you to specify the target export for the replication. The mount point of the file system is displayed as the default export path. You can add a directory to the target export.

The Server Assignments section allows you to specify server assignments for the export. Check the box adjacent to Server to use the default assignments. If you choose to assign particular servers to handle replication requests, select those servers and then select the appropriate NICs. If the remote cluster does not appear in the selection list for Export To (Cluster), you will need to register the cluster. Select New to open the Add Remote Cluster dialog box and then enter the requested information.

If the remote cluster is running an earlier version of IBRIX software, you will be asked to enter the clustername for the remote cluster. This name appears on the Cluster Configuration page on the GUI for the remote cluster.

The Remote Replication Exports panel lists the replication exports you created for the file system. Expand Remote Replication Exports in the lower Navigator and select the export to see the configured server assignments for the export. You can modify or remove the server assignments and the export itself.

CLI procedure

NOTE: This procedure does not apply to intracluster replication. Use the following commands to configure the target file system for remote replication:

1. Register the source and target clusters with each other using the ibrix_cluster -r command if needed. To list the known remote clusters, run ibrix_cluster -l on the source cluster.
2. Create the export on the target cluster. Identify the target export and associate it with the source cluster using the ibrix_crr_export command.
3. Identify server assignments for the replication export using the ibrix_crr_nic command. The default assignments are:
• Use all servers that have the file system mounted.
• Use the cluster NIC on each server.

Registering source and target clusters

Run the following command on both the target cluster and the source cluster to register the clusters with each other. It is necessary to run the command only once per source or target.
ibrix_cluster -r -C CLUSTERNAME -H REMOTE_FM_HOST
CLUSTERNAME is the name of the Fusion Manager for a cluster. For the -H option, enter the name or IP address of the host where the remote cluster's Fusion Manager is running. For high availability, use the virtual IP address of the Fusion Manager.
To list clusters registered with the local cluster, use the following command:
ibrix_cluster -l
To unregister a remote replication cluster, use the following command:
ibrix_cluster -d -C CLUSTERNAME

Creating the target export

To create a mapping between the source cluster and the target export that receives the replicated data, execute the following command on the target cluster:
ibrix_crr_export -f FSNAME [-p DIRECTORY] -C SOURCE_CLUSTER [-P]
FSNAME is the target file system to be exported. The -p option exports a directory located under the root of the specified file system (the default is the root of the file system). The -C option specifies the source cluster containing the file system to be replicated. Include the -P option if you do not want this command to set the server assignments. You will then need to identify the server assignments manually with ibrix_crr_nic, as described in the next section.
To list the current remote replication exports, use the following command on the target cluster:
ibrix_crr_export -l
To unexport a file system for remote replication, use the following command:
ibrix_crr_export -U -f TARGET_FSNAME [-p DIRECTORY]
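Putting the registration and export steps together, a target-side session might look like the following sketch. The cluster name clusterA, host fmA.example.com, file system tfs1, and directory target1 are placeholders; the commands are echoed for review rather than executed.

```shell
# Hypothetical target-side CRR setup; all names are placeholders.
SRC_CLUSTER=clusterA       # source cluster's Fusion Manager name
SRC_FM=fmA.example.com     # host (or virtual IP) of that Fusion Manager
TARGET_FS=tfs1             # target file system to export

# Remove 'echo' to run the commands on a real target cluster.
echo ibrix_cluster -r -C "$SRC_CLUSTER" -H "$SRC_FM"        # register source
echo ibrix_crr_export -f "$TARGET_FS" -p target1 -C "$SRC_CLUSTER"
echo ibrix_crr_export -l                                    # verify the export
```

The final listing step confirms that the export is bound to the expected source cluster before any replication task is started against it.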

Identifying server assignments for remote replication

To identify the servers that will handle replication requests and, optionally, a NIC for replication traffic, use the following command:
ibrix_crr_nic -a -f FSNAME [-p directory] -h HOSTLIST [-n IBRIX_NIC]

When specifying resources, note the following:
• Specify servers by their host name or IP address (use commas to separate the names or IP addresses). A host is any server on the target cluster that has the target file system mounted.
• Specify the network using the IBRIX software network name (NIC). Enter a valid user NIC or the cluster NIC. The NIC assignment is optional. If it is not specified, the host name (or IP address) is used to determine the network.
• A previous server assignment for the same export must not exist, or must be removed before a new assignment is created.
The listed servers receive remote replication data over the specified NIC. To increase capacity, you can expand the number of preferred servers by executing this command again with another list of servers.
You can also use the ibrix_crr_nic command for the following tasks:
• Restore the default server assignments for remote replication:
ibrix_crr_nic -D -f FSNAME [-p directory]
• View server assignments for remote replication. The output lists the target exports and associated server assignments on this cluster. The assigned servers and NIC are listed with a corresponding ID number that can be used in commands to remove assignments.
ibrix_crr_nic -l
• Remove a server assignment:
ibrix_crr_nic -r -P ASSIGNMENT_ID1[,...,ASSIGNMENT_IDn]
To obtain the ID for a particular server assignment, use ibrix_crr_nic -l.

Configuring and managing replication tasks on the GUI

NOTE: When configuring replication tasks, be sure to follow the guidelines described in “Overview” (page 127).

Viewing replication tasks To view replication tasks for a particular file system, select that file system on the GUI and then select Active Tasks > Remote Replication in the lower Navigator. The Remote Replication Tasks bottom panel lists any replication tasks currently running or paused on the file system.

You can use CRR health reports to check the status of CRR activities on the source and target cluster. To see a list of health reports for active replication tasks, click List Report on the Remote Replication Tasks panel.

Select a report from the CRR Health Reports dialog box and click OK to see details about that replication task.

If the health check finds an issue in the CRR operation, it generates a critical event. Reports are generated on the source cluster. If the target cluster is running a version of IBRIX software earlier than 6.2, only the network connectivity check is performed. It takes approximately two minutes to generate a CRR health report. Reports are updated every 10 minutes. Only the last five CRR health reports are preserved. On the CLI, use the following commands to view reports:

List reports: ibrix_crrhealth -l

Show details for a report: ibrix_crrhealth -i -n REPORTNAME

To see other reports for a specific task, expand Active Tasks > Remote Replication and then select the task (crr-25 in the following example). Select Overall Status to see a status summary.

Select Server Tasks to display the state of the task and other information for the servers where the task is running.

Starting a replication task To start a replication task, click New on the Remote Replication Tasks panel and then use the New Replication Task Wizard to configure the replication.

Replication Settings dialog box Define the replication method on the Replication Settings dialog box.

Source Settings dialog box for continuous replications For continuous replications, the Source Settings dialog box lists the file system selected on the Filesystems panel.

Source Settings dialog box for run-once replications For a run-once replication of data other than a snapshot, specify the source directory on the Source Settings dialog box.

If you are replicating a snapshot, click Use a snapshot and then select the appropriate Snap Tree and snapshot.

Target Settings dialog box For replications to a remote cluster, select the target cluster on the Target Settings dialog box. This cluster must already be registered as a target export. If the remote cluster is not in the Target Cluster selection list, select New to open the Add Remote Cluster dialog box and register the cluster as a target export. (See “Configuring the target export for replication to a remote cluster” (page 129) for more information.) Then enter the target file system. Optionally, you can also specify a target directory in the file system.

For replications to the same cluster and different file system, the Target Settings dialog box asks for the target file system. Optionally, you can also specify a target directory in the file system. For replications to the same cluster and file system, the Target Settings dialog box asks only for the target directory. This field is required.

Specifying a target directory Specifying a target directory is optional for remote cluster and same cluster/different file system replications. It is required for same cluster/same file system replications. For example, you could configure the following replication, which does not include a target directory: • Source directory: /srcFS/a/b/c • Exported file system and directory on target: /destFS/1/2/3 The contents of /srcFS/a/b/c are replicated to /destFS/1/2/3/{contents_under_c}. If you also specify the target directory a/b/c, the replication goes to /destFS/1/2/3/a/b/c/{contents_under_c}.
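The composition rule above can be sketched with ordinary shell string handling (an illustration only, using the hypothetical paths from the example; this is not IBRIX code):

```shell
# Compose the destination path for replicated contents from the
# exported root on the target and the optional target directory.
compose_dest() {
  export_root="$1"; target_dir="$2"
  if [ -n "$target_dir" ]; then
    printf '%s/%s\n' "$export_root" "$target_dir"
  else
    printf '%s\n' "$export_root"
  fi
}

compose_dest /destFS/1/2/3 ""       # prints /destFS/1/2/3
compose_dest /destFS/1/2/3 a/b/c    # prints /destFS/1/2/3/a/b/c
```

The replicated contents of the source directory then land directly under whichever path the function returns.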

IMPORTANT: If you specify a target directory, be sure that it does not overlap with a previous replication using the same target export.

Pausing or resuming a replication task

To pause a task, select it on the Remote Replication Tasks panel and click Pause. When you pause a task, the status changes to PAUSED. Pausing a task that involves continuous data capture does not stop the data capture; because the data continues to be captured but is not moved, ensure that sufficient disk space is available to hold it. To resume a paused replication task, select the task and click Resume. The status of the task then changes to RUNNING and the task continues from the point where it was paused.

Stopping a replication task

To stop a task, select that task on the Remote Replication Tasks panel and click Stop. To view stopped tasks, select Inactive Tasks from the lower Navigator. You can delete one or more tasks, or see detailed information about the selected task.

Configuring and managing replication tasks from the CLI

NOTE: When configuring replication tasks, be sure to follow the guidelines described in “Overview” (page 127).

Starting a remote replication task to a remote cluster

Use the following command to start a continuous or run-once replication task to a remote cluster. The command is executed from the source cluster.

ibrix_crr -s -f SRC_FSNAME [-o] [-S SRCDIR] -C TGT_CLUSTERNAME -F TGT_FSNAME [-X TGTEXPORT] [-P TGTDIR] [-R]

Use the -s option to start a continuous remote replication task. The applicable options are:

-f SRC_FSNAME The source file system to be replicated.
-C TGT_CLUSTERNAME The remote target cluster.
-F TGT_FSNAME The remote target file system.
-X TGTEXPORT The remote replication target (exported directory). The default is the root of the file system.
NOTE: This option is used only for replication to a remote cluster. The file system specified with -F and the directory specified with -X must both be exported from the target cluster (target export).

-P TGTDIR A directory under the remote replication target export (optional). This directory must exist on the target, but does not need to be exported.
-R Bypass retention compatibility checking.

Omit the -o option to start a continuous replication task. A continuous replication task does an initial full synchronization and then continues to replicate any new changes made on the source. Continuous replication tasks continue to run until you stop them manually. Use the -o option for run-once tasks. This option synchronizes single directories or entire file systems on the source and target in a single pass. If you do not specify a source directory with the -S option, the replication starts at the root of the file system. The run-once job terminates after the replication is complete; however, the job can be stopped manually, if necessary. Use -P to specify an optional target directory under the target export. For example, you could configure the following replication, which does not include the optional target directory: • Source directory: /srcFS/a/b/c • Exported file system and directory on target: /destFS/1/2/3 The replication command is: ibrix_crr -s -o -f srcFs -S a/b/c -C tcluster -F destFs -X 1/2/3 The contents of /srcFS/a/b/c are replicated to /destFS/1/2/3/{contents_under_c}. When the same command includes the -P option to specify the target directory a/b/c:

ibrix_crr -s -o -f srcFs -S a/b/c -C tcluster -F destFs -X 1/2/3 -P a/b/c

The replication now goes to /destFS/1/2/3/a/b/c/{contents_under_c}.

Starting an intracluster remote replication task

Use the following command to start a continuous or run-once intracluster replication task for the specified file system:

ibrix_crr -s -f SRC_FSNAME [-o [-S SRCDIR]] -F TGT_FSNAME [-P TGTDIR]

The -F option specifies the name of the target file system (the default is the same as the source file system). The -P option specifies the target directory under the target file system (the default is the root of the file system). Use the -o option to start a run-once task. The -S option specifies a directory under the source file system to synchronize with the target directory.

Starting a run-once directory replication task

Use the following command to start a run-once directory replication for file system SRC_FSNAME. The -S option specifies the directory under the source file system to synchronize with the target directory. The -P option specifies the target directory.

ibrix_crr -s -f SRC_FSNAME -o -S SRCDIR -P TGTDIR

Stopping a remote replication task

Use the following command to stop a continuous or run-once replication task. Use the ibrix_task -l command to obtain the appropriate ID.

ibrix_crr -k -n TASKID

The stopped replication task is moved to the inactive task list. Use ibrix_task -l -c to view the inactive task list. To forcefully stop a replication task, use the following command:

ibrix_crr -k -n TASKID

The forcefully stopped task is removed from the list of inactive tasks.

Pausing a remote replication task

Use the following command to pause a continuous replication or run-once replication task with the specified task ID. Use the ibrix_task -l command to obtain the appropriate ID.
ibrix_crr -p -n TASKID

Resuming a remote replication task

Use the following command to resume a continuous or run-once replication task with the specified task ID. Use the ibrix_task -l command to obtain the appropriate ID.

ibrix_crr -r -n TASKID

Querying remote replication tasks

Use the following command to list all active replication tasks in the cluster, optionally restricted to the specified file system and servers:

ibrix_crr -l [-f SRC_FSNAME] [-h HOSTNAME] [-C SRC_CLUSTERNAME]

To see more detailed information, run ibrix_crr with the -i option. The display shows the status of tasks on each node, as well as task summary statistics (number of files in the queue, number of files processed). The query also indicates whether scanning is in progress on a given server and lists any error conditions.

ibrix_crr -i [-f SRC_FSNAME] [-h HOSTNAME] [-C SRC_CLUSTERNAME]

The following command prints detailed information about replication tasks matching the specified task IDs. Use the -h option to limit the output to the specified server.

ibrix_crr -i -n TASKIDS [-h HOSTNAME] [-C SRC_CLUSTERNAME]

Replicating WORM/retained files

When using remote replication for file systems enabled for data retention, the following requirements must be met:
• The source and target file systems must use the same data retention mode (Enterprise or Relaxed).
• The default, maximum, and minimum retention periods must be the same on the source and target file systems.
• A clock synchronization tool such as ntpd must be used on the source and target clusters. If the clock times are not in sync, file retention periods might not be handled correctly.

Also note the following:
• Multiple hard links on retained files on the replication source are not replicated. Only the first hard link encountered by remote replication is replicated; the retainability attributes on the target file prevent the creation of any additional hard links. For this reason, HP strongly recommends that you do not create hard links on files that will be retained.
• For continuous remote replication, if a file is replicated as retained, but later its retainability is removed on the source file system (using data retention management commands), the new file attributes and any additional changes to that file will fail to replicate. This is because the retainability attributes that the file already has on the target cause the file system on the target to prevent remote replication from changing it.
• When a legal hold is applied to a file, the legal hold is not replicated on the target. If the file on the target should have a legal hold, you will also need to set the legal hold on that file.
• If a file has been replicated to a target and you then change the file's retention expiration time with the ibrix_reten_adm -e command, the new expiration time is not replicated to the target. If necessary, also change the file's retention expiration time on the target.

Configuring remote failover/failback

When remote replication is configured from a local cluster to a remote cluster, you can fail over the local cluster to the remote cluster:
1. Stop write traffic to the local site.
2. Wait for all remote replication queues to drain.
3. Stop remote replication on the local site.
4. Reconfigure shares as necessary on the remote site. The cluster name and IP addresses (or VIFs) are different on the remote site, and changes are needed to allow clients to continue to access shares.
5. Redirect write traffic to the remote site.

When the local cluster is healthy again, take the following steps to perform a failback from the remote site:
1. Stop write traffic to the remote site.
2. Set up Run-Once remote replication, with the remote site acting as the source and the local site acting as the destination.
3. When the Run-Once replication is complete, restore shares to their original configuration on the local site, and verify that clients can access the shares.
4. Redirect write traffic to the local site.

Troubleshooting remote replication

Continuous remote replication fails when a private network is used

Continuous remote replication will fail if the configured cluster interface and the corresponding cluster Virtual Interface (VIF) for the Fusion Manager are in a private network on either the source or target cluster. By default, continuous remote replication uses the cluster interface and the Cluster VIF (the ibrixinit -C and -v options, respectively) for communication between the source cluster and the target cluster. To work around potential continuous remote replication communication errors, it is important that the ibrixinit -C and -v arguments correspond to a public interface and a public cluster VIF, respectively. If necessary, the ibrix_crr_nic command can be used to change the server assignments (the server/NIC pairs that handle remote replication requests).

Truncating a file during replication

If a file is truncated while it is being replicated, the file is replicated successfully, but a Domain scan failure message is displayed. An error message is also logged in the ibrcfrworker.log file. The text of the message is similar to the following:

rsync: read errors mapping "/mnt/ibrix/filter/tasks/5/ifs1/test.dat": No data available (61)
rsync: read errors mapping "/ifs1/test.dat": No data available (61)

You can ignore these error messages. The file was replicated successfully.

12 Managing data retention

Data retention is intended for sites that need to archive read-only files for business purposes, and ensures that files cannot be modified or deleted for a specific retention period. Data retention includes the following optional features: • Data validation scans to ensure that files remain unchanged. • Data retention reports. Overview This section provides overview information for data retention and data validation scans. Data retention Data retention must be enabled on a file system. When you enable data retention, you can specify a retention profile that includes minimum, maximum, and default retention periods that specify how long a file must be retained.

WORM and WORM-retained files The files in the file system can be in the following states:
• Normal. The file is created read-only or read-write, and can be modified or deleted at any time. Normal files are not managed by data retention, and no checksum is calculated for them.
• Write-Once Read-Many (WORM). The file cannot be modified, but can be deleted at any time. A checksum is calculated for WORM files and they can be managed by data retention.
• WORM-retained. A WORM file becomes WORM-retained when a retention period is applied to it. The file cannot be modified, and cannot be deleted until the retention period expires. A checksum is calculated for WORM-retained files and they can be managed by data retention.

NOTE: You can apply a legal hold to a WORM or WORM-retained file. The file then cannot be deleted until the hold is released, even if the retention period has expired. For WORM and WORM-retained files, the file's contents and the following file attributes cannot be modified: • File name (the file cannot be renamed or moved) • User and group owners • File access permissions • File modification time Also, no new hard links can be made to the file and the extended attributes cannot be added, modified, or removed. The following restrictions apply to directories in a file system enabled for data retention: • A directory cannot be moved or renamed unless it is empty (even if it contains only normal files). • You can delete directories containing only WORM and normal files, but you cannot delete directories containing retained files.

Data retention attributes for a file system The data retention attributes configured on a file system are called a retention profile. The profile includes the following:

Default retention period. If a specific retention period is not applied to a file, the file will be retained for the default retention period. The setting for this period determines whether you can manage WORM (non-retained) files as well as WORM-retained files:
• To manage both WORM (non-retained) files and WORM-retained files, set the default retention period to zero. To make a file WORM-retained, you will need to set the atime to a date in the future.
• To manage only WORM-retained files, set the default retention period to a non-zero value.

Minimum and maximum retention periods. Retained files cannot be deleted until their retention period expires, regardless of the file system retention policy. You can set a specific retention period for a file; however, it must be within the minimum and maximum retention periods associated with the file system. If you set a time that is less than the minimum retention period, the expiration time of the period is adjusted to match the minimum retention period. Similarly, if the new retention period exceeds the maximum retention period, the expiration time is adjusted to match the maximum retention period. If you do not set a retention period for a file, the default retention period is used. If that default is zero, the file will not be retained.

Autocommit period. Files that are not changed during this period automatically become WORM or WORM-retained when the period expires. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) The autocommit period is optional and should not be set if you want to keep normal files in the file system.
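The minimum/maximum clamping described above can be sketched in shell arithmetic (a simplified illustration working directly in seconds; the actual commands accept the suffixed period syntax described later in this chapter):

```shell
# Clamp a requested retention period (in seconds) to the file
# system's configured minimum and maximum retention periods.
clamp_retention() {
  requested="$1"; min="$2"; max="$3"
  if [ "$requested" -lt "$min" ]; then
    echo "$min"
  elif [ "$requested" -gt "$max" ]; then
    echo "$max"
  else
    echo "$requested"
  fi
}

clamp_retention 60 3600 86400     # below the minimum: prints 3600
clamp_retention 7200 3600 86400   # within range: prints 7200
```

A request above the maximum is clamped the same way, to the maximum value.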

IMPORTANT: For a file to become WORM, its ctime and mtime must be older than the autocommit period for the file system. On Linux, the ctime changes with any change to the file, either its contents or any metadata such as owner, mode, or times. The mtime is the last modified time of the file's contents.

Retention mode. Controls how the expiration time for the retention period can be adjusted:
• Enterprise mode. The expiration date of the retention period can only be extended to a later date.
• Relaxed mode. The expiration date of the retention period can be moved to an earlier date or extended to a later date.

The autocommit and default retention periods determine the steps you will need to take to make a file WORM or WORM-retained. See “Creating WORM and WORM-retained files” (page 149) for more information.

Data validation scans

To ensure that WORM and retained files remain unchanged, it is important to run a data validation scan periodically. Circumstances such as the following can cause a file to change unexpectedly:
• System hardware errors, such as write errors
• Degrading of on-disk data over time, which can change the stored bit values even if the data is never accessed
• Malicious or accidental changes made by users

A data validation scan computes hash sum values for the WORM, WORM-retained, and WORM-hold files in the scanned file system or subdirectory and compares them with the values originally computed for the files. If the scan identifies changes in the values for a particular file, an alert is generated on the GUI. You can then replace the bad file with an unchanged copy from an earlier backup or from a remote replication.

NOTE: Normal files are not validated. The time required for a data scan depends on the number of files in the file system or subdirectory. If there are a large number of files, the scan could take up to a few weeks to verify all content on storage. A scheduled scan quits immediately if it detects that a scan of the same file system is already running. You can schedule periodic data validation scans, and you can also run on-demand scans.

Enabling file systems for data retention

You can enable a new or an existing file system for data retention and, optionally, validation and archiving. When you enable a file system, you can define a retention profile that specifies the retention mode and the default, minimum, and maximum retention periods.

New file systems

Enable data retention. The New Filesystem Wizard includes a WORM/Data Retention dialog box that allows you to enable data retention and define a retention profile for the file system. You can also enable and define schedules for data validation scans and data collection for reports.

The default retention period determines whether you can manage WORM (non-retained) files as well as WORM-retained files. To manage only WORM-retained files, set the default retention period. WORM-retained files then use this period by default; however, you can assign a different retention period if desired. To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention Period, which sets the default retention period to 0 seconds. When you make a WORM file retained, you will need to assign a retention period to the file. The Set Auto-Commit Period option specifies that files will become WORM or WORM-retained if they are not changed during the specified period. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time period. The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period.

Enable Data Validation. Check this option to schedule periodic scans on the file system. Use the default schedule, or select Modify to open the Data Validation Scan Schedule dialog box and configure your own schedule.

Enable Report Data Generation. Check this option to generate data retention reports. Use the default schedule, or select Modify to open the Report Data Generation Schedule dialog box and configure your own schedule.

Enable Express Query. Check this option to enable Express Query on the file system.

Auditing Options. Use this dialog box to enable auditing, if desired; to set the expiration schedule and policy for audit log reports; to set the expiration policy for audit logs; and to select the file system events that you want to audit. See “Express Query” (page 162) for details.

Enabling data retention from the CLI

You can also enable data retention when creating a new file system from the CLI. Use ibrix_fs -c and include the following -o options:

-o "retenMode=,retenDefPeriod=,retenMinPeriod=,retenMaxPeriod=,retenAutoCommitPeriod="

The retenMode option is required and is either enterprise or relaxed. You can specify any, all, or none of the period options. retenDefPeriod is the default retention period, retenMinPeriod is the minimum retention period, and retenMaxPeriod is the maximum retention period. The retenAutoCommitPeriod option specifies that files will become WORM or WORM-retained if they are not changed during the specified period. (If the default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period.

When using a period option, enter a decimal number, optionally followed by one of these characters:
• s (seconds)
• m (minutes)
• h (hours)
• d (days)
• w (weeks)
• M (months)
• y (years)

If you do not include a character specifier, the decimal number is interpreted as seconds.
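To illustrate the period syntax only, a small converter from suffixed periods to seconds might look like this (the 30-day month and 365-day year multipliers are assumptions for the illustration, not documented IBRIX internals):

```shell
# Convert a retention period such as "3d" or "90" to seconds.
period_to_seconds() {
  p="$1"
  num="${p%[smhdwMy]}"      # strip a trailing unit character, if any
  unit="${p#"$num"}"
  case "$unit" in
    ""|s) mult=1 ;;
    m)    mult=60 ;;
    h)    mult=3600 ;;
    d)    mult=86400 ;;
    w)    mult=604800 ;;
    M)    mult=2592000 ;;   # 30 days (assumed approximation)
    y)    mult=31536000 ;;  # 365 days (assumed approximation)
  esac
  echo $(( num * mult ))
}

period_to_seconds 3d    # prints 259200
period_to_seconds 90    # prints 90
```

Note that the unit characters are case-sensitive: m is minutes, while M is months.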
The following example creates a file system with Enterprise mode retention, with a default retention period of 1 month, a minimum retention period of 3 days, a maximum retention period of 5 years, and an autocommit period of 1 hour:

ibrix_fs -o "retenMode=Enterprise,retenDefPeriod=1M,retenMinPeriod=3d,retenMaxPeriod=5y,retenAutoCommitPeriod=1h" -c -f ifs1 -s ilv_[1-4] -a

Configuring data retention on existing file systems

NOTE: Data retention cannot be enabled on a file system created on IBRIX software 5.6 or earlier versions. Instead, create a new file system on IBRIX software 6.0 or later, and then copy or move files from the old file system to the new file system.

To enable or change the data retention or Express Query configuration of an existing file system, first unmount the file system. Select Active Tasks > WORM/Data Retention from the lower Navigator, and then click Modify on the WORM/Data Retention panel. You do not need to unmount the file system to change the configuration for data validation or report data generation.

To enable data retention on an existing file system using the CLI, run this command:

ibrix_fs -W -f FSNAME -o "retenMode=,retenDefPeriod=,retenMinPeriod=,retenMaxPeriod="

To enable data retention on an existing file system created with IBRIX software 6.0 or earlier, follow the steps to upgrade the file system as described in the HP IBRIX 9300/9320 Storage Administrator Guide in the section "Upgrading the IBRIX software to the 6.2 release". Then configure the file system retention profile as described in the steps provided earlier in this section.

Viewing the retention profile for a file system

To view the retention profile for a file system, select the file system on the GUI, and then select WORM/Data Retention from the lower Navigator. The WORM/Data Retention panel shows the retention profile.

To view the retention profile from the CLI, use the ibrix_fs -i command, as in the following example:

ibrix_fs -i -f ifs1
FileSystem: ifs1
======{ … }
RETENTION : Enterprise [default=15d,minimum=1d,maximum=5y]

Changing the retention profile for a file system

The file system must be unmounted when you make changes to the retention profile. After unmounting the file system, click Modify on the WORM/Data Retention panel to open the Modify WORM/Data Retention dialog box and then make your changes. To change the configuration from the CLI, use the following command:

ibrix_fs -W -f FSNAME -o "retenMode=,retenDefPeriod=,retenMinPeriod=,retenMaxPeriod=,retenAutoCommitPeriod="

Managing WORM and retained files

You can change a file to the WORM or WORM-retained state, view the retention information associated with a file, and use administrative tools to manage individual files, including setting or removing a legal hold, setting or removing a retention period, and administratively deleting a file.

Creating WORM and WORM-retained files

The autocommit and default retention periods determine the steps you will need to take.

Autocommit period is set and default retention period is zero seconds:
• To make a WORM file retained, set the atime to a time in the future.

Autocommit period is set and default retention period is non-zero:
• Files remaining unchanged during the autocommit period automatically become WORM-retained and use the default retention period. You can assign a different retention period to a file if necessary.

Autocommit period is not set and default retention period is zero seconds:
• To create a WORM file, run a command to make the file read-only.
• To make a WORM file retained, set the atime to a time in the future.

Autocommit period is not set and default retention period is non-zero:
• To create a WORM-retained file, run a command to make the file read-only. By default, the file uses the default retention period.
• To assign a different retention period to the WORM-retained file, set the atime to a time in the future.

NOTE: If you are not using autocommit, files must explicitly be made read-only. Typically, you can configure your application to do this.

Making a file read-only Linux. Use chmod to make the file read-only. For example: chmod 444 myfile.txt Windows. Use the attrib command to make the file read-only: C:\> attrib +r myfile.txt
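As a quick illustration of the read-only commit on Linux (generic commands with a hypothetical file name; outside a retention-enabled file system this demonstrates only the permission change, not WORM enforcement):

```shell
# Create a file, then clear all write bits, as an application would
# do before the file is committed to the WORM state.
echo "archived record" > record.txt
chmod 444 record.txt

# Show the resulting permission bits.
stat -c %a record.txt   # prints 444
```

On a retention-enabled file system, it is this removal of write permission that triggers the transition to WORM (or WORM-retained, if a default retention period is set).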

Setting the atime Linux. Use a command such as touch to set the access time to the future: touch -a -d "30 minutes" myfile.txt

See the touch(1) documentation for the time/date formats allowed with the -d option. You can also enter the following on a Linux command line to see the acceptable date/time strings for the touch command: info "Date input formats" Windows. Windows does not include a touch command. Instead, use a third-party tool such as cygwin or FileTouch to set the access time to the future.
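On Linux, you can confirm that the access time actually landed in the future by comparing the file's atime with the current clock (generic commands; the file name is hypothetical):

```shell
# Set the access time 30 minutes into the future, then verify it.
touch myfile.txt
touch -a -d "30 minutes" myfile.txt

atime=$(stat -c %X myfile.txt)   # atime in seconds since the epoch
now=$(date +%s)
if [ "$atime" -gt "$now" ]; then
  echo "atime is in the future"
fi
```

On a retention-enabled file system, that future atime becomes the retention expiration time when the file is made read-only.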

NOTE: For SMB users setting the access time manually for a file, the maximum retention period is 100 years from the date the file was retained. For NFS users setting the access time manually for a file, the retention expiration date must be before February 5, 2106. The access time has the following effect on the retention period: • If the access time is set to a future date, the retention period of the file is set so that retention expires at that date. • If the access time is not set, the file inherits the default retention period for the file system. Retention expires at that period in the future, starting from the time the file is set read-only. • If the access time is not set and the default retention period is zero, the file will become WORM but not retained, and can be deleted. You can change the retention period if necessary; see “Changing a retention period” (page 152). Viewing the retention information for a file To view the retention information for a file, run the following command: ibrix_reten_adm -l -f FSNAME -P PATHLIST For example: # ibrix_reten_adm -l -f sales_fs -P /sales_fs/dir1/contacts.txt

/sales_fs/dir1/contacts.txt: state={retained} retain-to:{2011-Nov-10 15:55:06} [period: 182d15h (15778800s)]

In this example, contacts.txt is a retained file, its retention period expires on November 10, 2011, and the length of the retention period is 182 days, 15 hours.

File administration

To administer files from the GUI, select File Administration on the WORM/Data Retention panel. Select the action you want to perform on the WORM/Data Retention – File Administration dialog box.

To administer files from the CLI, use the ibrix_reten_adm command.

IMPORTANT: Do not use the ibrix_reten_adm command on a file system that is not enabled for data retention.

Specifying path lists

When using the GUI or the ibrix_reten_adm command, you need to specify paths for the files affected by the retention action. The following rules apply when specifying path lists:
• A path list can contain one or more entries, separated by commas.
• Each entry can be a fully qualified path, such as /myfs1/here/a.txt. An entry can also be relative to the file system mount point. For example, if myfs1 is mounted at /myfs1, the path here/a.txt is a valid entry.
• A relative path cannot begin with a slash (/). Relative paths are always relative to the mount point; unlike other UNIX commands, they cannot be relative to the user's current directory.
• A directory cannot be specified in a path list. Directories themselves have no retention settings, and the command returns an error message if a directory is entered. To apply an action to all files in a directory, you need to specify the paths to the files. You can use wildcards in the pathnames, such as /my/path/*,/my/path/.??*.

The command does not apply the action recursively; you must specify each subdirectory explicitly. To apply a command to all files in all subdirectories of a tree, you can wrap the ibrix_reten_adm command in a find script (or other similar script) that calls the command for every directory in the tree. For example, the following command sets a legal hold on all files in the specified directory:

find /ibrixFS/mydir -type d -exec ibrix_reten_adm -h -f ibrixFS -P {}/* \;

The following script also includes files beginning with a dot, such as .this. (This covers files uploaded to the file system, not file system files such as the .archiving tree.)

find /ibrixFS/mydir -type d -exec ibrix_reten_adm -h -f ibrixFS -P {}/*,{}/.??* \;
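Because ibrix_reten_adm exists only on IBRIX clusters, the find wrapper pattern can be exercised anywhere with a harmless stand-in (echo replaces the retention command here, and the directory tree is hypothetical):

```shell
# Build a small tree, then run one command per directory, exactly as
# the find wrapper above does with ibrix_reten_adm.
mkdir -p tree/sub
touch tree/a.txt tree/sub/b.txt

# echo stands in for: ibrix_reten_adm -h -f ibrixFS -P {}/*
find tree -type d -exec echo "would hold: {}/*" \; | sort
```

One line is printed per directory, confirming that the wrapper visits every level of the tree.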

Setting or removing a legal hold
When a legal hold is set on a retained or WORM file, the file cannot be deleted until the hold is released, even if the retention period has expired. On the WORM/Data Retention – File Administration dialog box, select Set a Legal Hold and specify the appropriate file. To remove a legal hold from a file, select Remove a Legal Hold and specify the appropriate file. When the hold is removed, the file is again under the control of its original retention policy. To set a legal hold from the CLI, use this command: ibrix_reten_adm -h -f FSNAME -P PATHLIST
To remove a legal hold from the CLI, use this command: ibrix_reten_adm -r -f FSNAME -P PATHLIST

Changing a retention period If necessary, you can change the length of the current retention period. For example, you might want to assign a different retention period to a retained file currently using the default retention period. This is done by resetting the expiration time of the period. If the retention mode is Enterprise, the new expiration time must be later than the current expiration time. If the retention mode is Relaxed, the new expiration time can be earlier or later than the current expiration time. On the WORM/Data Retention – File Administration dialog box, select Reset Expiration Time and specify the appropriate file. When you set the new expiration time, the length of the retention period is adjusted accordingly. If you specify a time that is less than the minimum retention period for the file system, the expiration time is adjusted to match the minimum retention period. Similarly, if the new time exceeds the maximum retention period, the expiration time is adjusted to match the maximum retention period.
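The clamping and mode rules above can be illustrated with a small sketch. Everything here is illustrative: the function name and arguments are invented, and it assumes the minimum and maximum periods are measured from the start of retention, which the guide leaves implicit.

```python
from datetime import datetime, timedelta

def adjusted_expiration(requested, retained_since, current_expiry,
                        min_period, max_period, mode="Enterprise"):
    """Clamp a requested expiration time to the file system's minimum
    and maximum retention periods, and enforce the Enterprise rule that
    the new time must be later than the current expiration.
    Illustrative only; not a product API."""
    if mode == "Enterprise" and requested <= current_expiry:
        raise ValueError("Enterprise mode: new expiration must be later")
    earliest = retained_since + min_period  # minimum retention period
    latest = retained_since + max_period    # maximum retention period
    return min(max(requested, earliest), latest)
```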

To reset the expiration time using the CLI: ibrix_reten_adm -e expire_time -f FSNAME -P PATHLIST
If you specify an interval such as 20m (20 minutes) for the expire_time, the retention expiration time is set to that amount of time in the future starting from now, not that amount of time from the original start of retention. If you specify an exact date/time such as 19:20:02 or 2/16/2012 for the expire_time, the command sets the retention expiration time to that exact time. If the file system is in Relaxed retention mode (not Enterprise), the exact date/time can be in the past, in which case the file immediately expires from retention and becomes WORM but no longer retained. See the Linux date(1) man page for a description of the valid date/time formats for the expire_time parameter.
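The two expire_time forms differ in how they are anchored: an interval counts forward from now, while a date/time is absolute. The sketch below shows the distinction; the accepted suffixes and date formats here are assumptions based only on the examples in this section, and the real command accepts the formats described in date(1).

```python
import re
from datetime import datetime, timedelta

# Suffixes assumed from the guide's examples (20m, 182d15h); the full
# set accepted by ibrix_reten_adm -e is not spelled out here.
_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def resolve_expire_time(expire_time, now):
    """Interpret an expire_time argument: an interval such as '20m'
    counts forward from now; otherwise treat the value as an absolute
    date/time, parsed here with the two formats the guide shows."""
    m = re.fullmatch(r"(\d+)([smhd])", expire_time)
    if m:
        return now + timedelta(**{_UNITS[m.group(2)]: int(m.group(1))})
    for fmt in ("%H:%M:%S", "%m/%d/%Y"):
        try:
            parsed = datetime.strptime(expire_time, fmt)
        except ValueError:
            continue
        if fmt == "%H:%M:%S":  # time of day on the current date
            return now.replace(hour=parsed.hour, minute=parsed.minute,
                               second=parsed.second, microsecond=0)
        return parsed
    raise ValueError("unrecognized expire_time: " + expire_time)
```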

Removing the retention period When you remove the retention period from a retained file, the file becomes a WORM file. On the WORM/Data Retention – File Administration dialog box, select Remove Retention Period and specify the appropriate file. To remove the retention period using the CLI: ibrix_reten_adm -c -f FSNAME -P PATHLIST

Deleting a file administratively This option allows you to delete a file that is under the control of a data retention policy. On the WORM/Data Retention – File Administration dialog box, select Administrative Delete and specify the appropriate file.

CAUTION: Deleting files administratively removes them from the file system, regardless of the data retention policy. To delete a file using the CLI: ibrix_reten_adm -d -f FSNAME -P PATHLIST
Running data validation scans
Scheduling a validation scan
When you use the GUI to enable a file system for data validation, you can set up a schedule for validation scans. You might want to run additional scans of the file system at other times, or you might want to scan particular directories in the file system.

NOTE: Although you can schedule multiple scans of a file system, only one scan can run at a time for a given file system. To schedule a validation scan, select the file system on the GUI, and then select Active Tasks from the lower navigator. Select New to open the Starting a New Task dialog box. Select Data Validation as the Task Type.

When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be scanned if necessary.

Go to the Schedule tab to specify when you want to run the scan.

Starting an on-demand validation scan You can run a validation scan at any time. Select the file system on the GUI, and then select Active Tasks from the lower navigator. Click New to open the Starting a New Task dialog box. Select Data Validation as the Task Type.

When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be scanned if necessary and click OK.

To start an on-demand validation scan from the CLI, use the following command: ibrix_datavalidation -s -f FSNAME [-d PATH]
Viewing, stopping, or pausing a scan
Scans in progress are listed on the Active Tasks panel on the GUI. If you need to halt the scan, click Stop or Pause on the Active Tasks panel. Click Resume to resume the scan. To view the progress of a scan from the CLI, use the ibrix_task command (the -s option lists scheduled tasks): ibrix_task -i [-f FILESYSTEMS] [-h HOSTNAME]
To stop a scan, use this command: ibrix_task -k -n TASKID [-F] [-s]
To pause a scan, use this command: ibrix_task -p -n TASKID
To resume a scan, use this command: ibrix_task -r -n TASKID
Viewing validation scan results
While a validation scan is running, it is listed on the Active Tasks panel on the GUI (select the file system, and from the lower Navigator select Active Tasks > WORM/Data Retention > Data Validation Scans). Information about completed scans is listed on the Inactive Tasks panel (select the file system, and then select Inactive Tasks from the lower Navigator). On the Inactive Tasks panel, select a validation task and then click Details to see more information about the scan. A unique validation summary file is also generated at the end of each scan. The files are created under the root directory of the file system at {filesystem root}/.archiving/validation/history. The validation summary files are named ID-N.sum, such as 1-0.sum, 2-0.sum, and so on. The ID is the task ID assigned by IBRIX software when the scan was started. The second number is 0 unless there is an existing summary file with the same task ID, in which case the second number is incremented to make the filename unique.
Following is a sample validation summary file: # cat /fsIbrix/.archiving/validation/history/4-0.sum JOB_ID=4 FILESYSTEM_NAME=fsIbrix FILESYSTEM_MOUNT_DIR=/fsIbrix PATH=/fsIbrix/./directory90 SCANTYPE=hashsum CREATE_TIME=Wed Jul 18 05:18:12 2012 START_TIME=Wed Jul 18 05:18:12 2012

KICKOFF_TIME=Wed Jul 18 05:18:12 2012 STOP_TIME=Mon Jul 23 14:36:59 2012 NUM_JOB_ERRORS=0 NUM_FILES_VALIDATED=1000000 NUM_FILES_SKIPPED=0 NUM_CONTENT_INCONSISTENCIES=0 NUM_METADATA_INCONSISTENCIES=0 #
Viewing and comparing hash sums for a file
If a validation scan summary file reports inconsistent hash sums for a file and you want to investigate further, use the showsha (SHA test utility) and showvms (Validation Express Query lookup utility) commands to compare the current hash sums with the hash sums that were originally calculated for the file. The showsha command calculates and displays the hash sums for a file. For example: # /usr/local/ibrix/sbin/showsha rhnplugin.py Path hash: f4b82f4da9026ba4aa030288185344db46ffda7b Meta hash: 80f68a53bb4a49d0ca19af1dec18e2ff0cf965da Data hash: d64492d19786dddf50b5a7c3bebd3fc8930fc493 The showvms command displays the hash sums stored for the file. For example: # /usr/local/ibrix/sbin/showvms rhnplugin.py VMSQuery returned 0 Path hash: f4b82f4da9026ba4aa030288185344db46ffda7b Meta hash: 80f68a53bb4a49d0ca19af1dec18e2ff0cf965da Data hash: d64492d19786dddf50b5a7c3bebd3fc8930fc493 last attempt: Wed Dec 31 17:00:00 1969 last success: Wed Dec 31 17:00:00 1969 changed: 0 In this example, the hash sums match and there are no inconsistencies. The 1969 dates appearing in the showvms output mean that the file has not yet been validated.
Handling validation scan errors
When a validation scan detects files having hash values inconsistent with their original values, it displays an alert in the events section of the GUI. However, the alert lists only the first inconsistent file detected. It is important to check the validation summary report to identify all inconsistent files that were flagged during the scan. When there are inconsistencies, it is necessary to determine whether the cause is file corruption or checksum corruption. Compare the checksums by running /usr/local/ibrix/sbin/showsha on both the file and its backup copy.
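A validation summary file is simple KEY=VALUE text, so a small parser can flag scans that recorded errors or inconsistencies. This is an illustrative sketch, not a product utility:

```python
def parse_validation_summary(text):
    """Parse the KEY=VALUE lines of a validation summary (.sum) file
    into a dict, skipping blank and '#' lines, and report whether any
    errors or inconsistencies were recorded."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        fields[key] = value
    clean = all(fields.get(k, "0") == "0" for k in (
        "NUM_JOB_ERRORS",
        "NUM_CONTENT_INCONSISTENCIES",
        "NUM_METADATA_INCONSISTENCIES",
    ))
    return fields, clean
```

Running it over every .sum file under {filesystem root}/.archiving/validation/history would identify the scans worth investigating.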

Checksum corruption: If the checksums of the file and its backup copy are identical, the .archiving directory may have been corrupted (a checksum corruption). If this is the case, you must restore the checksums:
• If only a few files are inconsistent and you want to postpone restoring the checksums, you can back up the files with a checksum inconsistency, delete those files from the file system, and restore the backed-up files to the file system.
• If many checksums are corrupted, there may be a hardware failure. To restore the checksums, complete the following procedure:

IMPORTANT: This procedure should be performed only under the guidance of HP Support. 1. Create a backup of the existing .archiving directory for future reference. 2. Replace the faulty hardware.

3. Clean up the existing checksums. 4. Start a new data validation scan on the entire file system to compute the checksums.

File corruption
If the checksums of the file and its backup copy are not identical, there is data (content) corruption. To replace an inconsistent file, follow these steps:
1. Obtain a good version of the file from a backup or a remote replication.
2. If the file is retained, remove the retention period for the file, using the GUI or the ibrix_reten_adm -c command.
3. Delete the file administratively using the GUI or the ibrix_reten_adm -d command.
4. Copy or restore the good version of the file to the data-retained file system or directory. If you recover the file using an NDMP backup application, the proper retention expiration period is applied from the backup copy of the file. If you copy the file another way, you will need to set the atime and read-only status.
Creating data retention reports
Three reports are available: data retention, utilization, and validation. The reports can show results either for the entire file system or for individual tiers. To generate a tiered report, the file system must include at least one tier. You can display reports as PDF, CSV (CLI only), or HTML (GUI only). The latest files in each format are saved in /usr/local/ibrix/reports/output//. When you generate a report, the system creates a CSV file containing the data for the report. The latest CSV file is also stored in /usr/local/ibrix/reports/output//.
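The triage described in the checksum corruption and file corruption sections above (comparing showvms output with the showsha results for the file and its backup copy) can be summarized as a decision helper. The function and its tuple representation of hash triples are illustrative only:

```python
def classify_inconsistency(stored, current_file, current_backup):
    """Apply the guide's triage to a hash-sum mismatch. Each argument
    is a (path_hash, meta_hash, data_hash) tuple: 'stored' is what
    showvms reports, the other two are showsha results for the live
    file and its backup copy. Illustrative sketch, not a product tool."""
    if current_file == stored:
        return "consistent"
    if current_file == current_backup:
        return "checksum corruption"   # the .archiving store may be damaged
    return "file corruption"           # restore a good copy of the file
```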

NOTE: Older report files are not saved. If you need to keep report files, save them in another location before you generate new reports. The data retention report lists ranges of retention periods and specifies the number of files in each range. The Number of Files reported on the graph scales automatically and is reported as individual files, thousands of files, or millions of files. The following example shows a data retention report for an entire file system.

The utilization report summarizes how storage is utilized between retention states and free space. The next example shows the first page of a utilization report broken out by tiers. The results for each tier appear on a separate page. The total size scales automatically, and is reported as MB, GB, or TB, depending on the size of the file system or tier.

A data validation report shows when files were last validated and reports any mismatches. A mismatch can be either content or metadata. The Number of Files scales automatically and is reported as individual files, thousands of files, or millions of files.

Generating and managing data retention reports To run an unscheduled report from the GUI, select Filesystems in the upper Navigator and then select WORM/Data Retention in the lower Navigator. On the WORM/Data Retention panel, click Run a Report. On the Run a WORM/Data Protection Summary Report dialog box, select the type of report to view, and then specify the output format.

If an error occurs during report generation, a message appears in red text on the report. Simply run the report again.
Generating data retention reports from the CLI
You can generate reports at any time using the ibrix_reports command. Scheduled reports can be configured only on the GUI. First run the following command to scan the file system and collect the data to be used in the reports: ibrix_reports -s -f FILESYSTEM
Then run the following command to generate the specified report: ibrix_reports -g -f FILESYSTEM -n NAME -o OUTPUT FORMAT
Use the -n option to specify the type of report, where NAME is one of the following:
• retention
• retention_by_tier
• validation
• validation_by_tier
• utilization
• utilization_by_tier
The output format specified with -o can be csv or pdf.
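The two-step CLI workflow (scan, then generate) can be wrapped in a small builder that validates the report name and output format before anything runs. This sketch only assembles the command strings; it does not execute them:

```python
# Report names as listed in this section of the guide.
REPORT_NAMES = {
    "retention", "retention_by_tier",
    "validation", "validation_by_tier",
    "utilization", "utilization_by_tier",
}

def report_commands(filesystem, name, fmt):
    """Return the scan and generate command lines for ibrix_reports,
    validating NAME and the output format against the documented
    values. Purely a string builder; illustrative only."""
    if name not in REPORT_NAMES:
        raise ValueError("unknown report name: " + name)
    if fmt not in ("csv", "pdf"):
        raise ValueError("output format must be csv or pdf")
    return [
        "ibrix_reports -s -f %s" % filesystem,
        "ibrix_reports -g -f %s -n %s -o %s" % (filesystem, name, fmt),
    ]
```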

Using hard links with WORM files
You can use the Linux ln command without the -s option to create a hard link to a normal (non-WORM) file on a retention-enabled file system. If you later make the file a WORM file, the following restrictions apply until the file is deleted:
• You cannot make any new hard links to the file. Doing so would increment the link count in the file's inode, which is not allowed under WORM rules.
• You can delete hard links (the original file system entry or a hard-link entry) without deleting the other file system entries or the file itself. WORM rules allow the link count to be decremented.
Using remote replication
When using remote replication for file systems enabled for retention, the following requirements must be met:
• The source and target file systems must use the same retention mode (Enterprise or Relaxed).
• The default, maximum, and minimum retention periods must be the same on the source and target file systems.
• A clock synchronization tool such as ntpd must be used on the source and target clusters. If the clock times are not in sync, file retention periods might not be handled correctly.
Also note the following:
• Multiple hard links on retained files on the replication source are not replicated. Only the first hard link encountered by remote replication is replicated; any additional hard links are not. (The retainability attributes on the file on the target prevent the creation of any additional hard links.) For this reason, HP strongly recommends that you do not create hard links on retained files.
• For continuous remote replication, if a file is replicated as retained but its retainability is later removed on the source file system (using data retention management commands), the new file's attributes and any additional changes to that file will fail to replicate.
This is because of the retainability attributes that the file already has on the target, which will cause the file system on the target to prevent remote replication from changing it. • When a legal hold is applied to a file, the legal hold is not replicated on the target. If the file on the target should have a legal hold, you will also need to set the legal hold on that file. • If a file has been replicated to a target and you then change the file's retention expiration time with the ibrix_reten_adm -e command, the new expiration time is not replicated to the target. If necessary, also change the file's retention expiration time on the target. Backup support for data retention The supported method for backing up and restoring WORM/retained files is to use NDMP with DMA applications. Other backup methods will back up the file data, but will lose the retention configuration. Troubleshooting data retention Limitation on remote replication of retention-enabled file systems When there is a large I/O load on the source file system and the auto-commit period is set to a lower value, the following errors will display in the CRR logs: • Partial transfer: ibrcfrworker returned error 23 (Partial transfer), which can be expected in retention-enabled (WORM) environments when CRR cannot modify a

retained file on the target cluster, or in environments where the filters on the target cluster are different from the source cluster.
• Data stream error: ibrcfrworker returned error 12 (RERR_STREAMIO), which can be expected in retention-enabled (WORM) environments when CRR cannot modify a retained file on the target cluster.
To avoid these errors, set the auto-commit period to a higher value (the minimum value is five minutes and the maximum value is one year).
Attempts to edit retained files can create empty files
If you attempt to edit a WORM file in the retained state, applications such as the vi editor will be unable to edit the file, but can leave empty temp files on the file system.
Applications such as vi can appear to update WORM files
If you use an application such as vi to edit a WORM file that is not in the retained state, the file will be modified, and it will be retained with the default retention period. This is the expected behavior. The file modification occurs because the editor edits a temporary copy of the file, tries to rename it to the real file, and when that fails, deletes the original file and then does the rename, which succeeds (because unretained WORM files are allowed to be deleted).
Cannot enable data retention on a file system with a bad segment
Data retention must be set on all segments of a file system to ensure that all files can be managed properly. File systems with bad segments cannot be enabled for data retention. If a file system has a bad segment, evacuate or remove the segment first, and then enable the file system.
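The replication requirements listed under “Using remote replication” above (matching retention mode and default, minimum, and maximum periods on source and target) can be checked mechanically before configuring CRR. The dict keys below are illustrative names, not product fields:

```python
def replication_mismatches(source, target):
    """Compare the retention settings of source and target file systems
    and return the keys that differ. Both arguments are plain dicts
    holding the settings the guide requires to match; illustrative only."""
    keys = ("mode", "default_period", "minimum_period", "maximum_period")
    return [k for k in keys if source.get(k) != target.get(k)]
```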

13 Express Query

Express Query provides a per-file system database of system and custom metadata, and audit histories of system and file activity. When Express Query is enabled on the file system, you can manage the metadata service, configure auditing, create reports from the audit history, assign custom metadata and certain system metadata to files and directories, and query for selected metadata from files. Express Query provides the following components:


Metadata Service: The processes that manage the set of per-file system metadata databases, including auditing of file changes and accessing the database in response to REST API requests. See “Managing the metadata service” (page 162).

Auditing: Lets you configure what system and file changes go into the Express Query database and create reports of selected parts of the audit history from the database. See “Managing auditing” (page 167).

StoreAll REST API (also known as Object API): Lets you query information from the Express Query database over an HTTP share. You can assign custom metadata to files and directories, and set certain system attributes. In some parts of the product, the StoreAll REST API is also referred to as the Object API. See “StoreAll REST API” (page 170).

Managing the metadata service The metadata service includes the database management processes, as well as the processes that watch for state changes and log the associated metadata in the Express Query database. The service is controlled with the ibrix_archiving command. One set of processes runs on the Active Fusion Manager server and manages all per-file system databases. Stopping the service disables access to all file systems’ databases. This section provides the basic commands for managing the metadata service, in addition to the following sections on metadata: • “Saving and importing file system metadata” (page 164) • “Metadata and continuous remote replication” (page 166) View the status of the metadata service: ibrix_archiving -i The Services section of the GUI dashboard also displays the status of the metadata service. List file systems registered for the metadata service: ibrix_archiving -l Start the metadata service: ibrix_archiving -s Stop the metadata service: ibrix_archiving -S [-F] [-t timeout secs] The -t option specifies the time (in seconds) to wait for the service to stop gracefully.

The -F option forcefully stops the archiving daemons and disables database access to all file systems enabled for Express Query. When you restart the service after using -F, the database enters recovery mode, which can take a long time to complete depending on the size of the database. Restart the metadata service: ibrix_archiving -r
Backing up and restoring file systems with Express Query data
Express Query stores its metadata database for each file system on the file system, in the /.archiving/database directory. Therefore, if you back up a snapshot of the entire file system, you also back up the database. If you intend to restore the database with the file system, you must take a snapshot of the entire file system and back up the snapshot, not the live file system. Backing up the live file system does not produce a consistent view of the files and the database. You can:
• Restore the backup to a new IBRIX file system. See “Restoring a backup to a new IBRIX file system” (page 163).
• Restore the backup to an existing file system that has Express Query enabled. See “Restore to an existing file system that has Express Query enabled” (page 163).

Restoring a backup to a new IBRIX file system
To restore a backup to a new IBRIX file system:
1. Create a new file system with Express Query not yet enabled.
2. Restore the backed up file system to the new file system.
3. Enable Express Query on the new file system, either in the GUI or with the CLI command: ibrix_fs -T -E -f FSNAME
4. (Optional) Enable auditing, using the ibrix_fs -A [-f FSNAME] -oa ... CLI command or the GUI.
5. (Optional) Create REST API shares. See “Using HTTP” (page 108).
6. Express Query re-synchronizes the file system and database by using the restored database. This process might take some time.
7. Wait for the metadata resync process to finish. Enter the following command to monitor the resync process for a file system: ibrix_archiving -l
The status should be OK for the file system before you proceed. Refer to the ibrix_archiving section in the HP IBRIX 9000 Storage CLI Reference Guide for information about the other states.

Restore to an existing file system that has Express Query enabled To restore a backup to an existing file system that has had Express Query enabled:

1. Disable the Express Query feature for the file system, including the removal of any StoreAll REST API shares. Disable the auditing feature before you disable the Express Query feature.
a. To disable auditing, enter the following command: ibrix_fs -A [-f FSNAME] -oa audit_mode=off
b. Remove all StoreAll REST API shares created in the file system by entering the following command: ibrix_httpshare -d -f
c. To disable the Express Query settings on a file system, enter the following command: ibrix_fs -T -D -f FSNAME
2. Delete the previously existing metadata database: rm -Rf /.archiving/database
3. Delete the previously existing archive journal files that the file system creates for Express Query to ingest: rm -Rf /.audit/*
4. Restore the backed up file system to this file system, overwriting existing files.
5. Re-enable Express Query on the file system, either in the GUI or with the CLI command: ibrix_fs -T -E -f FSNAME
6. (Optional) Enable auditing, using the ibrix_fs -A [-f FSNAME] -oa … CLI command or the GUI.
7. (Optional) Create REST API shares. See “Using HTTP” (page 108). Express Query re-synchronizes the file system and database by using the restored database information. This process might take some time.
8. Wait for the metadata resync process to finish. Enter the following command to monitor the resync process for a file system: ibrix_archiving -l
The status should be OK for the file system before you proceed. See the ibrix_archiving section in the HP IBRIX 9000 Storage CLI Reference Guide for information about the other states.
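The final step of both restore procedures is a wait-for-OK loop. A sketch of such a poller follows, where get_status is any callable that extracts the file system's state from ibrix_archiving -l output; the callable and the timing values are illustrative, not product API:

```python
import time

def wait_for_resync(get_status, poll_interval=30, timeout=3600,
                    sleep=time.sleep):
    """Poll get_status() until it returns 'OK' (resync finished) or the
    timeout elapses. Returns True on OK, False on timeout. The sleep
    parameter is injectable so the loop can be tested without waiting."""
    waited = 0
    while True:
        if get_status() == "OK":
            return True
        if waited >= timeout:
            return False
        sleep(poll_interval)
        waited += poll_interval
```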

Saving and importing file system metadata
Use the following procedures to save, or export, metadata that is stored only in the Express Query database and not in the files themselves. You can then import the metadata later on a CRR target. You can also import the metadata if you need to recreate the Express Query database on the file system from which you exported the metadata. If you are recovering a complete file system with NDMP backup or a supported non-NDMP backup application, you do not need to import the metadata. The Express Query database and the metadata are included in a full file system backup and are thus recovered with the file system.

Saving custom metadata for a file system
Use the perl script MDExport to save custom metadata in a CSV file. You can then use the file later to import the metadata. The script has the following syntax: MDExport.pl [--help|?] --dbconfig /usr/local/Metabox/scripts/startup.xml --database DBNAME --outputfile OUTPUTFILE --user ibrix [--verbose]

The options specify the following:

Options Description
--dbconfig The metadata configuration file. Use only this path and file name: /usr/local/Metabox/scripts/startup.xml
--database The database containing the metadata; this is the name of the file system.
--outputfile The CSV output file used to save the metadata.
--user ibrix The username for accessing the database. Use only the “ibrix” username.

Use perl to invoke the script. For example: perl /usr/local/ibrix/bin/MDExport.pl --database ibrixFS --user ibrix --dbconfig /usr/local/Metabox/scripts/startup.xml --output /home/mydir/save.csv This command exports metadata from the ibrixFS file system and generates the output file save.csv in the /home/mydir directory. The CSV file contains a row for every custom attribute, such as: subdir/myfile.txt,color,red

NOTE: This command must run as the “ibrix” user on the system, to interact with Express Query. Therefore, the directory to contain the output file must be writable by the ibrix user, which is in the “ibrix-user” group. Setting the output directory to world read, write, and execute permission, for example, lets the file be written regardless of the directory’s owning user and group.
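A row such as subdir/myfile.txt,color,red is plain CSV, so the exported metadata can be loaded back into a structure for inspection. This reader is an illustrative sketch; the exact quoting rules MDExport uses for unusual values are an assumption:

```python
import csv
import io

def read_custom_metadata(csv_text):
    """Read MDExport-style rows (path, attribute, value) into a nested
    dict mapping each file path to its custom attributes. Rows that do
    not have exactly three columns are skipped."""
    metadata = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) != 3:
            continue
        path, attr, value = row
        metadata.setdefault(path, {})[attr] = value
    return metadata
```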

Saving audit journal metadata
The ibrix_audit_reports command saves audit data from Express Query on a specific file system. The data is placed in a CSV file. The command has the following syntax: ibrix_audit_reports -t SORT_ORDER -f FILESYSTEM [-p PATH] [-b BEGIN_DATE] [-e END_DATE] [-o class1[,class2,...]]
For example: ibrix_audit_reports -t unordered -o all -f ibrixFS
This command saves audit data for all events in file system ibrixFS. Use the “unordered” option for the fastest performance. See the HP IBRIX 9000 Storage CLI Reference Guide for more information about this command.

Importing metadata to a file system
Use the MDImport tool to import a CSV file containing custom or audit metadata into a new Express Query database. The CSV file can be the output of either the MDExport script or the ibrix_audit_reports command. The command has the following syntax: MDImport -f FSNAME -n CSVFILE -t TYPE
The options specify the following:

Options Description
-f The file system to receive the import.
-n The name of the CSV file.
-t The type of metadata being imported (either audit or custom).

The following command imports custom metadata exported by the MDExport script: MDImport -f newIbrixFs -t custom -n /home/mydir/save.csv

The next command imports audit metadata exported by the ibrix_audit_reports command: MDImport -f target -t audit -n simple_report_for_source_at_1341513594723.csv
The ibrix_audit_reports command automatically generates the file name simple_report_for_source_at_1341513594723.csv.
Metadata and continuous remote replication
When continuous remote replication (CRR) is configured for a file system, or run-once replication is performed, metadata that is stored only in Express Query on the source cluster is not transferred to the target cluster. System metadata stored in the inodes, such as file permissions, is transferred intact by remote replication. Custom and audit history metadata, which are stored only in the Express Query database, are not transferred, but can be exported manually and periodically to files that CRR will replicate. Express Query on the target system processes files replicated to its file system by CRR as new file creations and modifications, with the dates and times the files were replicated. The audit history contains the history of file replications from the source system, not the audit history of file accesses on the source system. The target Express Query is not aware of any custom metadata applied to files on the source system; however, you can add the source system's audit history and custom metadata to the target Express Query's database by exporting and importing this metadata. To include Express Query metadata in remote replication, you can create shell scripts (or scripts in other languages) and set up periodic execution of those scripts by using standard Linux tools such as cron. Such scripts must perform the steps of the export of custom metadata and audit history described in “Saving and importing file system metadata” (page 164), with parameters appropriate to your file systems. The exported files are copied to the target automatically by continuous remote replication (CRR).
Those files on the target can be imported into Express Query running on the target at any time, if the target needs to become the active Express Query-enabled file system. The output file listed on the MDExport.pl command line must be in a directory that will be replicated by CRR.

NOTE: The /.archiving directory is excluded from replication. The ibrix_audit_reports command creates its report output file in the /.archiving/reports subdirectory. Therefore, after issuing the ibrix_audit_reports command, your script must move or copy the report output file to another directory on the file system outside the .archiving tree for it to be replicated.
Metadata and synchronized server times
Metadata database updates often require coordinated actions from multiple IBRIX servers. Therefore, it is important to keep your nodes as close together in time as possible, using NTP. Modifications to the same file from different servers within the time difference between IBRIX servers could cause inconsistencies between system metadata stored in the file inode and the metadata stored in Express Query. To ensure consistency, keep servers synchronized with NTP and avoid modifying a file from clients connected to different IBRIX servers at the same time. If the server cannot be chosen by the client, such as when using external load balancers, separate your client operations on the same file by more than the maximum expected time difference between any two IBRIX servers.

Managing auditing
Auditing lets you:
• Find out which events you have already captured in the Express Query database and control what is captured with regard to file changes. See “Audit log” (page 167) for more information.
• Gather information from audit reports about what is in the Express Query database. See “Audit log reports” (page 168) for more information.
Audit log
The audit log provides a detailed history of activity for specific file system events. The Audit Log panel shows the current audit configuration.

To change the configuration, click Modify on the Audit Log panel. On the Modify Audit Settings dialog box, you can change the expiration policies and schedule, and you can change the events that are audited. The default Audit Logs Expiration Policy is 45 days. If you need to keep audit history for a longer period of time, increase the time period. Enable and disable event types and groups using the checkboxes and the arrows to move events between the Disabled and Enabled lists. If an event is not selected for auditing, it cannot be included in an audit report. By default, all events are enabled. For significantly enhanced system performance and reduced audit log size, if files are accessed frequently, disable the “File Read” event.

Audit log reports

Audit log reports include metadata for selected file system events that occurred during a specific time period. To generate an audit log report, click Run a Report on the Audit Log panel. Specify the parameters for the report on the Run an Audit Log Report dialog box.

NOTE: Although you can select any of the events for a report, an event must be selected for auditing to appear in the report. Use ibrix_fs -A or the Modify Audit Settings dialog box to change the events selected for auditing.

The audit reports are in CSV (comma-separated) format and are placed in the following directory:

/.archiving/reports

The file names have this format:

<report_type>_report_for_<filesystem>_at_<timestamp>.csv

For example:

file_report_for_ibrixFS_at_1343771410270.csv
simple_report_for_ibrixFS_at_1343772788085.csv

Following are definitions for the less obvious fields in an audit report.

Field Description

seqno The sequence number of this event, incremented for each event processed on each node in the IBRIX cluster

eshost The ID of the node in the IBRIX cluster that recorded this event (a hex string)

eventsuccess Whether the attempted operation succeeded or failed

eventerrorcode Defined in the standard Linux header files errno_base.h and errno.h

description Currently unused

reserved1/2/3 Always zero; ignore these fields

POID_lo32/hi64 The Permanent Object ID that uniquely identifies the file within the IBRIX cluster (a 96-bit integer split in two parts)


*time[n]sec The seconds and nanoseconds of that time, in UNIX epoch time, which is the number of seconds since the start of Jan 1, 1970 in UTC

mode The Linux mode/permission bits (a combination of the values shown by the Linux man 2 stat command)

*hash, content*, meta* Currently unused
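Two of the numeric report fields can be decoded with standard tooling. The following sketch uses Python's errno and stat modules; it assumes the Linux-style errno numbering that the table describes.

```python
import errno
import stat

# eventerrorcode values follow the standard Linux errno numbering
# (errno-base.h / errno.h); for example, code 2 is ENOENT.
err_name = errno.errorcode[2]        # 'ENOENT'

# mode values are ordinary Linux mode/permission bits, as shown by
# stat(2); decimal 33060 is octal 0100444, a read-only regular file.
mode = 33060
mode_octal = oct(mode)               # '0o100444'
mode_string = stat.filemode(mode)    # '-r--r--r--'
```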

To generate reports from the command line, use the ibrix_audit_reports command:

ibrix_audit_reports -t SORT_ORDER -f FILESYSTEM [-p PATH] [-b BEGIN_DATE] [-e END_DATE] [-o class1[,class2,...]]

See the HP IBRIX 9000 Storage CLI Reference Guide for more information about this command, including the events that can be specified for the report.
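The report file-name fields described above can be recovered programmatically. The following sketch assumes the trailing timestamp is in milliseconds since the UNIX epoch, which is consistent with the example file names:

```python
import re
from datetime import datetime, timezone

def parse_report_name(filename):
    """Split an audit report file name into its report type, file
    system name, and creation time (timestamp assumed to be epoch
    milliseconds, per the example file names)."""
    m = re.match(r"(.+)_report_for_(.+)_at_(\d+)\.csv$", filename)
    if m is None:
        raise ValueError("not an audit report file name: " + filename)
    report_type, fsname, millis = m.group(1), m.group(2), int(m.group(3))
    created = datetime.fromtimestamp(millis / 1000.0, tz=timezone.utc)
    return report_type, fsname, created

rtype, fsname, created = parse_report_name(
    "file_report_for_ibrixFS_at_1343771410270.csv")
# rtype == "file", fsname == "ibrixFS", created falls in mid-2012
```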

Managing audit reports

When you create or modify a file system, you can set the following audit report options to specify when old reports are deleted:
• Audit log reports expiration policy: whether reports should be deleted after a specific number of days, weeks, months, or years, or should never be deleted.
• Audit log reports expiration schedule: the time each day at which expired audit reports are deleted.

You can also set these options for one or more file systems using the ibrix_audit_reports command. Be sure to monitor the space used by audit reports, especially if you are retaining them for a long period of time.

StoreAll REST API

The StoreAll REST API provides programmatic access to user-stored files and their metadata. The metadata is stored in the HP StoreAll Express Query database in the IBRIX cluster, which provides fast query access to metadata without scanning the file system. The StoreAll REST API provides the ability to upload and download files, assign custom (user-defined) metadata to files and directories, manage file retention settings, and query the system and custom metadata of files and directories. You can associate any number of custom (user-defined) metadata attributes with any file or directory stored on an IBRIX retention-enabled file system where the Express Query metadata store is enabled. Each custom attribute consists of an attribute name and assigned value. The API provides commands to create custom metadata entries for a file, replace values of existing entries, and delete entries. The API extends the existing HTTP Shares feature that was introduced in IBRIX version 6.0. You can create HTTP-StoreAll REST API shares to access Express Query on the file system. See “Process checklist for creating HTTP shares” (page 108) for information on how to create a StoreAll REST API share.

Dual nature of REST API shares

A REST API enabled HTTP share is a pair of HTTP shares managed as a set.
IBRIX creates the share with the name you specify and makes it WebDAV-enabled. It also automatically creates a peer share with the string “Ibrix” appended to the name; this peer share is WebDAV-disabled. These properties cannot be changed. The two shares are managed internally as a set. By default, only the WebDAV-enabled share is exposed through the CLI, and the GUI displays only the WebDAV-enabled share. When listing StoreAll REST API-enabled shares through the CLI or GUI,

the share name is displayed with its peer share name in parentheses in the share name column. For example, if you have a StoreAll REST API-enabled HTTP share named “share2”, you will have a peer share named “share2Ibrix”. For example, assume you enter the following command, where ifs2 is the name of the file system:

ibrix_httpshare -l -f ifs2

Output similar to the following is displayed for the HTTP shares associated with the file system named ifs2:

To see the share with its peer, enter the following command:

ibrix_httpshare -l -o -f ifs2

Output similar to the following is displayed:

Note that the GUI displays only the user-specified share; it does not display the pair of shares. In other words, it does not have an equivalent of the CLI -o option. Always send your HTTP requests to the URL path of the share you have created, not to the share with “Ibrix” appended; the API routes the request to the right share automatically. The “Ibrix”-appended share handles API queries and assignments (any URI with a “?...” string after the path/file name).

Table 10 Overview of a REST API enabled HTTP share

HTTP share name: the share name specified by you. WebDAV enabled: yes. How exposed: GUI. Keep in mind: always send your HTTP requests to the URL path of the share you have created, not to the share with “Ibrix” appended. The API routes the request to the right share automatically.

HTTP share name: <share name>Ibrix. WebDAV enabled: no. How exposed: CLI. Keep in mind: the “Ibrix”-appended share handles API queries and assignments (any URI with a “?...” string after the path/file name).

Component overview

The StoreAll REST API includes a number of components, such as custom metadata assignments, metadata queries, and retention properties assignments.

Custom metadata assignment

You can associate any number of custom (user-defined) metadata attributes with any file or directory stored on an IBRIX retention-enabled file system where the Express Query metadata store is enabled. Each custom attribute consists of an attribute name and assigned value. The API provides commands to create custom metadata entries for a file, replace values of existing entries, and delete entries.

Metadata queries

You can issue StoreAll REST API commands that query the pathname and the custom and system metadata attributes for a set of files and directories. Queries can be augmented with a search criterion for a certain system or custom attribute; only files and directories that match the criterion are included in the results. The query can specify a single file or a directory. If a directory is specified, the user can query all files in that directory only, or all files in all subdirectories of that directory recursively.

Retention properties assignment

You can issue StoreAll REST API commands to change a file to the WORM (and optionally retained) state and set its retention expiration time, subject to the file system’s retention policy settings.

General topics regarding HTTP syntax

Each feature is described in HTTP (with URL-encoded characters where required) and equivalent curl formats, such as:

PUT command. Enter the following command on one line:

PUT /<urlpath>[/<pathname>]?[version=1]assign=<attribute>='<value>'[,<attribute>='<value>'…] HTTP/1.1

curl command. Enter the following command on one line:

curl -g -X PUT "http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]assign=<attribute>='<value>'[,<attribute>='<value>'…]"

When using this syntax, note the following:
• Optional parameters are shown in square brackets [ and ]. Everything enclosed in the brackets can be omitted from the request. Do not include the square brackets in the request. For example, the API supports either http or https for all requests, hence the http[s] nomenclature.
• Parameters are shown in angle brackets < and >. Replace the parameter with the actual value, without the angle brackets.
• Other characters shown in the syntax (such as =, ?, &, and /) must also be entered as-is in the request and sometimes must be URL-encoded.
• All parameters before the ? (such as pathname) should be entered as strings without any surrounding quotes, in standard URL format.
• All parameters after the ? (the query string in HTTP parlance) are either commands, attribute names, or literals:
◦ Attribute names must be 80 characters or less. The first character must be alphabetic (a-z or A-Z), followed by a sequence of alphanumeric characters or underscores. No other characters are allowed. Colon characters (:) are allowed in system attribute names. All attribute names are case-sensitive.
◦ Literals are either strings or numeric values.

◦ Literal strings must be enclosed in single quotes. Non-escaped UTF-8 characters are allowed. Literals are case-sensitive. Any single quotes that are part of the string must be escaped with a second single quote (no double quotes). For example: 'Dave''s book'

◦ Literal numeric values must not be enclosed in quotes, and are always decimal (0-9).
• All HTTP query responses generated by the API code follow the JSON standard. No XML response format is provided at this time.
• HTTP request messages have a practical limit of about 2000 bytes, which can be less if certain proxy servers are traversed in the network path.
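The single-quote escaping rule for literal strings can be applied mechanically before building a request. A small helper sketch (not itself part of the API):

```python
def quote_literal(value):
    """Wrap a literal string in single quotes for a StoreAll query
    string, doubling any embedded single quotes per the rule above."""
    return "'" + value.replace("'", "''") + "'"

assert quote_literal("Dave's book") == "'Dave''s book'"
assert quote_literal("plain") == "'plain'"
```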

URL encoding

HTTP query strings are URL-decoded by the API code. API clients must encode special characters, such as the greater-than character (>), by replacing them with their hexadecimal equivalent values, as shown by the examples in this section. The API’s URL decoder interprets certain special characters properly without their being URL-encoded. Before the question mark character (?) in any HTTP request URL, the following characters are safe and do not need to be URL-encoded: / : - _ . ~ @ #. After the question mark character (?), the following characters are safe and do not need to be URL-encoded: = & #.

All other characters must be URL-encoded as their hexadecimal value as described in the ISO-8859-1 (ISO-Latin) standard. For example, the plus character (+) must be encoded as %2B, and the greater-than character (>) must be encoded as %3E. Spaces can be encoded as either %20 or the plus character (+), such as "my%20file.txt" or "my+file.txt" for the file "my file.txt". The plus character (+) is converted to a space when the URL is decoded by the API code. To include a plus character (+) in the URL, encode it as %2B, such as "A%2B" instead of "A+".

If you are using a tool such as curl to send the HTTP request, the tool might URL-encode certain characters automatically, although you might have to enclose at least part of the URL in quotes for it to do so. The exact behavior depends on the tool. In the curl examples shown in this section, the entire URL is enclosed in double quotes so that the non-encoded characters can be shown for readability. The curl tool URL-encodes all required characters within the double quotes correctly. If you are using a different tool, or constructing the URL programmatically or manually, ensure that the right characters are URL-encoded before sending the request over HTTP to the API.
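Python's urllib.parse.quote can reproduce these rules if the safe-character sets above are passed explicitly. A sketch; the helper names are illustrative:

```python
from urllib.parse import quote

SAFE_BEFORE_QMARK = "/:-_.~@#"   # safe characters before the '?'
SAFE_AFTER_QMARK = "=&#"         # safe characters after the '?'

def encode_path(path):
    """URL-encode the portion of the URL before the question mark."""
    return quote(path, safe=SAFE_BEFORE_QMARK)

def encode_query(query):
    """URL-encode the query string after the question mark."""
    return quote(query, safe=SAFE_AFTER_QMARK)

# Spaces become %20; '+' must become %2B so it is not decoded as a space.
assert encode_path("lab/my file.txt") == "lab/my%20file.txt"
assert quote("A+", safe="") == "A%2B"
assert quote(">", safe="") == "%3E"
```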

Pathname parameters

The pathname parameter provided in HTTP requests throughout the syntax must be specified as a relative path from the <urlpath>, including the file name. However, the system metadata attribute system::path available for metadata queries must be specified as a path relative to the mount point of the IBRIX file system, because paths are stored in the metadata database by this technique.

Optional version parameter

Every StoreAll REST API request can optionally include the following literal string immediately after the question mark character (?) in the request: version=1

The version field is recommended, but not required. In the syntax descriptions, it is surrounded by square brackets to indicate that it is optional. Changes to this API will normally be backward-compatible and not require any client-side syntax changes to perform the same operations. However, certain changes might be required in future API versions that break backward compatibility. In that case, the version is increased to the next value. Any request without the version field might no longer work as desired, or it might return an error. Any request with the version field and a value less than or equal to the current version is handled correctly by the new API version, unless the capability has been removed or is beyond the support lifetime of the product.

API date formats

All date/time values accepted by the API in HTTP requests must be in seconds only (no nanoseconds) since the UNIX epoch start point, which is 1 Jan 1970 00:00:00 UTC. In the following example, the user provided the number of seconds elapsed between the UNIX epoch and April 17, 2012, 06:09:22 UTC/GMT as the date and time value in an HTTP request: 1334642962

All dates and times provided in API HTTP responses are in seconds and nanoseconds since the UNIX epoch start point. For example, the following date/time value, as returned by the StoreAll REST API in a JSON HTTP response, signifies the number of seconds since the UNIX epoch on April 17, 2012, 06:09:22.678934883 UTC/GMT: 1334642962.678934883

Some of the time fields stored in the inode of the file in the IBRIX file system store a granularity of seconds only, no nanoseconds. In these cases, the nanoseconds portion is returned as zeros (for example, 1334642962.000000000). Some of the time fields stored in the metadata database store a granularity of microseconds instead of nanoseconds. In these cases, the last 3 digits of the nanoseconds portion are returned as zeros (for example, 1334642962.865449000).
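The split seconds.nanoseconds values returned in JSON responses can be parsed as follows; the example value is the one from the text above.

```python
from datetime import datetime, timezone

def parse_api_time(value):
    """Parse a 'seconds.nanoseconds' UNIX-epoch string from a JSON
    response into a UTC datetime plus the nanoseconds remainder."""
    secs_str, _, nanos_str = value.partition(".")
    secs = int(secs_str)
    nanos = int(nanos_str or "0")
    return datetime.fromtimestamp(secs, tz=timezone.utc), nanos

when, nanos = parse_api_time("1334642962.678934883")
# when is 2012-04-17 06:09:22 UTC and nanos is 678934883
```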

Authentication and permissions

Any user accessing the API must authenticate as one of the valid users or groups configured for this HTTP share, except on anonymous shares. For file content transfer operations (see “File content transfer” (page 175)), including file upload, download, or deletion, the user must also have permission to perform the file system operation, as defined by the permissions and ownership of the file and its containing directories. Anonymous users on anonymous shares can only operate on files that an anonymous user has uploaded using the HTTP share.

For custom metadata assignment and metadata queries, the user must also have file system permission to navigate to the directory containing the file or directory defined in the URI. If the user has that permission, custom metadata assignment and queries are allowed regardless of the ownership or permissions of the file or directory. If a directory is specified, the operations are allowed on all files and subdirectories regardless of their ownership and permissions. Custom metadata is stored only in the Express Query database; it is not stored with the file or its inode on the file system. An anonymous user on an anonymous share must have this navigate permission as the Linux user “daemon” and group “daemon” (daemon:daemon), because that is the user the HTTP server acts as for anonymous operations.

For retention properties assignment, the user must also have file system permission to navigate to the directory containing the file defined in the URI. Additionally, the user must be the owner of the file according to the file system’s properties for the file’s owning user. If these permissions are not

satisfied, the operation is not allowed. Retention properties can never be assigned by anonymous users on anonymous shares, but they can be assigned by authenticated users with sufficient permissions on anonymous shares.

File content transfer

Files can be uploaded and downloaded with the normal IBRIX HTTP shares feature with WebDAV enabled, as described in earlier sections. In addition, the API defines an HTTP DELETE command to delete a file. The delete command is only for WebDAV-enabled shares.

Upload a file (create or replace)

This command transfers the contents of a file from the client to the HTTP share. If the identified file does not already exist on the share, it is created. If it already exists, its contents are replaced with the client’s file, if allowed by the file’s IBRIX permissions and retention properties. Upload capability already exists in the IBRIX HTTP shares feature, and it is documented here for completeness. File creation and replacement is subject to IBRIX permissions on the file and directory, and it is subject to retention settings on that file system. If the operation is denied, an HTTP error is returned. The HTTP command is sent in the form of an HTTP PUT request.

HTTP syntax

The HTTP request line format is:

PUT /<urlpath>/<pathname> HTTP/1.1

The file’s contents are supplied as the HTTP message body. The equivalent curl command format is:

NOTE: The following command should be entered on one line.

curl -T <local_pathname> http[s]://<IP_address>:<port>/<urlpath>/<pathname>

If the urlpath does not exist, an HTTP 405 (Method Not Allowed) error is returned. See “Using HTTP” (page 108) for information about the IP address, port, and URL path.

Parameter Description

local_pathname The pathname of the file, stored on the client’s system, to be uploaded to the HTTP share.

pathname The pathname to be assigned to the new file being created on the HTTP share. If the file already exists, it is overwritten. The pathname should be specified as a relative path from the <urlpath>, including the file’s name.

Example

curl -T temp/a1.jpg https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg

This example uploads the file a1.jpg, stored on the client’s machine in the temp subdirectory of the user’s current directory, to the HTTP share named ibrix_share1. The share is accessed by the IP address 99.226.50.92. Because it is accessed using the standard HTTPS port (443), the port number is not needed in the URL. The file is created as filename xyz.jpg in the subdirectory lab/images on the share. If the file already exists at that path in the share, its contents are overwritten by the contents of a1.jpg, provided that IBRIX permissions and retention settings on that file and directory allow it. If the overwriting is denied, an HTTP error is returned.

If the local file does not exist, the response behavior depends on the client tool. In the case of curl, it returns an error message such as the following:

curl: can't open '/temp/a1.jpg'
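The same upload can be expressed with only the Python standard library. The following sketch builds (but does not send) the PUT request, reusing the host and share names from the example above:

```python
import urllib.request

def build_upload_request(host, share, remote_path, local_path):
    """Build an HTTP PUT request that mirrors `curl -T`: the local
    file's bytes become the message body."""
    url = "https://{}/{}/{}".format(host, share, remote_path)
    with open(local_path, "rb") as f:
        body = f.read()
    return urllib.request.Request(url, data=body, method="PUT")

# req = build_upload_request("99.226.50.92", "ibrix_share1",
#                            "lab/images/xyz.jpg", "temp/a1.jpg")
# urllib.request.urlopen(req) would then send it, given working
# credentials and TLS trust for the share.
```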

Download a file

This command transfers the contents of a file to the client from the HTTP share. Download capability already exists in the IBRIX HTTP shares feature, and it is documented here for completeness. If the file does not exist, a 404 Not Found HTTP error is returned, in addition to HTML output such as the following:

404 Not Found
Not Found
The requested URL /api/myfile.txt was not found on this server.

If using curl, the HTML output is saved to the specified local file as if it were the contents of the file. The HTTP command is sent in the form of an HTTP GET request.

HTTP syntax

The HTTP request line format is:

GET /<urlpath>/<pathname> HTTP/1.1

The equivalent curl command format is:

curl -o <local_pathname> http[s]://<IP_address>:<port>/<urlpath>/<pathname>

See “Using HTTP” (page 108) for information about the IP address, port, and URL path.

Parameter Description

local_pathname The pathname of the file to be downloaded from the HTTP share and stored on the client’s system.

pathname The pathname of the existing file on the HTTP share to download to the client.

Example

curl -o temp/a1.jpg http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg

This example downloads an existing file called xyz.jpg in the lab/images subdirectory of the ibrix_share1 HTTP share. The file is created with the filename a1.jpg on the client system, in the subdirectory temp of the user’s current directory. If the file already exists at that path on the client, its contents are overwritten by the contents of xyz.jpg, provided that the local client’s permissions and retention settings on that file and directory allow it. If the overwriting is denied, a local client system-specific error message is returned.

Delete a file

This command removes a file from the IBRIX file system by using the HTTP share interface. File deletion is subject to IBRIX permissions on the file and directory, and it is subject to retention settings on that file system. If file deletion is denied, an HTTP error is returned. If the file does not exist, a 404 Not Found HTTP error is returned.

NOTE: The delete command is only for WebDAV enabled shares.

The HTTP command is sent in the form of an HTTP DELETE request.

HTTP syntax

The HTTP request line format is:

DELETE /<urlpath>/<pathname> HTTP/1.1

The equivalent curl command format is:

curl -X DELETE http[s]://<IP_address>:<port>/<urlpath>/<pathname>

See “Using HTTP” (page 108) for information about the IP address, port, and URL path.

Parameter Description

pathname The pathname of the existing file on the HTTP share to be deleted.

Example

curl -X DELETE http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg

This example deletes the existing file called xyz.jpg in the lab/images subdirectory on the ibrix_share1 HTTP share.

Custom metadata assignment

The API provides commands to define a custom metadata attribute for an existing file or directory in an HTTP share and to delete an existing custom metadata attribute. For information about retrieving or querying existing custom metadata already applied to files, see “Metadata queries” (page 172).

Upload custom metadata (add or replace)

This command adds or modifies one or more custom metadata attributes for an existing file or directory on the HTTP share. If the specified attribute does not exist for the file, it is added to the custom metadata list for that file or directory. If an attribute already exists, the current value is replaced with the client’s value. Custom metadata applied to a directory applies only to the directory itself, not to the files or directories within it.

Up to 15 metadata attributes can be assigned in one command. However, there is no defined limit on the number of metadata attributes that can be assigned to a file. To assign more than 15 attributes, send multiple PUT requests for the same file. Although attributes and values may be up to 80 characters each, HTTP request messages have a practical limit of about 2000 bytes, which supersedes these maximums.

The ability to add or replace custom metadata is not currently constrained by WORM/retention settings or file permissions. If the file’s directory is accessible to the API user, custom metadata operations are allowed, regardless of file permissions or the WORM state of the file. When custom metadata is added, the file or directory becomes protected from renames or moves, and such actions are denied even if all custom metadata is later removed. This protection is independent of file WORM and retention states.

If the file does not exist, a "500 (Internal Server Error)" HTTP error is returned (not a "404 Not Found"). The HTTP command is sent in the form of an HTTP PUT request.

HTTP syntax

The HTTP request line format is:

NOTE: Enter the following commands on a single line.

PUT command:

PUT /<urlpath>[/<pathname>]?[version=1]assign=<attribute>='<value>'[,<attribute>='<value>'…] HTTP/1.1

curl command. The equivalent curl command format is:

curl -g -X PUT "http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]assign=<attribute>='<value>'[,<attribute>='<value>'…]"

See “Using HTTP” (page 108) for information about the IP address, port, and URL path. If the urlpath does not exist, an HTTP 405 (Method Not Allowed) error is returned.

Parameter Description

pathname The name of the existing file/directory on the HTTP share for which custom metadata is being added or replaced. Directory pathnames must end in a trailing slash (/). If the pathname parameter is not present, custom metadata is applied to the directory identified by <urlpath>.

attribute[n] The attribute name. Up to 15 attributes can be assigned in a single command. The first character must be alphabetic (a-z or A-Z), followed by a sequence of alphanumeric characters or underscores. No other characters are allowed. Attribute names must be less than 80 characters in length.

value[n] The value to associate with this attribute. Currently, only a string value can be assigned and the value must be enclosed in single quotes. Future versions of the API may support numeric or other value types. If the attribute already exists for this file or directory, its value will be replaced with this supplied value. Values must be less than 80 characters in length.

Example

curl -g -X PUT "https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=physician='Smith,+John;+8136',scan_pass='17'"

This example assigns two custom metadata attributes to the existing file called xyz.jpg in the lab/images subdirectory on the ibrix_share1 HTTP share.

The first attribute is physician. Its value contains the last name, first name, and physician’s ID number in the medical center. The metadata value for this key is Smith, John; 8136, with spaces encoded as the plus character (+).

The second attribute is scan_pass. Its value identifies this image as the 17th pass of a multi-image scan. The metadata value for this key is 17. All custom metadata values, even if they are numeric, must be quoted, since all custom metadata values are stored as strings.
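The query string in this example can be generated from attribute/value pairs using the quoting and space-encoding rules shown above. A sketch; the helper name is illustrative:

```python
def build_assign_query(pairs):
    """Build a StoreAll 'assign=' query string: values are wrapped in
    single quotes, embedded quotes doubled, and spaces encoded as '+'."""
    parts = []
    for attribute, value in pairs:
        literal = value.replace("'", "''").replace(" ", "+")
        parts.append("{}='{}'".format(attribute, literal))
    return "assign=" + ",".join(parts)

query = build_assign_query([("physician", "Smith, John; 8136"),
                            ("scan_pass", "17")])
assert query == "assign=physician='Smith,+John;+8136',scan_pass='17'"
```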

Delete custom metadata

This command removes one or more metadata attributes from an existing file or directory in the HTTP share. Up to 15 metadata attributes may be removed in one command. The ability to delete custom metadata is not currently constrained by WORM/retention settings or file permissions. If the file’s directory is accessible to the API user, custom metadata operations are allowed, regardless of file permissions or the WORM state of the file. Deleting all custom metadata does not remove the protection from renames or moves. This protection is independent of file WORM and retention states.

If the file does not exist, no HTTP error status is returned. A JSON error message is returned instead, as shown in the following example:

[ { "physician" : "error in deleting" } ]

If the file exists but any attributes being deleted do not exist, no HTTP error status is returned, and the non-existent attributes are silently ignored. The HTTP command is sent in the form of an HTTP DELETE request.
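Because a missing file produces a JSON error body rather than an HTTP error status, clients should inspect the response. A sketch against the error shape shown above; the helper name is illustrative:

```python
import json

def failed_deletions(response_body):
    """Return the attribute names reported as failing to delete in the
    JSON error body returned when the target file does not exist."""
    failures = []
    for entry in json.loads(response_body):
        for attribute, status in entry.items():
            if status == "error in deleting":
                failures.append(attribute)
    return failures

assert failed_deletions('[ { "physician" : "error in deleting" } ]') == ["physician"]
```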

HTTP syntax

The HTTP request line format is the following, on one line:

DELETE /<urlpath>[/<pathname>]?[version=1]attributes=<attribute>[,<attribute>…] HTTP/1.1

The equivalent curl command format is the following, on one line:

curl -g -X DELETE "http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]attributes=<attribute>[,<attribute>…]"

See “Using HTTP” (page 108) for information about the IP address, port, and URL path.

Parameter Description

pathname The name of the existing file/directory on the HTTP share for which custom metadata is to be deleted. Directory pathnames must end in a trailing slash /.

attribute[n] The existing name(s) for the custom metadata attribute(s) to be deleted from the file or directory custom metadata list.

Example

curl -g -X DELETE "http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?attributes=physician,scan_pass"

This example deletes two custom metadata attributes from an existing file called xyz.jpg in the lab/images subdirectory on the ibrix_share1 HTTP share. The first attribute to delete is physician and the second is scan_pass.

Metadata queries

The API provides a command to query the metadata of a file or directory on an IBRIX HTTP share. The command defines the file or directory to query, the metadata fields to return, and how to filter the list of files and directories returned based on metadata criteria. All queries are performed on the Express Query database, requiring no other file system access or scans. The HTTP command is sent in the form of an HTTP GET request.

System and custom metadata

Two types of metadata are supported for queries, and both can be referenced in the same query:
• System metadata applies to all files and directories. Each file and directory stored in IBRIX includes a fixed set of attributes comprising its system metadata. System metadata attributes are distinguished from custom metadata attributes by the system:: prefix. System metadata attributes cannot be deleted by the user through the API.
• Custom metadata applies only to files and directories where the user assigns it. Custom metadata names are user-defined, with value strings also defined by the user. Custom metadata is meaningful to the user, but it is not used by IBRIX. Custom metadata can be added, replaced, or deleted by the user (see “Custom metadata assignment” (page 171)).

System metadata available

The following table describes the system metadata attributes available for query and update using the API. For "date" types, see “API date formats” (page 174). With the exception of system::deleteTime, all of the system metadata attributes listed in this table are valid for live (that is, not-yet-deleted) files and directories. For deleted files, only the following attributes are valid: system::path, system::deleteTime, system::lastActivityTime, and system::lastActivityReason.

For each attribute, the entries below give the key, its type, a description, an example value, and whether it is writeable through the API.

system::path (string). The pathname of the file or directory, expressed as a path relative to the mount point of the IBRIX file system. This attribute is always returned above the JSON stanza of requested attributes within curly braces { }, not inside the stanza. Example: images/xray.jpg. Writeable: no.

system::ownerUserId (numeric). The IBRIX user ID (UID) number of the owner of the file or directory. Example: 433. Writeable: no.

system::size (numeric). The file size: the number of bytes stored by IBRIX to hold the file’s contents. Example: 1025489. Writeable: no.

system::ownerGroupId (numeric). The IBRIX group ID (GID) number of the primary group to which the owner of the file or directory belongs. Example: 700. Writeable: no.

system::onDiskAtime (numeric). The date/time recorded in the atime field of the file inode in the file system. See “system::onDiskAtime” (page 183) and “API date formats” (page 174). Example: as a query criterion (seconds): 1334642962; in a JSON response (including nanoseconds): 1334642962.556708192. Writeable: no.

system::lastChangedTime (numeric). The date/time of the last status change (ctime). See “API date formats” (page 174). Writeable: no.

system::lastModifiedTime (numeric). The date/time of the last content modification (mtime). See “API date formats” (page 174). Writeable: no.

system::retentionExpirationTime (numeric). The date/time when a retained file will expire (or has expired) from retention. After expiration, the file reverts to WORM but not retained status. This attribute applies only to files, returning 0 for directories. If a file has never been retained, this value is 0. See “API date formats” (page 174). Writeable: yes (see “Retention properties assignment” (page 172)).

system::mode (numeric). The Linux mode/permission bits (a combination of the values shown by the Linux man 2 stat command). See “system::mode” (page 184) for more information. Example: a decimal number, such as 33060 for the octal value 0100444 (regular file, read-only for owner/group/other). Writeable: no.

system::tier (string). The user-defined name of the IBRIX tier of storage hosting this file or directory. If the file is stored in a segment that is not assigned to any tier, the string literal "no tier" is returned. Example: tier1_fast. Writeable: no.

system::createTime (numeric). The date/time when the file or directory was created (added or uploaded) to the IBRIX file system. See “API date formats” (page 174). Writeable: no.

system::retentionState (numeric). The current WORM/retention state of the file, which is a combination of these bit values: 0x01: WORM; 0x02: retained; 0x04: (not used); 0x08: under legal hold. This attribute applies only to files, returning 0 for directories. A value of zero for files indicates a normal file, not WORM or retained. Example: a decimal number, such as 11 for the bit value 0x0B (under legal hold, retained, and WORM). Writeable: partial (see system::worm).

system::worm
  Type: boolean
  Description: WORM status of the file.
  Example: true
  Writeable: yes, to true only, at most one time (see "Retention properties assignment" (page 172))

system::deleteTime
  Type: numeric
  Description: The date/time when the file or directory was deleted from the IBRIX file system. This attribute is valid only for deleted files. Deleted files are returned in query results only if the query explicitly includes system::deleteTime as an attribute to be returned or as a query criterion.
  Example: See "API date formats" (page 174).
  Writeable: no

system::lastActivityTime
  Type: numeric
  Description: The latest date/time among the following five attributes of the file or directory:
    system::createTime
    system::lastModifiedTime
    system::lastChangedTime
    system::deleteTime
    the time of the last custom metadata assignment
  This attribute is useful for determining the last date/time at which a file had any modification activity. It is returned in query results only if the request explicitly includes system::lastActivityTime as an attribute to be returned.
  Example: See "API date formats" (page 174).
  Writeable: no

system::lastActivityReason
  Type: numeric
  Description: The attribute that is represented by system::lastActivityTime, which is a combination of the following values:
    0x01: system::createTime
    0x02: system::lastModifiedTime
    0x04: system::lastChangedTime
    0x08: system::deleteTime
    0x10: custom metadata assignment time (not queryable as a system:: attribute)
  This attribute is returned in query results only if the request explicitly includes system::lastActivityReason as an attribute to be returned.
  Example: A decimal number, such as 6, signifying that the last activity on this file was a content modification, which changes both system::lastModifiedTime (0x02) and system::lastChangedTime (0x04).
  Writeable: no

system::onDiskAtime

The atime inode field in IBRIX can be accessed as the system::onDiskAtime attribute from the API. This field represents different concepts during the lifetime of a WORM/retained file, and it often represents something other than the time of the file's last access, which is why the field was named onDiskAtime rather than (for example) lastAccessedTime. (See "Retention properties assignment" (page 172) for a description of this life cycle.)

• Before a file is retained, whether in the WORM state or not, atime represents the last accessed time, as long as the file system is mounted with the non-default atime option. If the file system is mounted with the default noatime option, atime is the file's creation time, and never changes unless the file is retained (see the second bullet). See "Creating and mounting file systems" (page 13) for more information about mount options.

• While a file is in the retained state, atime represents the retention expiration time.

• After retention expires, atime represents the time at which the file was first retained (even if the file has been retained and expired more than once), and it never changes again, unless the file is re-retained (see the second bullet).

If you have enabled the auditing of file read events, reads are logged in the audit logs. However, file reads do not update system::onDiskAtime even if the file reads are being audited. All other file accesses modify system::onDiskAtime with the current value of atime. Therefore, before the file is retained (first bullet), if the file system is mounted with the atime option, system::onDiskAtime represents the last accessed time before the last file modification, not necessarily the current atime or the last accessed time.
To list all read accesses to a file, use the ibrix_audit_reports command as described in the HP IBRIX 9000 Storage CLI Reference Guide.
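The bitmask attributes above (system::retentionState and system::lastActivityReason) decode mechanically. The following Python helpers are illustrative only, not part of the product; the 0x01 activity bit assignment follows the attribute ordering in the table.

```python
# Illustrative helpers for decoding the decimal bitmask values returned
# by system::retentionState and system::lastActivityReason.

RETENTION_BITS = {
    0x01: "WORM",
    0x02: "retained",
    0x08: "under legal hold",
}

ACTIVITY_BITS = {
    0x01: "system::createTime",
    0x02: "system::lastModifiedTime",
    0x04: "system::lastChangedTime",
    0x08: "system::deleteTime",
    0x10: "custom metadata assignment time",
}

def decode_bits(value, table):
    """Return the names of all bits set in a decimal bitmask value."""
    return [name for bit, name in sorted(table.items()) if value & bit]
```

For example, decode_bits(11, RETENTION_BITS) yields the three state names that make up the bit value 0x0B, and decode_bits(6, ACTIVITY_BITS) reports the content-modification pair from the example above.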

system::mode

The following system::mode bits are defined (in octal):

0140000  socket
0120000  symbolic link
0100000  regular file
0060000  block device
0040000  directory
0020000  character device
0010000  FIFO
0004000  set-user-ID bit
0002000  set-group-ID bit
0001000  sticky bit
0000400  owner has read permission
0000200  owner has write permission
0000100  owner has execute permission
0000040  group has read permission
0000020  group has write permission
0000010  group has execute permission
0000004  others have read permission
0000002  others have write permission
0000001  others have execute permission
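Python's standard stat module understands these same bits, so a decimal system::mode value from a query result can be decoded directly. This helper is an illustrative sketch, not part of the product:

```python
import stat

# Interpret the decimal value of system::mode using the standard stat
# module; the bit definitions match the octal table above.

def describe_mode(mode):
    """Return (file_type, permission_string) for a decimal mode value."""
    kinds = [
        (stat.S_ISSOCK, "socket"), (stat.S_ISLNK, "symbolic link"),
        (stat.S_ISREG, "regular file"), (stat.S_ISBLK, "block device"),
        (stat.S_ISDIR, "directory"), (stat.S_ISCHR, "character device"),
        (stat.S_ISFIFO, "FIFO"),
    ]
    kind = next((name for test, name in kinds if test(mode)), "unknown")
    return kind, stat.filemode(mode)[1:]   # drop the leading type character
```

For the documented example, describe_mode(33060) reports a regular file with permissions r--r--r-- (octal 0100444), and describe_mode(16877) reports a directory with rwxr-xr-x (octal 040755), matching the JSON response example later in this chapter.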

Wildcards The StoreAll REST API provides three wildcards:

Wildcard Description

* A single attribute name of * returns all system and custom metadata attributes for the files and directories matching the query.

system::* A single attribute name of system::* returns all system metadata attributes for the files and directories matching the query. It does not include any custom metadata entries.

custom::* A single attribute name of custom::* returns all custom (user-defined) metadata attributes for the files and directories matching the query. It does not include any system metadata entries.

For wildcards that return system metadata attributes, the results will not include attributes that describe deleted files (system::deleteTime, system::lastActivityTime, and system::lastActivityReason).

Pagination The StoreAll REST API provides a way for users to specify a portion of the total list of records (files and directories) to return in the JSON query results.

Parameter Description

skip The skip parameter defines the number of records to skip before returning any results. The value is zero-based. For example, skip=100 skips the first 100 records of the results, so the 101st and later records are returned. If the total result set contains fewer than 101 records, no results are returned.

top The top parameter defines the maximum number of total records to return. For example, top=2000 returns at most 2000 rows.

The skip and top parameters can be combined. For example, supplying both skip=100 and top=2000 returns records 101 through 2100. By combining these two parameters, the user can absorb a large result set in chunks, for example, records 1-2000, 2001-4000, and so on. The following limitations apply:

• Every query will be executed in full, even if only a subset of results is returned. For some queries, this may place a substantive load on the system. Keeping top values as large as possible will limit this load.

• Because a query is executed for every request, there may be inconsistencies in query results if files are created or deleted between API requests.

By default, if the skip parameter is not supplied, the results will not skip any records. Similarly, if the top parameter is not supplied, the results will contain all records.
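The paging scheme can be sketched as follows; the helper functions and the base URL are illustrative, not part of the product:

```python
# Sketch of paging through a large result set with skip/top.
# The host name and share name in `base` are hypothetical examples.

def page_params(page_size, pages):
    """Yield (skip, top) pairs for successive pages of page_size records."""
    for n in range(pages):
        yield n * page_size, page_size

def page_url(base, skip, top):
    """Append skip/top paging parameters to a query URL."""
    return "{}&skip={}&top={}".format(base, skip, top)

base = "http://example.host/share1/dir/?attributes=system::size&recurse"
urls = [page_url(base, s, t) for s, t in page_params(2000, 3)]
```

The three generated URLs request records 1-2000, 2001-4000, and 4001-6000; a client would stop issuing requests once a page comes back empty.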

HTTP syntax

The HTTP request line format is the following, on one line:

GET /<urlpath>[/<pathname>]?[version=1][attributes=<attr1>[,<attr2>,…]][&query=<query_attr><operator><query_value>][&recurse][&skip=<skip_records>][&top=<max_records>][&ordered] HTTP/1.1

The equivalent curl command format is the following, on one line:

curl -g "http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1][attributes=<attr1>[,<attr2>,…]][&query=<query_attr><operator><query_value>][&recurse][&skip=<skip_records>][&top=<max_records>][&ordered]"

See "Using HTTP" (page 108) for information about the IP address, port, and URL path. If the urlpath or pathname does not exist, a JSON output of no results is returned (see "JSON response format" (page 186)), and the HTTP status code 200 (OK) is returned rather than an HTTP error such as 404 (Not Found).

Parameter Description

pathname The name of the existing file or directory on the HTTP share, if querying metadata of a single file/directory. If not present, the query applies to the <urlpath>. Furthermore:
• Directory pathnames must end in a trailing slash (/).
• If the &recurse identifier is supplied for a directory, the query applies to the entire directory tree: the directory itself, all files in that directory, and all subdirectories, recursively.
• If the &recurse identifier is not supplied and the pathname is for a directory, the query operates only on the given directory and the files in that directory, but not on subdirectories.
• If the pathname is for a file, the query applies only to that file.

attr[n] A comma-separated list of system and/or custom metadata attribute names to be returned in the JSON response for each file or directory matching the query criterion. The special attribute names *, system::*, and custom::* are described under “Wildcards” (page 184).

query_attr A system and/or custom metadata attribute to be compared against the value as the query criterion. Only one attribute can be listed per command.

operator The query operation to perform against the query_attr and value, one of:
= (equals exactly)
!= (does not equal)
< (less than)
<= (less than or equal to)
> (greater than)
>= (greater than or equal to)
Only for custom attributes and string-valued system attributes (for example, system::path, system::tier):
~ (regular expression match)
!~ (does not match regular expression)

query_value The value to compare against the query_attr using the operator. The value is either a numeric or string literal. See “General topics regarding HTTP syntax ” (page 172) for details about literals.

recurse If the recurse attribute is present, the query searches through the given directory and all of its subdirectories. If the recurse attribute is not present, the query operates only on the given file, directory, or directory of files (but not subdirectories). See pathname earlier in this table for details.

skip_records If this attribute is present, it defines the number of records to skip before returning any results. The value is zero-based. See “HTTP syntax” (page 185).

max_records If this attribute is present, it defines the maximum number of total records to return from the result set. See “HTTP syntax” (page 185).

ordered If this attribute is present, the list of files and attributes returned is sorted lexicographically by file name. The use of ordered on large results sets might affect the performance of the query. Without ordered, files might occur in any order in the result set.
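The pieces in the table above can be assembled into a request URL. The following Python helper is an illustrative sketch, not a product API; the host and share names in the usage line come from the examples later in this chapter:

```python
# Illustrative helper that assembles a StoreAll REST API query URL from
# its parts, following the HTTP syntax described above.

def build_query_url(host, share, pathname="", attributes=(), query=None,
                    recurse=False, skip=None, top=None, ordered=False):
    url = "http://{}/{}/{}?".format(host, share, pathname)
    parts = []
    if attributes:
        parts.append("attributes=" + ",".join(attributes))
    if query:
        parts.append("query=" + query)
    if recurse:
        parts.append("recurse")
    if skip is not None:
        parts.append("skip={}".format(skip))
    if top is not None:
        parts.append("top={}".format(top))
    if ordered:
        parts.append("ordered")
    return url + "&".join(parts)

url = build_query_url("99.226.50.92", "ibrix_share1", "lab/images/",
                      ("system::size", "physician"),
                      query="system::size>2048", recurse=True)
```

The resulting URL matches the "Get selected metadata for all files in a given directory tree that match a system metadata query" example later in this chapter.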

Regular expressions The arguments to the regular expression operators (~ and !~) are POSIX regular expressions, as described in POSIX 1003.1-2008 at http://pubs.opengroup.org/onlinepubs/9699919799/, section 9, Regular Expressions.
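As a quick check, the patterns used in the example queries later in this chapter behave the same under Python's re module as under POSIX extended regular expressions (this equivalence holds for these particular patterns, not for every POSIX construct). The file names below are hypothetical:

```python
import re

# Patterns taken from the example queries in this chapter.
name_pattern = re.compile(r".*\.(gif|jpg)$")   # from system::path~'.*\.(gif|jpg)$'
prefix_pattern = re.compile(r"^S.*")           # from physician~'^S.*'

files = ["scan01.jpg", "notes.txt", "xray.gif"]
matches = [f for f in files if name_pattern.match(f)]
```

Here matches keeps only the .gif and .jpg names, and prefix_pattern accepts any value beginning with S.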

JSON response format

The result of the query is an HTTP response in JSON format, as in the following example:

[
  { "mydir" : {
      "system::ownerUserId" : 1701,
      "system::size" : 0,
      "system::ownerGroupId" : 650,
      "system::onDiskAtime" : 1346895723.552374000,
      "system::lastAccessedTime" : 1346895723.552810000,
      "system::lastChangedTime" : 1346895723.552374000,
      "system::lastModifiedTime" : 1346895723.552374000,
      "system::retentionExpirationTime" : 0.000000000,
      "system::mode" : 16877,
      "system::tier" : "no tier",
      "system::createTime" : 1346895723.552374000,
      "system::retentionState" : 0,
      "system::worm" : false
  } },
  { "mydir/myfile.txt" : {
      "system::ownerUserId" : 1701,
      "system::size" : 3,
      "system::ownerGroupId" : 650,
      "system::onDiskAtime" : 1378432229.000000000,
      "system::lastAccessedTime" : 1346896240.316746000,
      "system::lastChangedTime" : 1346896235.000000000,
      "system::lastModifiedTime" : 1346895753.000000000,
      "system::retentionExpirationTime" : 1378432229.000000000,
      "system::mode" : 33060,
      "system::tier" : "no tier",
      "system::createTime" : 1346895753.815070000,
      "system::retentionState" : 3,
      "system::worm" : true,
      "scan_pass" : "17",
      "physician" : "Smith, John; 8136"
  } }
]

If no files or directories meet the criteria of the query (an empty result set), or if the urlpath or pathname does not exist, then a JSON output of no results is returned, consisting of just an open and close bracket on two separate lines:

[
]
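A response in this format is easy to post-process. The following Python sketch (illustrative only, using an abbreviated copy of the response above) flattens the array into a dict keyed by path:

```python
import json

# Abbreviated version of the JSON query response shown above.
sample = '''
[
  { "mydir" : { "system::size" : 0, "system::worm" : false } },
  { "mydir/myfile.txt" : { "system::size" : 3, "system::worm" : true } }
]
'''

def flatten(response_text):
    """Map each path in a query response to its attribute dict."""
    result = {}
    for entry in json.loads(response_text):
        for path, attrs in entry.items():
            result[path] = attrs
    return result

result = flatten(sample)
```

Each array element holds a single path-to-attributes object, so the flattened dict has one entry per file or directory returned by the query.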

Example queries

Get selected metadata for a given file

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?attributes=system::size,physician"

This example queries only the file called xyz.jpg in the lab/images subdirectory on the ibrix_share1 HTTP share. A JSON document is returned containing the system size value and the custom metadata value for the physician attribute, for this file only.

Get selected system metadata for all files in a given directory

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::mode,system::tier"

This example queries only the directory lab/images on the ibrix_share1 HTTP share. A JSON document is returned containing the POSIX mode/permission bits and storage tier name for the lab/images directory itself, in addition to the files and directories in lab/images (but not in any subdirectories, because there is no &recurse option).

Get selected metadata for all files in a given directory

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::size,physician"

This example queries all files in the subdirectory lab/images of the ibrix_share1 HTTP share, but not files or directories in any subdirectories. A JSON document is returned containing the system size value and the custom metadata value for the physician attribute, for all files in lab/images. For files that don't have a physician attribute, only the system::size is returned.

Get selected metadata for all files in a given directory tree

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::size,physician&recurse&ordered"

This example queries all files in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned containing the system size value and the custom metadata value for the physician key, for all files and subdirectories in the lab/images directory tree, as well as for the lab/images directory itself. The list of files is ordered alphabetically by file name.

Get selected metadata for a page of files in a given directory tree

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::size,physician&recurse&skip=2000&top=100"

This example queries all files in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned containing the system size value and the custom metadata value for the physician key, for result set entries 2001 through 2100. The results are not ordered, which speeds up the query. In a typical scenario, such as this one, the client has already issued queries to receive the first 2000 results, and usually issues further queries until no more results are returned.

Get selected metadata for all files in a given directory tree that match a system metadata query

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::size,physician&query=system::size>2048&recurse"

This example queries all files larger than 2 KB in the lab/images subdirectory of the ibrix_share1 HTTP share, as well as all files in all subdirectories, recursively walking the directory tree. A JSON document is returned containing the system size value and the custom metadata value for the physician attribute, for all files and subdirectories larger than 2 KB in the lab/images directory tree, as well as for the lab/images directory itself (if larger than 2 KB).

Get selected metadata for all files in a given directory tree that match a custom metadata query

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::size,physician&query=department!='billing'&recurse"

This example queries all files that have a custom metadata attribute of department with a value other than billing, in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned containing the system size value and the custom metadata value for the physician attribute, for all files and subdirectories not in the billing department in the lab/images directory tree. Files without a department attribute are not included in the results.

Get all metadata for all files in a given directory tree that match a custom metadata query

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=*&query=physician~'^S.*'&recurse"

This example queries all files that have a custom metadata attribute of physician with a value that starts with S, in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned containing all attribute values, for all files and subdirectories in the lab/images directory tree that match the custom metadata criterion.

Get all custom metadata for all files in a given directory tree that match a custom metadata query

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=custom::*&query=physician~'^S.*'&recurse"

This example queries all files that have a custom metadata attribute of physician with a value that starts with S, in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned containing all custom metadata attribute values, for all files and subdirectories in the lab/images directory tree that match the custom metadata criterion.

Get all files that match a name pattern

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/images?query=system::path~'.*\.(gif|jpg)$'"

This example returns a JSON document that contains all files in the lab/images directory that end in .gif or .jpg.

Get all activity-related times for files with recent activity

The following is one command line:

curl -g "http://99.226.50.92/ibrix_share1/lab/?attributes=system::createTime,system::lastChangedTime,system::lastModifiedTime,system::deleteTime&query=system::lastActivityTime>1334642962"

This example returns a JSON document that contains all files in the lab directory that have experienced activity since April 17, 2012, 06:09:22 UTC/GMT. For live files, the following attributes are returned: system::createTime, system::lastChangedTime, and system::lastModifiedTime. For deleted files, system::deleteTime is returned.

Retention properties assignment

Retention and WORM support was initially implemented in 9000/IBRIX v6.0 following an atime-based retention date interface, to be compatible with existing products that implement retention this way. This feature is independent of the StoreAll REST API. Briefly, without the API, the sequence of events is:

1. A user creates or uploads a file to an IBRIX file system.
2. If the autocommit feature is enabled, the file becomes WORM after a certain period of inactivity. If a non-zero default retention period is defined for the file system, the file is also set to the retained state for that period of time, at the same time it becomes WORM.
3. If autocommit is not enabled, the user sets the last access time (atime) to the desired retention expiration date/time (or skips this step if a non-zero default retention period is defined for the file system and the user does not want to override that period).
4. If autocommit is not enabled, after setting the atime, the user turns off all write permission bits on the file. This triggers the file's state transition to WORM, and also to retained if the atime is in the future or if there is a non-zero default retention period.
5. Later, the user can change the atime to change the expiration time, subject to the file system's retention policy settings.

The StoreAll REST API commands provide the same ability to perform these actions, but in fewer steps.
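Steps 3 and 4 of the atime-based sequence use only standard file operations. The following Python sketch runs them against an ordinary temporary file; on a retention-enabled IBRIX file system the chmod would trigger the WORM/retained transition, while here it merely sets the times and permission bits:

```python
import os
import stat
import tempfile

# Sketch of steps 3-4 of the non-API retention sequence, applied to an
# ordinary temporary file. The expiration value is the epoch time used in
# the retention examples later in this chapter.

expire = 1376356584  # desired retention expiration (seconds since epoch)

fd, path = tempfile.mkstemp()
os.close(fd)

# Step 3: set atime to the desired retention expiration (mtime unchanged).
os.utime(path, (expire, os.stat(path).st_mtime))

# Step 4: turn off all write permission bits on the file.
mode = os.stat(path).st_mode
os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
```

After these two calls the file's atime equals the intended expiration and no write bits remain set, which is exactly the state an IBRIX retention-enabled file system interprets as a WORM/retained transition.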

HTTP syntax

The commands provided in this section should be entered on one line. The HTTP request line format is the following:

PUT /<urlpath>/<pathname>?assign=[system::retentionExpirationTime=<date/time>][,system::worm='true'] HTTP/1.1

The equivalent curl command format is the following:

curl -g -X PUT "http[s]://<IP_address>:<port>/<urlpath>/<pathname>?assign=[system::retentionExpirationTime=<date/time>][,system::worm='true']"

Either or both of system::retentionExpirationTime and system::worm can be specified.

Parameter Description

pathname The name of an existing file on the HTTP share. The retention properties of this file will be changed.

system::retentionExpirationTime If present, defines the date/time at which the file should expire from the retained state. After that time, the file will still be WORM (immutable) forever, but the file can be deleted. The date/time must be formatted according to "API date formats" (page 174). If the file is not currently in the retained state, the date/time is stored as the file's atime, but retention rules are applied to the file only if system::worm=true in this command or a later command. If the file is already retained, the retention expiration date/time is changed to the new value, unless the new value is earlier than the file's existing retention expiration date/time and the file system's retention mode is set to enterprise. In that case, an error is returned and the date/time is not changed. The retention period can be shortened only in relaxed mode, not in enterprise mode. If not present, and system::worm is present, the default retention period is applied to the file, if a default is defined for this file system. If no default is applied, then the file becomes WORM (immutable) but not retained (so it can still be deleted).

system::worm This attribute sets the state of the file to WORM. If present, the value must be the literal string true; no other value is accepted. At the same time, if the atime (retention expiration date/time) is in the future, or if the file system’s default retention period is nonzero, it sets the retention expiration date/time either to the atime (if it is in the future) or the default retention period. A file’s state can be changed to WORM only once. A file in WORM or retained state cannot be reverted to non-WORM, and cannot be un-retained through the StoreAll REST API. See the ibrix_reten_adm command or the equivalent Management Console actions for administrative override methods to un-retain a file.

Example: Set a file to WORM without specifying retention expiration

curl -g -X PUT "https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=system::worm='true'"

In this example, no retention expiration date/time is provided, but the file state is changed to WORM. As part of processing this command, the file may also be set to the retained state. This occurs if the atime has already been set to a future time, or if the file system's default retention period is non-zero. The retention expiration time is set to the atime (if in the future) or to the default.

Example: Set a file to WORM and retained with a retention expiration date/time

curl -g -X PUT "https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=system::retentionExpirationTime=1376356584,system::worm='true'"

In this example, the file state is changed to WORM and retained. The retention expiration date/time is set to 13 Aug 2013 01:16:24. The file system default retention period is ignored.
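As a quick sanity check of the epoch value used in this example, standard date handling confirms that 1376356584 seconds corresponds to 13 Aug 2013 01:16:24 UTC:

```python
from datetime import datetime, timezone

# Convert the retention expiration epoch value from the example above
# into a human-readable UTC timestamp.
when = datetime.fromtimestamp(1376356584, tz=timezone.utc)
stamp = when.strftime("%d %b %Y %H:%M:%S")
```

The same conversion works for any value accepted by system::retentionExpirationTime, since the API date format is seconds since the epoch.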

Example: Set/change the retention expiration date/time without a WORM state transition

curl -g -X PUT "https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=system::retentionExpirationTime=1376356584"

In this example, a file's retention expiration date/time is assigned, but no state transition to WORM is performed. If the file is not already retained, the atime is assigned this value, and the file remains un-retained; the value takes effect if the file is ever transitioned to WORM in the future, either manually or by autocommit. If the file is already retained, the retention expiration date/time is changed to this new value. If retention settings prohibit this, an error is returned.

HTTP Status Codes

The following HTTP status codes can be returned by the StoreAll REST API. For error status codes, check the following files for further information about the error:

• access_log
• error_log

The logs are in the following directory, on the Active FM server node of the cluster:

/usr/local/ibrix/httpd/debug/logs

By default, no activity is written to the access_log file. To enable the HTTP server to write entries to the file for every HTTP access from a client, uncomment this line in the file /usr/local/ibrix/httpd/conf/httpd.conf:

# CustomLog "debug/logs/access_log" common

Be aware that this log file can grow quickly from client HTTP accesses. Manage the size of this file so that it does not fill up the local root file system. Enable it only when needed to diagnose HTTP traffic.

Status code Description

200 (OK) If no errors are encountered, the status code 200 is returned.

204 (No Content) If no errors are encountered and there is no content to be returned that satisfies the StoreAll REST API query conditions/restrictions, the status code 204 is returned in the message header.

400 (Bad Request) If the URL parser in the StoreAll REST API detects an error in the URL it receives, it returns a 400 error. See the access and error logs for details.

404 (Not Found) If the path and filename in the URL does not exist and the request is not a PUT (upload) of a new file, the StoreAll REST API returns a 404 error.

500 (Internal Server Error) If the StoreAll REST API encounters an error other than those described previously, it returns a 500 error. See the access and error logs for details.

14 Configuring Antivirus support

The IBRIX Antivirus feature can be used with supported Antivirus software, which must be run on systems outside the cluster. These systems are called external virus scan engines. To configure the Antivirus feature on an IBRIX cluster, complete these steps:

1. Add the external virus scan engines to be used for virus scanning. You can schedule periodic updates of virus definitions from the virus scan engines to the cluster nodes.
2. Enable Antivirus on file systems.
3. Configure Antivirus settings as appropriate for your cluster.

For file sharing protocols other than SMB (CIFS), when Antivirus is enabled on a file system, scans are triggered when a file is first read. Subsequent reads of the file do not trigger a scan unless the file has been modified or the virus definitions have changed. For SMB, you must specify the file operations that trigger a scan (open, close, or both).

The scans are forwarded to an external scan engine, which blocks the operation until the scan is complete. After a successful scan, if the file is found to be infected, the system reports a permission denied error as the result of the file operation. If the file is clean, the file operation is allowed to go through.

All infected files are quarantined by default. Use the quarantine utility (ibrix_avquarantine) to manage the quarantined infected files, such as to move, delete, list, or reset them. For more information, see the HP IBRIX 9000 Storage CLI Reference Guide.

You can define Antivirus exclusions on directories in a file system to exclude files from being scanned. When you define an exclusion rule for a directory, all files/folders in that directory hierarchy are excluded from Antivirus scans based on the rule.

Antivirus support can be configured from the GUI or the CLI. On the GUI, select Cluster Configuration from the Navigator, and then select Antivirus from the lower Navigator. The Antivirus Settings panel displays the current configuration.
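The rescan rule described above (scan on first read; rescan only after a modification or a virus definition update) can be sketched as a simple decision function. This is an illustrative model, not product code:

```python
# Illustrative model of the Antivirus rescan rule: a file is rescanned on
# read only if it was never scanned, was modified since the last scan, or
# the virus definitions changed since the last scan.

def needs_scan(file_mtime, last_scan_mtime, defs_version, last_scan_defs):
    """Return True if a read of the file should trigger a virus scan."""
    if last_scan_mtime is None:          # never scanned before
        return True
    if file_mtime > last_scan_mtime:     # file modified since last scan
        return True
    if defs_version != last_scan_defs:   # virus definitions updated
        return True
    return False
```

For example, a file scanned at time 200 with definition version 1 is not rescanned on a later read, but becomes scannable again as soon as it is modified or the definitions move to version 2.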

On the CLI, use the ibrix_avconfig command to configure Antivirus support. Use the ibrix_av command to update Antivirus definitions or view statistics.

Adding or removing external virus scan engines

The Antivirus software is run on external virus scan engines. You will need to add these systems to the Antivirus configuration.

IMPORTANT: HP recommends that you add a minimum of two virus scan engines to provide load balancing for scan requests and to prevent loss of scanning if one virus scan engine becomes unavailable. On the GUI, select Virus Scan Engines from the lower Navigator to open the Virus Scan Engines panel, and then click Add on that panel. On the Add dialog box, enter the IP address of the external scan engine and the ICAP port number configured on that system.

NOTE: The default port number for ICAP is 1344. HP recommends that you use this port, unless it is already in use by another activity. You may need to open this port for TCP/UDP in your firewall.
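Before adding a scan engine, it can be useful to confirm that its ICAP port accepts TCP connections. The following Python sketch is an illustrative check, not a product utility; the host and port are whatever you configured on the scan engine:

```python
import socket

# Illustrative connectivity check for a scan engine's ICAP port
# (default 1344). Returns True if a TCP connection succeeds.

def icap_reachable(host, port=1344, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False result typically means the engine is down, the port is blocked by a firewall, or a different ICAP port was configured on the engine.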

To remove an external virus scan engine from the configuration, select that system on the Virus Scan Engines panel and click Delete.

To add an external virus scan engine from the CLI, use the following command:

ibrix_avconfig -a -S -I IPADDR -p PORTNUM

The port number specified here must match the ICAP port number configured on the virus scan engines. Use the following command to remove an external virus scan engine:

ibrix_avconfig -r -S -I IPADDR

Enabling or disabling Antivirus on IBRIX file systems

On the GUI, select AV Enable/Disable file systems from the lower Navigator to open the AV Enable Disable panel, which lists the file systems in the cluster. Select the file system to be enabled, click Enable, and confirm the operation. To disable Antivirus, click Disable.

The CLI commands are as follows:

Enable Antivirus on all file systems in the cluster:
ibrix_avconfig -e -F

Enable Antivirus on specific file systems:
ibrix_avconfig -e -f FSLIST
If you specify more than one file system, use commas to separate the file systems.

Disable Antivirus on all file systems:
ibrix_avconfig -d -F

Disable Antivirus on specific file systems:
ibrix_avconfig -d -f FSLIST

Updating Antivirus definitions

You should update the virus definitions on the cluster nodes periodically. On the GUI, click Update ClusterWide ISTag on the Antivirus Settings panel. The cluster then connects with the external virus scan engines and synchronizes the virus definitions on the cluster nodes with the definitions on the external virus scan engines.

NOTE: All virus scan engines should have the same virus definitions. Inconsistencies in virus definitions can cause files to be rescanned. Be sure to coordinate the schedules for updates to virus definitions on the virus scan engines and updates of virus definitions on the cluster nodes.

On the CLI, use the following commands:

Schedule cluster-wide updates of virus definitions:
ibrix_av -t [-S CRON_EXPRESSION]
The CRON_EXPRESSION specifies the time for the virus definition update. For example, the expression "0 0 12 * * ?" executes this command at noon every day.

View the current schedule:
ibrix_av -l -T

Configuring Antivirus settings

Defining the Antivirus unavailable policy

This policy determines how targeted file operations are handled when an external virus scan engine is not available. The policies are:

• Allow (Default). All operations triggering scans are allowed to run to completion.
• Deny. All operations triggering scans are blocked and returned with an error. This policy ensures that a virus is not returned when Antivirus is not available.

Following are examples of situations that can cause Antivirus to be unavailable:

• All configured virus scan engines are unreachable.
• The cluster nodes cannot communicate with the virus scan engines because of network issues.
• The number of incoming scan requests exceeds the threads available on the cluster nodes to process the requests.

The Antivirus Settings panel shows the current setting for this policy. To toggle the policy, click Configure AV Policy.

To set the policy from the CLI, use this command: ibrix_avconfig -u -g A|D

Defining protocol-specific policies

For certain file sharing protocols (currently only SMB/CIFS), you can specify the file operations that trigger a scan (open, close, or both). There are three policies:
• OPEN (Default). Scan on open.
• CLOSE. Scan on close.
• BOTH. Scan on open and close.

NOTE: If you configure the protocol-specific policy to CLOSE (scan on close), previously written files are not rescanned automatically when the virus scan engines are updated with newer virus definitions. Also, there is a delay of 35 seconds after a file closes, to allow all data to be flushed, before the file is subject to scanning; any read or open of the file during this time is not scanned. Because virus detection and virus definition updates always lag behind the discovery of new viruses, HP highly recommends configuring Antivirus with the BOTH option so that previously written files are rescanned on open whenever new virus definitions are provided, ensuring protection against virus infections. Use scan on close only as an optimization to take the virus scan penalty at close time instead of at open time.

To set the policy: 1. Select Protocol Scan Settings from the lower Navigator tree. The AV Protocol Settings panel then displays the current setting. 2. To set or change the setting, click Modify on the panel and then select the appropriate setting from the Action dialog box.

To set the policy from the CLI, use this command:
ibrix_avconfig -u -k PROTOCOL -G O|C|B

Defining exclusions

Exclusions specify files to be skipped during Antivirus scans. Excluding files can improve performance, as files meeting the exclusion criteria are not scanned. You can exclude files based on their file extension or size. By default, when exclusions are set on a particular directory, all of its child directories inherit those exclusions. You can override the inherited exclusions for a child directory by explicitly setting exclusions on the child directory, or by using the No rule option to stop exclusion inheritance on the child directory. To configure exclusions by using the Management Console:

1. Select an appropriate AV-enabled file system from the list.

2. Click Exclusion on the AV Enable/Disable Filesystem panel. 3. On the Exclusion dialog box, specify the directory path where the exclusion is to be applied. 4. Click Display Rule to set the Rule information.

5. Select the appropriate type of rule:
• Inherited Rule/Remove Rule. Use this option to reset or remove exclusions that were explicitly set on the child directory. The child directory will then inherit exclusions from its parent directory. Also use this option to remove exclusions on the top-most directory where exclusion rules have been set.
• No rule. Use this option to remove or stop exclusions at the child directory. The child directory will no longer inherit the exclusions from its parent directory.
• Custom rule. Use this option to exclude files having specific file extensions or exceeding a specific size. If you specify multiple file extensions, use commas to separate the extensions. To exclude all types of files from scans, enter an asterisk (*) in the file extension field. You can specify either file extensions or a file size (or both).
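The custom-rule criteria can be illustrated with a toy classifier. This is a sketch only: it assumes, for illustration, that a file is excluded when it matches either criterion; the guide does not state how the product combines an extension rule with a size rule.

```shell
#!/bin/sh
# Toy classifier for a custom exclusion rule: skip files whose extension
# appears in EXT_LIST or whose size exceeds MAX_MB. Treating the two
# criteria as OR is an illustrative assumption, not documented behavior.
EXT_LIST=".jpg,.iso"
MAX_MB=100
excluded() {  # $1 = filename, $2 = size in MB
    case ",$EXT_LIST," in *",.${1##*.},"*) echo skip; return;; esac
    if [ "$2" -gt "$MAX_MB" ]; then echo skip; else echo scan; fi
}
excluded photo.jpg 1      # extension match
excluded backup.tar 500   # over the size limit
excluded notes.txt 1      # neither criterion: file is scanned
```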

On the CLI, use the following options to specify exclusions with the ibrix_avconfig command:
• -x FILE_EXTENSION — Excludes all files having the specified extension, such as .jpg. If you specify multiple extensions, use commas to separate the extensions.
• -s FILE_SIZE — Excludes all files larger than the specified size (in MB).
• -N — Does not exclude any files in the directory hierarchy.
Add an exclusion to a directory:
ibrix_avconfig -a -E -f FSNAME -P DIR_PATH {-N | [-x FILE_EXTENSION] [-s FILE_SIZE]}
View exclusions on a specific directory:

ibrix_avconfig -l -E -f FSNAME -P DIR_PATH
Remove all exclusions from a directory:
ibrix_avconfig -r -E -f FSNAME -P DIR_PATH

Managing Antivirus scans

You can run an Antivirus scan at any time, and you can schedule periodic Antivirus scans of an entire file system or directory. Multiple Antivirus scans can run in the cluster; however, you can run only one scan task at a time on a specific AV-enabled file system. Antivirus scans honor the defined AV exclusion rules; any directories or files that meet the exclusion criteria are not scanned. You can view the status of active and inactive Antivirus scan tasks, and you can stop, pause, or resume active tasks.
Recommendations for Antivirus scans:
• Run Antivirus scans when the system is not being heavily used.
• Configure your Antivirus scans so that a huge number of files in a subtree are not assigned to a single Antivirus scan.
• Do not run Antivirus scans on many file systems at the same time, as the AV daemon has limited resources.
Keep in mind the following regarding Antivirus scans:
• An Antivirus scan task lets you specify a file system or directory path under which all files are subjected to Antivirus scans. This differs from on-access scanning, where a scan is triggered when a file is accessed by an application, typically during an open or read operation. On-access scanning is done automatically by the kernel; the Antivirus scan feature lets you run or schedule periodic scans of an entire file system or directory at any time.
• Antivirus scans are independent of on-access scanning, and they can run in parallel.
• Antivirus scans are similar to on-access scans in that they continue to honor the exclusion rules you have defined.
When you set up an Antivirus scan, you are asked to enter a value for the scan duration. The maximum scan duration that can be specified is 168 hours (7 days).
If a duration is not provided for the scan, all files in the given path are scanned without any timeout. You can plan the scheduling so that Antivirus scans run on multiple directories in series. For example, assume you have five directories on which you want to run Antivirus scans in a particular priority order. You could schedule an Antivirus scan to run on the first directory at a set time T, with a maximum duration of 2 hours (the value in the Duration of Scans text box). Then schedule the scan task on the next directory at T+2:15 (the extra 15 minutes allows the previous Antivirus scan a few minutes for cleanup). Repeat these steps for the remaining three directories.

Starting or scheduling Antivirus scans

You can start Antivirus scans by using the GUI or the CLI. Only the GUI provides the functionality for scheduling Antivirus scans.
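The staggering arithmetic (a 2-hour scan plus a 15-minute cleanup margin per directory) works out as follows; the 22:00 start time is an arbitrary example, not a recommendation.

```shell
#!/bin/sh
# Start times for five serial scans: each slot is 2 hours of scanning
# plus 15 minutes of cleanup (135 minutes). The 22:00 start is illustrative.
start_min=$((22 * 60))
slot=$((2 * 60 + 15))
for i in 0 1 2 3 4; do
    t=$(( (start_min + i * slot) % 1440 ))
    printf '%02d:%02d\n' $((t / 60)) $((t % 60))
done
```

This prints 22:00, 00:15, 02:30, 04:45, and 07:00, the staggered start times you would enter when scheduling each directory's scan.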

GUI

To start a scan or schedule periodic scans on the GUI:
1. Select the file system to be scanned from the Filesystems panel.
2. Select Active Tasks > Antivirus Scan from the lower Navigator panel.
3. Click Start on the Antivirus Task Summary panel. You can also click New on the Active Tasks panel and then select Antivirus Scan as the task type on the Starting a New Task dialog box.

4. Complete the Scan Settings tab on the New Antivirus Scan Task dialog box. Specify the directory path to be scanned and, optionally, the maximum number of hours that the scan should run. At the end of that time, the scan is stopped and becomes an inactive task. You can view the scan statistics of an inactive task on the Inactive Tasks panel.
5. Antivirus scans can be scheduled or started immediately. If you click OK on the Scan tab without populating the Schedule tab, the scan starts immediately.

6. On the Schedule tab, click Schedule this task and then select the frequency (once, daily, weekly, monthly) and specify when the scan should run.

CLI

On the CLI, use the following command to start an Antivirus scan:
ibrix_avscan -s -f FSNAME -p PATH [-d DURATION]
The scan runs immediately.
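Because the maximum duration is 168 hours, a wrapper script might validate the value before invoking the command. This is a hypothetical dry-run sketch; the file system, path, and the wrapper itself are illustrations, not part of the product.

```shell
#!/bin/sh
# Hypothetical guard: reject durations over the documented 168-hour cap,
# otherwise print the scan command (dry run; names are placeholders).
DURATION=200
if [ "$DURATION" -gt 168 ]; then
    echo "error: duration must be 168 hours (7 days) or less"
else
    echo "ibrix_avscan -s -f ifs1 -p /ifs1/users -d $DURATION"
fi
```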

Viewing, pausing, resuming, or stopping Antivirus scan tasks

Viewing an active task

To view an active scan task on a file system, select the file system on the Filesystems panel on the GUI, and then select Active Tasks from the lower Navigator. The Active Tasks panel lists all active tasks on the cluster, including Antivirus tasks that are currently running or paused on the selected file system. To see details for the Antivirus task, select it on the Active Tasks panel and then select Active Tasks > Antivirus Scan from the lower Navigator. The Antivirus Task Summary panel then shows current information for the scan.

Stopping or pausing an active task

Use the buttons on the Antivirus Task Summary panel to stop or pause a running task, or to resume a paused task.

Viewing the results of an inactive task

To view inactive Antivirus scan tasks for a file system, select the file system on the Filesystems panel and then select Inactive Tasks on the lower Navigator. The Inactive Tasks panel lists all inactive tasks in the cluster, including the following types of Antivirus scan tasks:
• Scan tasks that ran to completion
• Scan tasks that stopped because the duration period expired
• Scan tasks that were stopped manually
For more information about an inactive task, select the task and click Details on the Inactive Tasks panel. Inactive tasks cannot be restarted, but they can be deleted. Inodes scanned indicates the number of files that were scanned by the Antivirus scan. Inodes might be marked as skipped when an Antivirus scan task runs on a file system in which:
• AV becomes unavailable
• The file system or directory has exclusion rules set
• Files have already been scanned
• The file system has hot inodes

CLI commands for viewing, stopping, or pausing Antivirus scans

On the CLI, use the following commands to view, stop, pause, or resume Antivirus scans.
View a status summary of Antivirus scan tasks: ibrix_avscan -l [-f FSLIST]
View detailed information about Antivirus scan tasks: ibrix_avscan -i [-f FSLIST]
Stop the specified Antivirus scan task: ibrix_avscan -k -t TASKID [-F]
Pause the specified Antivirus scan task: ibrix_task -p -n TASKID
Resume the specified Antivirus scan task: ibrix_task -r -n TASKID
Run the ibrix_avscan -l command to obtain the task ID.

Viewing Antivirus statistics

Antivirus statistics are accumulated whenever a scan is run. To view statistics, select Statistics from the lower Navigator. Click Clear Stats to clear the current statistics and start accumulating them again.

The CLI commands are:
View statistics from all cluster nodes: ibrix_av -l -s
Delete statistics from all nodes: ibrix_av -d -s

Antivirus quarantines and software snapshots

The quarantine utility has the following limitations when used with snap files.

Limitation 1: When the following sequence of events occurs:
• A virus file is created inside the snap root
• A snap is taken
• The original file is renamed or moved to another path
• The original file is read
The quarantine utility cannot locate the snap file because the link was formed with the new filename assigned after the snap was taken.
Limitation 2: When the following sequence of events occurs:
• A virus file is created inside the snap root
• A snap is taken
• The original file is renamed or moved to another path
• The snap file is read
The quarantine utility cannot track the original file because the link was not created with its name. That file cannot be listed, reset, moved, or deleted by the quarantine utility.
Limitation 3: When the following sequence of events occurs:
• A virus file is created inside the snap root
• The original file is read
• A snap is taken
• The original file is renamed or moved to another path
The quarantine utility displays both the snap name (which still has the original name) and the new filename, although they are the same file.

15 Creating IBRIX software snapshots

The IBRIX software snapshot feature allows you to capture a point-in-time copy of a file system or directory for online backup purposes and to simplify recovery of files from accidental deletion. Software snapshots can be taken of the entire file system or selected directories. Users can access the file system or directory as it appeared at the instant of the snapshot.

NOTE: To accommodate software snapshots, the inode format was changed in the IBRIX 6.0 release. Consequently, files used for snapshots must either be created on IBRIX 6.0 or later, or the pre-6.0 file system containing the files must be upgraded for snapshots. To upgrade a file system, use the upgrade60.sh utility. For more information, see the HP IBRIX 9000 Storage CLI Reference Guide.

Before taking snapshots of a file system or directory, you must enable the directory tree for snapshots. An enabled directory tree is called a snap tree. You can then define a schedule for taking periodic snapshots of the snap tree, and you can also take on-demand snapshots. Users can access snapshots using NFS or SMB. All users with access rights to the root of the snapshot directory tree can navigate, view, and copy all or part of a snapshot.

NOTE: Snapshots are read only and cannot be modified, moved, or renamed. However, they can be copied.

NOTE: You can use either the software method or the block method to take snapshots on a file system. Using both snapshot methods simultaneously on the same file system is not supported.

File system limits for snap trees and snapshots

A file system can have a maximum of 1024 snap trees. Each snap tree can have a maximum of 1024 snapshots.

Configuring snapshot directory trees and schedules

You can enable a directory tree for snapshots using either the GUI or the CLI; however, the GUI must be used to configure a snapshot schedule. On the GUI, select Snapshots from the Navigator. The Snap Trees panel lists all directory trees currently enabled for snapshots. The Schedule Details panel shows the snapshot schedule for the selected directory tree.

To enable a directory tree for snapshots, click Add on the Snap Trees panel.

You can create a snapshot directory tree for an entire file system or a directory in that file system. When entering the directory path, do not specify a directory that is a parent or child of another snapshot directory tree. For example, if directory /dir1/dir2 is a snapshot directory tree, you cannot create another snapshot directory tree at /dir1 or /dir1/dir2/dir3. The snapshot schedule can include any combination of hourly, daily, weekly, and monthly snapshots. Also specify the number of snapshots to retain on the system. When that number is reached, the oldest snapshot is deleted. All weekly and monthly snapshots are taken at the same time of day. The default time is 9 pm. To change the time, click the time shown on the dialog box, and then select a new time on the Modify Weekly/Monthly Snapshot Creation Time dialog box.
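The parent/child restriction amounts to a path-prefix check. The sketch below is purely illustrative (it is not part of the product) and shows which combinations the rule disallows:

```shell
#!/bin/sh
# Print yes if NEW is the same as, a parent of, or a child of EXISTING --
# the combinations that are disallowed for snapshot directory trees.
conflicts() {
    new=$1 existing=$2
    case "$existing/" in "$new"/*) echo yes; return;; esac
    case "$new/" in "$existing"/*) echo yes; return;; esac
    echo no
}
conflicts /dir1 /dir1/dir2            # yes: /dir1 would be a parent
conflicts /dir1/dir2/dir3 /dir1/dir2  # yes: would be a child
conflicts /dir9 /dir1/dir2            # no: unrelated trees
```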

To enable a directory tree for snapshots using the CLI, run the following command:
ibrix_snap -m -f FSNAME -P SNAPTREEPATH
SNAPTREEPATH is the full directory pathname, starting at the root of the file system. For example:
ibrix_snap -m -f ifs1 -P /ifs1/dir1/dir2

IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees that have scheduled snapshots. If a snapshot reclamation task does not already exist, you will need to configure the task. See “Reclaiming file system space previously used for snapshots” (page 209).

Modifying a snapshot schedule

You can change the snapshot schedule at any time. On the Snap Trees panel, select the appropriate snap tree, select Modify, and make your changes on the Modify Snap Tree dialog box.

Managing software snapshots

To view the snapshots for a specific directory tree, select the appropriate directory tree on the Snap Trees panel, and then select Snapshots from the lower Navigator. The Snapshots panel lists snapshots for the directory tree and allows you to take a new snapshot or delete an existing snapshot. Use the filter at the bottom of the panel to select the snapshots you want to view.

The following CLI commands display information about snapshots and snapshot directory trees: • List all snapshots, or only the snapshots on a specific file system or snapshot directory tree: ibrix_snap -l -s [-f FSNAME [-P SnapTreePath]]

• List all snapshot directory trees, or only the snapshot directory trees on a specific file system: ibrix_snap -l [-f FSNAME]

Taking an on-demand snapshot

To take an on-demand snapshot of a directory tree, select the directory tree on the Snap Trees panel and then click Create on the List of Snapshots panel.

To take a snapshot from the CLI, use the following command:
ibrix_snap -c -f FSNAME -P SNAPTREEPATH -n NAMEPATTERN
SNAPTREEPATH is the full directory path starting from the root of the file system. The name that you specify is appended to the date of the snapshot. The following words cannot be used in the name, as they are reserved for scheduled snapshots: Hourly, Daily, Weekly, Monthly. You will need to manually delete on-demand snapshots when they are no longer needed.

Determining space used by snapshots

Space used by snapshots counts towards the used capacity of the file system and towards user quotas. Standard file system space reporting utilities work as follows:
• The ls and du commands report the size of a file depending on the version you are viewing. If you are looking at a snapshot, the commands report the size of the file when it was snapped. If you are looking at the current version, the commands report the current size.
• The df command reports the total space used in the file system by files and snapshots.

Accessing snapshot directories

Snapshots are stored in a read-only directory named .snapshot located under the directory tree. For example, snapshots for directory tree /ibfs1/users are stored in the /ibfs1/users/.snapshot directory. Each snapshot is a separate directory beneath the .snapshot directory. Snapshots are named using the ISO 8601 date and time format, plus a name value. For example, a snapshot created on June 1, 2011 at 9 am is named 2011-06-01T090000_name. For snapshots created automatically, name is hourly, daily, weekly, or monthly, depending on the snapshot schedule. If you create a snapshot on-demand, you specify the name. The following example lists snapshots created on an hourly schedule for snap tree /ibfs1/users. Using ISO 8601 naming ensures that the snapshot directories are listed in order according to the time they were taken.
[root@9000n1 ~]# cd /ibfs1/users/.snapshot/
[root@9000n1 .snapshot]# ls
2011-06-01T110000_hourly  2011-06-01T190000_hourly  2011-06-02T030000_hourly
2011-06-01T120000_hourly  2011-06-01T200000_hourly  2011-06-02T040000_hourly
2011-06-01T130000_hourly  2011-06-01T210000_hourly  2011-06-02T050000_hourly

2011-06-01T140000_hourly  2011-06-01T220000_hourly  2011-06-02T060000_hourly
2011-06-01T150000_hourly  2011-06-01T230000_hourly  2011-06-02T070000_hourly
2011-06-01T160000_hourly  2011-06-02T000000_hourly  2011-06-02T080000_hourly
2011-06-01T170000_hourly  2011-06-02T010000_hourly  2011-06-02T090000_hourly
2011-06-01T180000_hourly  2011-06-02T020000_hourly

Users having access to the root of the snapshot directory tree (in this example, /ibfs1/users/) can navigate the /ibfs1/users/.snapshot directory, view snapshots, and copy all or part of a snapshot. If necessary, users can copy a snapshot over the present version to achieve a manual rollback.
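Because the ISO 8601 names sort lexicographically into chronological order, standard tools such as ls and sort list snapshots oldest first without extra options. A quick illustration:

```shell
#!/bin/sh
# ISO 8601 snapshot names sort lexicographically into time order,
# so a plain sort (or ls) lists them oldest first.
printf '%s\n' \
    2011-06-02T000000_hourly \
    2011-06-01T230000_hourly \
    2011-06-01T090000_tst1 | sort
```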

NOTE: Access to .snapshot directories is limited to administrators and NFS and SMB users.

Accessing snapshots using NFS

Access over NFS is similar to local IBRIX access, except that the mount point will probably be different. In this example, NFS export /ibfs1/users is mounted as /users1 on an NFS client.
[root@rhel5vm1 ~]# cd /users1/.snapshot
[root@rhel5vm1 .snapshot]# ls
2011-06-01T110000_hourly  2011-06-01T150000_hourly  2011-06-01T190000_hourly
2011-06-01T120000_hourly  2011-06-01T160000_hourly  2011-06-01T200000_hourly
2011-06-01T130000_hourly  2011-06-01T170000_hourly
2011-06-01T140000_hourly  2011-06-01T180000_hourly

Accessing snapshots using SMB

Over SMB, Windows users can use Explorer to navigate to the .snapshot folder and view files. In the following example, /ibfs1/users/ is mapped to the Y drive on a Windows system.

Restoring files from snapshots

Users can restore files from snapshots by navigating to the appropriate snapshot directory and copying the file or files to be restored, assuming they have the appropriate permissions on those files. If a large number of files need to be restored, you may want to use Run Once remote replication

to copy files from the snapshot directory to a local or remote directory (see “Starting a replication task” (page 135)).

Deleting snapshots

Scheduled snapshots are deleted automatically according to the retention schedule specified for the snapshot tree; however, you can delete a snapshot manually if necessary. You must also delete on-demand snapshots manually. Deleting a snapshot does not free the file system space that was used by the snapshot; you will need to reclaim the space.

IMPORTANT: Before deleting a directory that contains snapshots, take these steps:
• Delete the snapshots (use ibrix_snap).
• Reclaim the file system space used by the snapshots (use ibrix_snapreclamation).
• Remove snapshot authorization for the snap tree (use ibrix_snap).

Deleting a snapshot manually

To delete a snapshot from the GUI, select the appropriate snapshot on the List of Snapshots panel, click Delete, and confirm the operation.
To delete the snapshot from the CLI, use the following command:
ibrix_snap -d -f FSNAME -P SNAPTREEPATH -n SNAPSHOTNAME
If you are unsure of the name of the snapshot, use the following command to locate it:
ibrix_snap -l -s [-f FSNAME] [-P SNAPTREEPATH]
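A dry-run sketch of the locate, delete, and reclaim sequence; the file system, snap tree, and snapshot names below are placeholders.

```shell
#!/bin/sh
# Dry run: print the usual locate/delete/reclaim sequence.
# FS, TREE, and SNAP are placeholders -- substitute real names.
FS=ifs1
TREE=/ifs1/users
SNAP=2011-06-01T090000_tst1
echo "ibrix_snap -l -s -f $FS -P $TREE"             # confirm the snapshot name
echo "ibrix_snap -d -f $FS -P $TREE -n $SNAP"       # delete the snapshot
echo "ibrix_snapreclamation -r -f $FS -s maxspace"  # reclaim the freed space
```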

Reclaiming file system space previously used for snapshots

Snapshot reclamation tasks are used to reclaim file system space previously used by snapshots that have been deleted.

IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees that have scheduled snapshots.

Using the GUI, you can schedule a snapshot reclamation task to run at a specific time on a recurring basis. The reclamation task runs on an entire file system, not on a specific snapshot directory tree within that file system. If a file system includes two snapshot directory trees, space is reclaimed in both snapshot directory trees. To start a new snapshot reclamation task, select the appropriate file system from the Filesystems panel and then select Active Tasks > Snapshot Space Reclamation from the lower Navigator.

Select New on the Task Summary panel to open the New Snapshot Space Reclamation Task dialog box.

On the General tab, select a reclamation strategy:
• Maximum Space Reclaimed. The reclamation task recovers all snapped space eligible for recovery. It takes longer and uses more system resources than Maximum Speed. This is the default.
• Maximum Speed of Task. The reclamation task reclaims only the most easily recoverable snapped space. This strategy reduces the runtime required by the reclamation task, but leaves some space potentially unrecovered (that space remains eligible for later reclamation). You cannot create a schedule for this type of reclamation task.
If you are using the Maximum Space Reclaimed strategy, you can schedule the task to run periodically. On the Schedule tab, click Schedule this task and select the frequency and time to run the task.

To stop a running reclamation task, click Stop on the Task Summary panel.

Managing reclamation tasks from the CLI

To start a reclamation task from the CLI, use the following command:
ibrix_snapreclamation -r -f FSNAME [-s {maxspeed | maxspace}] [-v]
The reclamation task runs immediately; you cannot create a recurring schedule for it.
To stop a reclamation task, use the following command:
ibrix_snapreclamation -k -t TASKID [-F]
The following command shows summary status information for all reclamation tasks, or only the tasks on the specified file systems:
ibrix_snapreclamation -l [-f FSLIST]
The following command provides detailed status information:
ibrix_snapreclamation -i [-f FSLIST]

Removing snapshot authorization for a snap tree

Before removing snapshot authorization from a snap tree, you must delete all snapshots in the snap tree and reclaim the space previously used by the snapshots. Complete the following steps:
1. Disable any schedules on the snap tree. Select the snap tree on the Snap Trees panel, select Modify, and remove the Frequency settings on the Modify Snap Tree dialog box.
2. Delete the existing snapshots of the snap tree. See “Deleting snapshots” (page 209).
3. Reclaim the space used by the snapshots. See “Reclaiming file system space previously used for snapshots” (page 209).
4. Delete the snap tree. On the Snap Trees panel, select the appropriate snap tree, click Delete, and confirm the operation.
To disable snapshots on a directory tree using the CLI, run the following command:
ibrix_snap -m -U -f FSNAME -P SnapTreePath

Moving files between snap trees

Files created in, copied to, or moved to a snap tree directory can be moved to any other snap tree or non-snap tree directory on the same file system, provided they are not snapped. After a snapshot is taken and the files have become part of that snapshot, they cannot be moved to any other snap tree or directory on the same file system. However, the files can be moved to any snap tree or directory on a different file system.

Backing up snapshots

Snapshots are stored in a .snapshot directory under the directory tree. For example:
# ls -alR /fs2/dir.tst
/fs2/dir.tst:
drwxr-xr-x 4 root root     4096 Feb  8 09:11 dir.dir
-rwxr-xr-x 1 root root 99999999 Jan 31 09:33 file.0
-rwxr-xr-x 1 root root 99999999 Jan 31 09:33 file.1
drwxr-xr-x 2 root root     4096 Apr  6 15:55 .snapshot
/fs2/dir.tst/.snapshot:
lrwxrwxrwx 1 root root 15 Apr  6 15:39 2011-04-06T15:39:57_ -> ../.@1302118797
lrwxrwxrwx 1 root root 15 Apr  6 15:55 2011-04-06T15:55:07_tst1 -> ../.@1302119707
/fs2/dir.tst/dir.dir:
-rwxr-xr-x 1 root root 99999999 Jan 31 09:34 file.1

NOTE: The links beginning with .@ are used internally by the snapshot software and cannot be accessed.

To back up the snapshots, use the procedure corresponding to your backup method.

Backups using NDMP

By default, NDMP does not back up the .snapshot directory. For example, if you specify a backup of the /fs2/snapdir directory, NDMP backs up the directory but excludes /fs2/snapdir/.snapshot and its contents. To back up the snapshot of the directory, specify the path /fs2/snapdir/.snapshot/2011-04-06T15:55:07_tst1. You can then use the snapshot (a point-in-time copy) to restore its associated directory; for example, use /fs2/snapdir/.snapshot/2011-04-06T15:55:07_tst1 to restore /fs2/snapdir.

Backups without NDMP

DMA applications cannot back up a snapshot directory tree using a path such as /fs2/snapdir/.snapshot/time-stamp-name. Instead, mount the snapshot using the mount -t ibrix -o bind,ro options and then back up the mount point. For example, using a mount point such as /newmount, use the following command to mount the snapshot:
mount -t ibrix -o bind,ro /fs2/snapdir/.snapshot/time-stamp-name /newmount
Then configure the DMA to back up /newmount.

Backups with the tar utility

The tar symbolic link (h) option can copy snapshots. For example, the following command copies the /snapfs1/test3 directory associated with the point-in-time snapshot:
tar -cvfh /snapfs1/test3/.snapshot/2011-07-01T044500_hourly
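The non-NDMP procedure can be collected into one dry-run script. The snapshot path and /newmount come from the example above; the tar archive path and the final umount step are assumptions for illustration, not steps stated in this guide.

```shell
#!/bin/sh
# Dry run: print a non-NDMP backup sequence for a single snapshot.
# The archive path /backup/snapdir.tar is a placeholder.
SNAP=/fs2/snapdir/.snapshot/2011-04-06T15:55:07_tst1
MNT=/newmount
echo "mount -t ibrix -o bind,ro $SNAP $MNT"
echo "tar -cvf /backup/snapdir.tar -C $MNT ."
echo "umount $MNT"
```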

16 Creating block snapshots

The block snapshot feature allows you to capture a point-in-time copy of a file system for online backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates all file system entities at the time of capture and is managed exactly like any other file system.

NOTE: You can use either the software method or the block method to take snapshots on a file system. Using both snapshot methods simultaneously on the same file system is not supported.

The block snapshot feature is supported as follows:
• HP 9320 Storage: supported on the HP P2000 G3 MSA Array System or HP 2000 Modular Smart Array G2 provided with the platform.
• HP 9300 Storage Gateway: supported on the HP P2000 G3 MSA Array System; HP 2000 Modular Smart Array G2; HP P4000 G2 models; HP 3PAR F200, F400, T400, and T800 Storage Systems (OS version 2.3.1 (MU3)); and the Dell EqualLogic storage array (no arrays are provided with the 9300 system).
• HP 9720/9730 Storage: not supported.
The block snapshot feature uses the copy-on-write method to preserve the snapshot regardless of changes to the origin file system. Initially, the snapshot points to all blocks that the origin file system is using (B in the following diagram). When a block in the origin file system is overwritten with additions, edits, or deletions, the original block (prior to changes) is copied to the snapshot store, and the snapshot points to the copied block (C in the following diagram). The snapshot continues to point to the origin file system contents from the point in time that the snapshot was executed.
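A toy model of the copy-on-write behavior follows. It is purely illustrative: real block tracking is done by the array and file system, not by shell variables.

```shell
#!/bin/sh
# Toy copy-on-write model. An empty snap_* variable means the snapshot
# still shares the origin's block; a non-empty one is a preserved copy.
origin_b1="data1"; snap_b1=""
origin_b2="data2"; snap_b2=""
# The origin overwrites block 2: copy the old contents aside first.
snap_b2=$origin_b2
origin_b2="data2-changed"
# Reading through the snapshot: preserved copy if present, else origin.
echo "${snap_b1:-$origin_b1}"   # block 1 is still shared with the origin
echo "${snap_b2:-$origin_b2}"   # block 2 is the preserved pre-change copy
```

The snapshot still reads data1 and data2 (its point-in-time view), while the origin now holds data2-changed; only the overwritten block consumed space in the snapshot store.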

To create a block snapshot, first provision or register the snapshot store. You can then create a snapshot from type-specific storage resources. The snapshot is active from the moment it is created. You can take snapshots via the IBRIX software block snapshot scheduler or manually, whenever necessary. Each snapshot maintains its origin file system contents until deleted from the system. Snapshots can be made visible to users, allowing them to access and restore files (based on permissions) from the available snapshots.

NOTE: By default, snapshots are read only. HP recommends that you do not allow writes to any snapshots.

Setting up snapshots

This section describes how to configure the cluster to take snapshots.

Preparing the snapshot partition

The block snapshot feature does not require any custom settings for the partition. However, HP recommends that you provide sufficient storage capacity to support the snapshot partition.

NOTE: If the snapshot store is too small, the snapshot will eventually exceed the available space (unless you detect this and manually increase storage). If this situation occurs, the array software deletes the snapshot resources and the IBRIX software snapshot feature invalidates the snapshot file system. Although you can monitor the snapshot and manually increase the snapshot store as needed, the safest policy is to initially provision enough space to last for the expected lifetime of the snapshot. The optimum size of the snapshot store depends on usage patterns in the origin file system and the length of time you expect the snapshot to be active. Typically, a period of trial and error is required to determine the optimum size. See the array documentation for procedures regarding partitioning and allocating storage for file system snapshots.

Registering for snapshots

After setting up the snapshot partition, you can register the partition with the Fusion Manager. You will need to provide a name for the storage location and specify access parameters (IP address, user name, and password). The following command registers and names the array's snapshot partition on the Fusion Manager. The partition is then recognized as a repository for snapshots.
ibrix_vs -r -n STORAGENAME -t {msa | lefthand | 3PAR | eqlogic} -I IP(s) -U USERNAME [-P PASSWORD]
To remove the registration information from the configuration database, use the following command. The partition will then no longer be recognized as a repository for snapshots.
ibrix_vs -d -n STORAGENAME

Discovering LUNs in the array

After the array is registered, use the -a option to map the physical storage elements in the array to the logical representations used by IBRIX software. The software can then manage the movement of data blocks to the appropriate snapshot locations on the array.
Use the following command to map the storage information for the specified array:

ibrix_vs -a [-n STORAGENAME]

Reviewing snapshot storage allocation

Use the following command to list all of the array storage that is registered for snapshot use:

ibrix_vs -l

To see detailed information for named snapshot partitions on either a specific array or all arrays, use the following command:

ibrix_vs -i [-n STORAGENAME]

Automated block snapshots

If you plan to take a snapshot of a file system on a regular basis, you can automate the snapshots. To do this, first define an automated snapshot scheme, and then apply the scheme to the file system and create a schedule. A snapshot scheme specifies the number of snapshots to keep and the number of snapshots to mount. You can create a snapshot scheme from either the GUI or the CLI.

The type of storage array determines the maximum number of snapshots you can keep and mount per file system.

Array                                  Maximum snapshots to keep   Maximum snapshots to mount

P2000 G3 MSA System/MSA2000 G2 array   32 per file system          7 per file system

EqualLogic array                       8 per file system           7 per file system

P4000 G2 storage system                32 per file system          7 per file system

3PAR storage system                    32 per file system          7 per file system

For the P2000 G3 MSA System/MSA2000, the storage array itself also limits the total number of snapshots that can be stored. Arrays count the number of LUNs involved in each snapshot. For example, if a file system has four LUNs, taking two snapshots of the file system increases the total snapshot LUN count by eight. If a new snapshot will cause the snapshot LUN count limit to be exceeded, an error is reported, even though the file system limits may not be reached. The snapshot LUN count limit on P2000 G3 MSA System/MSA2000 arrays is 255. The 3PAR storage system allows a maximum of 500 virtual copies of a base volume. Up to 256 virtual copies can be read/write copies.

Creating automated snapshots using the GUI

Select the file system where the snapshots will be taken, and then select Block Snapshots from the lower Navigator. On the Block Snapshots panel, click New to display the Create Snapshot dialog box. On the General tab, select Recurring as the Snapshot Type.
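The LUN arithmetic above can be sketched as a quick shell check before taking a snapshot. The helper function and its input values are illustrative only (they are not queried from a real array):

```shell
#!/bin/sh
# Rough capacity check for P2000 G3 MSA/MSA2000 arrays: each snapshot
# consumes one snapshot LUN per file-system LUN, and the array caps the
# total snapshot LUN count at 255.
snap_lun_check() {
  fs_luns=$1          # LUNs backing the file system
  existing=$2         # snapshot LUNs already held on the array
  after_new=$((existing + fs_luns))
  if [ "$after_new" -le 255 ]; then
    echo "OK: $after_new snapshot LUNs after the new snapshot"
  else
    echo "LIMIT: $after_new snapshot LUNs would exceed the 255-LUN cap"
  fi
}
snap_lun_check 4 240   # four-LUN file system, 240 snapshot LUNs in use
snap_lun_check 4 253   # one more snapshot would cross the limit
```

This mirrors the example in the text: a four-LUN file system adds four snapshot LUNs per snapshot, so two snapshots add eight.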

Under Snapshot Configuration, select New to create a new snapshot scheme. The Create Snapshot Scheme dialog box appears.

On the General tab, enter a name for the scheme and then specify the number of snapshots to keep and mount on a daily, weekly, and monthly basis. Keep in mind the maximums allowed for your array type. Daily means that one snapshot is kept per day for the specified number of days. For example, if you enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day. On the 7th day, the oldest snapshot is deleted. Similarly, Weekly specifies the number of weeks that snapshots are retained, and Monthly specifies the number of months that snapshots are retained. On the Advanced tab, you can optionally create templates for naming the snapshots and mountpoints.

For either template, enter one or more of the following variables. The variables must be enclosed in braces ({ }) and separated by underscores (_). The template can also include text strings. When a snapshot is created using the templates, the variables are replaced with the following values.

Variable Value

fsname File system name

shortdate yyyy_mm_dd

fulldate yyyy_mm_dd_HHmmz + GMT

When you have completed the scheme, it appears in the list of snapshot schemes on the Create Snapshot dialog box. To create a snapshot schedule using this scheme, select it on the Create Snapshot dialog box and go to the Schedule tab. Click Schedule this task, set the frequency of the snapshots, and schedule when they should occur. You can also set start and end dates for the schedule. When you click OK, the snapshot scheduler will begin taking snapshots according to the specified snapshot strategy and schedule.

Creating an automated snapshot scheme from the CLI You can create an automated snapshot scheme with the ibrix_vs_snap_strategy command. However, you will need to use the GUI to create a snapshot schedule. To define a snapshot scheme, execute the ibrix_vs_snap_strategy command with the -c option:

ibrix_vs_snap_strategy -c -n NAME -k KEEP -m MOUNT [-N NAMESPEC] [-M MOUNTSPEC]

The options are:

-n NAME The name for the snapshot scheme.

-k KEEP The number of snapshots to keep per file system. For the P2000 G3 MSA System/MSA2000 G2 array, the maximum is 32 snapshots per file system. For P4000 G2 storage systems, the maximum is 32 snapshots per file system. For Dell EqualLogic arrays, the maximum is eight snapshots per file system. Enter the number of days, weeks, and months to retain snapshots. The numbers must be separated by commas, such as -k 2,7,28.

NOTE: One snapshot is kept per day for the specified number of days. For example, if you enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day. On the 7th day, the oldest snapshot is deleted. Similarly, the weekly count specifies the number of weeks that snapshots are retained, and the monthly count specifies the number of months that snapshots are retained.

-m MOUNT The number of snapshots to mount per file system. The maximum number of snapshots is 7 per file system. Enter the number of snapshots to mount per day, week, and month. The numbers must be separated by commas, such as -m 2,2,3. The sum of the numbers must be less than or equal to 7.
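Because the daily, weekly, and monthly mount counts must sum to 7 or less, the specification can be sanity-checked before running the command. This helper is illustrative only and is not part of the IBRIX CLI:

```shell
#!/bin/sh
# Validate that a -m daily,weekly,monthly specification keeps the total
# number of mounted snapshots at or below the 7-per-file-system cap.
check_mount_spec() {
  total=0
  for n in $(echo "$1" | tr ',' ' '); do
    total=$((total + n))
  done
  if [ "$total" -le 7 ]; then
    echo "OK: $total mounted snapshots"
  else
    echo "INVALID: $total exceeds the maximum of 7"
  fi
}
check_mount_spec 2,2,3    # valid: sums to 7
check_mount_spec 3,3,3    # invalid: sums to 9
```

The same pattern applies to the -k values, except that the keep limit depends on the array type rather than being a fixed 7.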

-N NAMESPEC Snapshot name template. The template specifies a scheme for creating unique names for the snapshots. Use the variables listed below for the template.

-M MOUNTSPEC Snapshot mountpoint template. The template specifies a scheme for creating unique mountpoints for the snapshots. Use the variables listed below for the template.

Variables for snapshot name and mountpoint templates.

fulldate    yyyy_mm_dd_HHmmz + GMT

shortdate   yyyy_mm_dd

fsname      File system name

You can specify one or more of these variables, enclosed in braces ({ }) and separated by underscores (_). The template can also include text strings. Two sample templates follow. When a snapshot is created using one of these templates, the variables are replaced with the values listed above.

{fsname}_snap_{fulldate}
snap_{shortdate}_{fsname}

Other automated snapshot procedures

Use the following procedures to manage automated snapshots.
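The substitution the templates describe can be sketched as follows. The expand_template helper is illustrative only (IBRIX performs this substitution internally); GNU date and a fixed timestamp are used so the expansion is repeatable:

```shell
#!/bin/sh
# Expand {fsname}, {shortdate}, and {fulldate} in a snapshot name template.
# Illustrative sketch, not an IBRIX command.
expand_template() {
  template=$1; fsname=$2; stamp=$3   # stamp: seconds since the epoch (UTC)
  shortdate=$(date -u -d "@$stamp" +%Y_%m_%d)
  fulldate=$(date -u -d "@$stamp" +%Y_%m_%d_%H%M)z
  echo "$template" | sed -e "s/{fsname}/$fsname/" \
                         -e "s/{shortdate}/$shortdate/" \
                         -e "s/{fulldate}/$fulldate/"
}
expand_template '{fsname}_snap_{fulldate}' ifs1 1325421000
expand_template 'snap_{shortdate}_{fsname}' ifs1 1325421000
```

With the fixed timestamp (2012-01-01 12:30 UTC), the two sample templates expand to names such as ifs1_snap_2012_01_01_1230z and snap_2012_01_01_ifs1.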

Modifying an automated snapshot scheme

A snapshot scheme can be modified only from the CLI. Use the following command:

ibrix_vs_snap_strategy -e -n NAME -k KEEP -m MOUNT [-N NAMESPEC] [-M MOUNTSPEC]

Viewing automated snapshot schemes

On the GUI, you can view snapshot schemes on the Create Snapshot dialog box. Select Recurring as the Snapshot Type, and then select a snapshot scheme. A description of that scheme is displayed. To view all automated snapshot schemes or all schemes of a specific type using the CLI, execute the following command:

ibrix_vs_snap_strategy -l [-T TYPE]

To see details about a specific automated snapshot scheme, use the following command:

ibrix_vs_snap_strategy -i -n NAME

Deleting an automated snapshot scheme

A snapshot scheme can be deleted only from the CLI. Use the following command:

ibrix_vs_snap_strategy -d -n NAME

Managing block snapshots

This section describes how to manage individual snapshots.

Creating an on-demand snapshot

To take an on-demand snapshot from the GUI, select the file system where the snapshot will be taken, and then select Block Snapshots from the lower Navigator. On the Block Snapshots panel, click New to display the Create Snapshot dialog box. On the General tab, select Once as the Snapshot Type and click OK. Use the following command to create a snapshot from the CLI:

ibrix_vs_snap -c -n SNAPFSNAME -f ORIGINFSNAME

For example, to create a snapshot named ifs1_snap for file system ifs1:

ibrix_vs_snap -c -n ifs1_snap -f ifs1

Mounting or unmounting a snapshot

On the GUI, select Block Snapshots from the lower Navigator, select the snapshot on the Block Snapshots panel, and click Mount or Unmount. Include the -M option in the create command to automatically mount the snapshot file system after creating it. This makes the snapshot visible to authorized users. HP recommends that you do not allow writes to any snapshot file system.

ibrix_vs_snap -c -M -n SNAPFSNAME -f ORIGINFSNAME

For example, to create and mount a snapshot named ifs1_snap for file system ifs1:

ibrix_vs_snap -c -M -n ifs1_snap -f ifs1

Recovering system resources on snapshot failure

If a snapshot encounters insufficient resources when attempting to update its contents due to changes in the origin file system, the snapshot fails and is marked invalid. Data is no longer accessible in the snapshot. To clean up records in the configuration database for an invalid snapshot, use the following command from the CLI:

ibrix_vs_snap -r -f SNAPFSLIST

For example, to clean up database records for a failed snapshot named ifs1_snap:

ibrix_vs_snap -r -f ifs1_snap

On the GUI, select the snapshot on the Block Snapshots panel and click Cleanup.
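The snapshot lifecycle commands from this section can be strung together as a dry run. The commands are composed and echoed rather than executed, so the syntax can be reviewed outside a cluster; ifs1 is the example file system used above:

```shell
#!/bin/sh
# Dry-run sketch of the snapshot lifecycle commands. Commands are echoed,
# not executed; on a cluster you would run them directly.
FS=ifs1
SNAP=${FS}_snap
create_cmd="ibrix_vs_snap -c -M -n $SNAP -f $FS"   # create and mount
cleanup_cmd="ibrix_vs_snap -r -f $SNAP"            # clean up if invalid
delete_cmd="ibrix_vs_snap -d -f $SNAP"             # delete when finished
echo "$create_cmd"
echo "$cleanup_cmd"
echo "$delete_cmd"
```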
Deleting snapshots

Delete snapshots to free up resources when a snapshot is no longer needed, or to create a new snapshot when you have already created the maximum allowed for your storage system. On the GUI, select the snapshot on the Block Snapshots panel and click Delete. On the CLI, use the following command:

ibrix_vs_snap -d -f SNAPFSLIST

For example, to delete snapshots ifs0_snap and ifs1_snap:

ibrix_vs_snap -d -f ifs0_snap,ifs1_snap

Viewing snapshot information

Use the following commands to view snapshot information from the CLI.

Listing snapshot information for all hosts

The ibrix_vs_snap -l command displays snapshot information for all hosts. Sample output follows:

ibrix_vs_snap -l

NAME   NUM_SEGS  MOUNTED?  GEN  TYPE  CREATETIME
-----  --------  --------  ---  ----  ---------------------------
snap1  3         No        6    msa   Wed Oct 7 15:09:50 EDT 2009

The following table lists the output fields for ibrix_vs_snap -l.

Field Description

NAME Snapshot name.

NUM_SEGS Number of segments in the snapshot.

MOUNTED? Snapshot mount state.

GEN Number of times the snapshot configuration has been changed in the configuration database.

TYPE Snapshot type, based on the underlying storage system.

CREATETIME Creation timestamp.

Listing detailed information about snapshots

Use the ibrix_vs_snap -i command to monitor the status of active snapshots. You can use the command to ensure that the associated snapshot stores are not full.

ibrix_vs_snap -i

To list information about snapshots of specific file systems, use the following command:

ibrix_vs_snap -i [-f SNAPFSLIST]

The ibrix_vs_snap -i command lists the same information as ibrix_fs -i, plus information fields specific to snapshots. Include the -f SNAPFSLIST argument to restrict the output to specific snapshot file systems. The following example shows only the snapshot-specific fields. To view an example of the common fields, see “Viewing file system information” (page 36).

SEGMENT  OWNER     LV_NAME                STATE            BLOCK_SIZE  CAPACITY(GB)  FREE(GB)  AVAIL(GB)  FILES  FFREE  USED%  BACKUP  TYPE   TIER  LAST_REPORTED
1        ib50-243  ilv11_msa_snap9__snap  OK, SnapUsed=4%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
2        ib50-243  ilv12_msa_snap9__snap  OK, SnapUsed=6%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
3        ib50-243  ilv13_msa_snap9__snap  OK, SnapUsed=6%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
4        ib50-243  ilv14_msa_snap9__snap  OK, SnapUsed=8%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
5        ib50-243  ilv15_msa_snap9__snap  OK, SnapUsed=6%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago
6        ib50-243  ilv16_msa_snap9__snap  OK, SnapUsed=5%  4,096       0.00          0.00      0.00       0      0      0              MIXED        7 Hrs 56 Mins 46 Secs ago

NOTE: For P4000 G2 storage systems, the state is reported as OK, but the SnapUsed field always reports 0%.

The following table lists the output fields for ibrix_vs_snap -i.

Field Description

SEGMENT Snapshot segment number.

OWNER The file serving node that owns the snapshot segment.

LV_NAME Logical volume.

STATE State of the snapshot.

BLOCK_SIZE Block size used for the snapshot.

CAPACITY (GB) Size of this snapshot file system, in GB.

FREE (GB) Free space on this snapshot file system, in GB.

AVAIL (GB) Space available for user files, in GB.

FILES Number of files that can be created in this snapshot file system.

FFREE Number of unused file inodes available in this snapshot file system.

USED% Percentage of total storage occupied by user files.

BACKUP Backup host name.

TYPE Segment type. Mixed means the segment can contain both directories and files.

TIER Tier to which the segment was assigned.

Last Reported Last time the segment state was reported.

Accessing snapshot file systems

By default, snapshot file systems are mounted in two locations on the file serving nodes:
• /<snapshot_name>
• /<origin_file_system>/.<snapshot_name>
For example, if you take a snapshot of the fs1 file system and name the snapshot fs1_snap1, it is mounted at /fs1_snap1 and at /fs1/.fs1_snap1. IBRIX 9000 clients must mount the snapshot file system (/<snapshot_name>) to access the contents of the snapshot. NFS and SMB clients can access the contents of the snapshot through the original file system (such as /fs1/.fs1_snap1), or they can mount the snapshot file system (in this example, /fs1_snap1). The following window shows an NFS client browsing the snapshot file system .fs1_snap2 in the fs1_nfs file system.
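The two default mount locations follow directly from the origin and snapshot names, as this small sketch shows (fs1 and fs1_snap1 are the example names used above):

```shell
#!/bin/sh
# Derive the two default mount locations for a snapshot file system.
origin=fs1
snap=${origin}_snap1
standalone="/$snap"          # mountpoint that IBRIX 9000 clients mount
hidden="/$origin/.$snap"     # hidden mount inside the origin file system
echo "$standalone"
echo "$hidden"
```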

The next window shows an SMB client accessing the snapshot file system .fs1_snap1. The original file system is mapped to drive X.

Troubleshooting block snapshots

Snapshot reserve is full and the MSA2000 is deleting snapshot volumes

When the snapshot reserve is full, the MSA2000 deletes snapshot volumes on the storage array, leaving the device entries on the file serving nodes. To correct this situation, take the following steps:
1. Stop I/O or any applications that are reading or writing to the snapshot file systems.
2. Log on to the active Fusion Manager.
3. Unmount all snapshot file systems.
4. Delete all snapshot file systems to recover space in the snapshot reserve.

CIFS clients receive an error when creating a snapshot

CIFS clients might see the following error when attempting to create a snapshot:

Make sure you are connected to the network and try again

This error is generated when the snapshot creation takes longer than the CIFS timeout, causing the CIFS client to determine that the server has failed or the network is disconnected. To avoid this situation, do not take snapshots during periods of high CIFS activity.

Cannot create 32 snapshots on an MSA system

MSA systems or arrays support a maximum of 32 snapshots per file system. If snapshot creation is failing before you reach 32 snapshots, check the following:
• Verify the version of the MSA firmware.
• If the cluster has been rebuilt, use the MSA GUI or CLI to check for old snapshots that were not deleted before the cluster was rebuilt. The CLI command is show snapshots.
• Verify the virtual disk and LUN layout.

17 Using data tiering

A data tier is a logical grouping of file system segments. After creating tiers containing the segments in the file system, you can use the data tiering migration process to move files from the segments in one tier to the segments in another tier. For example, you could create a primary data tier for SAS storage and another tier for SATA storage. You could then migrate specific data from the SAS tier to the lower-cost SATA tier. Other configurations might be based on the type of file being stored, such as storing all streaming files in a tier or moving all files over a certain size to a specific tier. You can create any number of data tiers. A tier cannot be on tape or on a location external to the IBRIX file system, such as an NFS share. IBRIX data tiering is transparent to users and applications and is compatible with IBRIX software file system snapshots and other IBRIX data services. Migration is a storage- and file-system-intensive process which, in some circumstances, can take days to complete. Migration tasks must be run at a time when clients are not generating significant load. Migration is not suitable for environments where there are no quiet times to run migration tasks.

IMPORTANT: Data tiering has a cool-down period of approximately 10 minutes. If a file was last accessed during the cool-down period, the file is not moved.

Data tiering policy

A tiering policy specifies migration rules for the file system. One tiering policy can be defined per file system, and the policy must have at least one rule. Rules in the policy can migrate files between any two tiers in the file system. For example, rule1 could move files between Tier1 and Tier2, rule2 could migrate files from Tier2 to Tier1, and rule3 could migrate files between Tier1 and Tier3. A file is migrated according to the first rule that it matches. You can narrow the scope of rules by combining directives using logical operators. For example, you could create a policy that has three simple rules:
• Migrate all files that have not been modified for 30 minutes from Tier1 to Tier2. (This rule is not valid for production, but is a good rule for testing.)
• Migrate all files larger than 5 MB from Tier1 to Tier2.
• Migrate all mpeg4 files from Tier1 to Tier2.

Changing the tiering configuration

The following restrictions apply when changing the configuration:
• You cannot modify the tiering configuration for a file system while an active migration task is running.
• You cannot move segments between tiers, assign them to new tiers, or unassign them from tiers while an active migration task is running or while any rules exist that apply to the segments.

Creating and managing data tiers

You can use the Data Tiering Wizard to create and manage data tiers. Select the file system on the GUI, and then select Active Tasks > Data Tiering in the lower Navigator. On the Task Summary panel, click Data Tiering to open the wizard.
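Assuming a file system named ifs1, the three sample rules could be expressed with the ibrix_migrator -A syntax described in the CLI section of this chapter. The commands are echoed here as a dry run, not executed:

```shell
#!/bin/sh
# The three sample tiering rules as ibrix_migrator -A invocations.
# ifs1 is a placeholder file system name; commands are echoed only.
FS=ifs1
rule1="ibrix_migrator -A -f $FS -r 'mtime older than 30 minutes' -S Tier1 -D Tier2"
rule2="ibrix_migrator -A -f $FS -r 'size > 5M' -S Tier1 -D Tier2"
rule3="ibrix_migrator -A -f $FS -r 'name = \"*.mpeg4\"' -S Tier1 -D Tier2"
echo "$rule1"
echo "$rule2"
echo "$rule3"
```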

For a new tier, on the Manage Tier dialog box, choose Create New Tier, enter a name for the tier, and select one or more segments to be included in the tier. To modify an existing tier, choose Use Existing Tier, select the tier, and make any changes to the segments included in the tier. Segments not currently included in a tier are specified as Unassigned. If you select a segment that is already mapped to a tier, the segment will be unassigned from that tier and reassigned to the tier you specified. If you remove a segment from a tier, that segment becomes unassigned.

You can work on only one tier at a time. However, when you click Next, you will be asked if you want to manage more tiers. If you answer Yes, the Manage Tier dialog box is refreshed and you can work on another tier. All new files are written to the primary tier. On the Primary Tier dialog box, select the tier that should receive these files. You can also select cluster servers and any IBRIX 9000 clients whose I/O operations should be redirected to the primary tier.

The tiering policy consists of rules that specify the data to be migrated from one tier to another. The parameters and directives used in the migration rules include actions based on file access patterns (such as access and modification times), file size, and file type. Rules can be constrained to operate on files owned by specific users and groups and to specific paths. Logical operators can be used to combine directives. The Tiering Policy dialog box displays the existing tiering policy for the file system.

To add a new tiering policy, click New. On the New Data Tiering Policy dialog box, select the source and destination tiers. Initially, RuleSet1 is empty. Select a rule name, and the other fields appear according to the rule you selected. Click + to specify the and/or operators and another rule. Click New to open another ruleset. The following example shows two new rulesets. To delete a ruleset, check the box in the ruleset and click Delete.

The Tiering Schedule dialog box lists all executed and running migration tasks. Click New to add a new schedule, click Edit to reschedule the selected task, or click Delete to delete the selected schedules. Use the Enabled and Disabled buttons to enable or disable the selected schedule. When a schedule is enabled, it is put in a runnable state. When a schedule is disabled, it is put in a paused state. To run a migration task now, select the task and click Run Now.

When you click New to create a new schedule, the default frequency for migration tasks is displayed. For an existing schedule, the current frequency is displayed. To change the frequency, click Modify.

On the Data Tiering Schedule Wizard dialog box, select a time to run the migration task.

Viewing tier assignments and managing segments

On the GUI, select Filesystems from the Navigator and select a file system in the Filesystems panel. In the lower Navigator, select Segments. The Segments panel displays the segments in the file system and specifies whether they are assigned to a tier.

You can assign, reassign, or unassign segments from tiers using the Data Tiering Wizard. The GUI also provides additional options to perform these tasks. Assign or reassign a segment: On the Segments panel, select the segments you are assigning and click Assign to Tier. On the Assign to Tier dialog box, specify whether you are assigning the segment to an existing tier or a new tier and specify the tier.

When you click OK, the segment is assigned to the tier and the information on the Segments panel is updated.

Unassign a segment from a tier: Select the file system from the Filesystems panel and expand Segments in the lower Navigator to list the tiers in the file system. Select the tier containing the segment. On the Tier Segments panel, select the segment and click Unassign.

Viewing data tiering rules

On the GUI, select Filesystems from the Navigator and then select a file system in the Filesystems panel. In the lower Navigator, select Active Tasks > Data Tiering > Rules.

The Data Tiering Rules panel lists the existing rules for the file system. You can also create a new rule from this panel; however, it is simpler to use the Data Tiering Wizard to create rules. To create a rule from the Data Tiering Rules panel, click Create. On the Create Data Tiering Rule dialog box, select the source and destination tier and then define a rule. The rule can move files between any two tiers.

When you click OK, the rule is checked for correct syntax. If the syntax is correct, the rule is saved and appears on the Data Tiering Rules panel. The following example shows the three rules created for the example.

You can delete rules if necessary. Select the rule on the Data Tiering Rules panel and click Delete.

Additional rule examples

The following rule migrates all files from Tier2 to Tier1:

name="*"

The following rule migrates all files in the subtree beneath the path. The path is relative to the mountpoint of the file system.

path=testdata2

The next example migrates all mpeg4 files in the subtree. A logical “and” operator combines the rules:

path=testdata4 and name="*mpeg4"

The next example narrows the scope of the rule to files owned by users in a specific group. Note the use of parentheses.

gname=users and (path=testdata4 and name="*mpeg4")

For more examples and detailed information about creating rules, see “Writing tiering rules” (page 236).

Running a migration task

You can use the Data Tiering Wizard to schedule and run migration tasks. You can also start or stop migration tasks from the Data Tiering Task Summary panel. Only one migration task can run on a file system at any time. The task is not restarted on failure, and it cannot be paused and later resumed. However, a migration task can be started when a server is in the InFailover state. To start a migration task, select the file system from the Filesystems panel and then select Data Tiering in the lower Navigator. Click New on the Task Summary panel. The counters on the panel are updated periodically while the task is running.

If necessary, click Stop to stop the data tiering task. There is no pause/resume function. When the task is complete, it appears on the GUI under Inactive Tasks for the file system. You can check the exit status there.

Click Details to see summary information about the task.

Configuring tiers and migrating data using the CLI

Use the ibrix_tier command to manage tier assignments and to list information about tiers. Use the ibrix_migrator command to create or delete rules defining migration policies, to start or stop migration tasks, and to list information about rules and migrator tasks.

Assigning segments to tiers

First determine the segments in the file system and then assign them to tiers. Use the following command to list the segments:

ibrix_fs -f FSNAME -i

For example (the output is truncated):

[root@ibrix01a ~]# ibrix_fs -f ifs1 -i

SEGMENT  OWNER     LV_NAME  STATE  BLOCK_SIZE  CAPACITY(GB)
1        ibrix01b  ilv1     OK     4,096       3,811.11
2        ibrix01a  ilv2     OK     4,096       3,035.67
3        ibrix01b  ilv3     OK     4,096       3,811.11
4        ibrix01a  ilv4     OK     4,096       3,035.67

Use the following command to assign segments to a tier. The tier is created if it does not already exist.

ibrix_tier -a -f FSNAME -t TIERNAME -S SEGLIST

For example, the following command creates Tier1 and assigns segments 1 and 2 to it:

[root@ibrix01a ~]# ibrix_tier -a -f ifs1 -t Tier1 -S 1,2
Assigned segment: 1 (ilv1) to tier Tier1
Assigned segment: 2 (ilv2) to tier Tier1
Command succeeded!

NOTE: Be sure to spell the name of the tier correctly when you add segments to an existing tier. If you spell the name incorrectly, a new tier is created with the incorrect tier name, and no error is reported.
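The assignment workflow above can be summarized as a dry run (commands echoed rather than executed; ifs1, Tier1, Tier2, and the segment numbers are the example values from this section):

```shell
#!/bin/sh
# Dry-run sketch of the segment-to-tier assignment workflow.
FS=ifs1
list_cmd="ibrix_fs -f $FS -i"                    # list segments first
assign1="ibrix_tier -a -f $FS -t Tier1 -S 1,2"   # fast storage
assign2="ibrix_tier -a -f $FS -t Tier2 -S 3,4"   # lower-cost storage
verify_cmd="ibrix_tier -l -f $FS"                # confirm the assignments
for cmd in "$list_cmd" "$assign1" "$assign2" "$verify_cmd"; do
  echo "$cmd"
done
```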

Displaying information about tiers

Use the following command to list the tiers in a file system. The -t option displays information for a specific tier.

ibrix_tier -l -f FSNAME [-t TIERNAME]

For example:

[root@ibrix01a ~]# ibrix_tier -i -f ifs1
Tier: Tier1
===========
FS Name  Segment Number  Tier
-------  --------------  -----
ifs1     1               Tier1
ifs1     2               Tier1

Tier: Tier2
===========
FS Name  Segment Number  Tier
-------  --------------  -----
ifs1     3               Tier2
ifs1     4               Tier2

Defining the primary tier

All new files are written to the primary tier, which is typically the tier built on the fastest storage. Use the following command to define the primary tier:

ibrix_fs_tune -f FILESYSTEM -h SERVERS -t TIERNAME

The following example specifies Tier1 as the primary tier:

ibrix_fs_tune -f ifs1 -h ibrix1a,ibrix1b -t Tier1

This policy takes precedence over any other file allocation policies defined for the file system.

NOTE: This example assumes users access the files over CIFS, NFS, FTP, or HTTP. If IBRIX 9000 clients are used, the allocation policy must be applied to the clients. (Use -h to specify the clients.)

Creating a tiering policy

To create a rule for migrating data from a source tier to a destination tier, use the following command:

ibrix_migrator -A -f FSNAME -r RULE -S SOURCE_TIER -D DESTINATION_TIER

The following rule migrates all files that have not been modified for 30 minutes from Tier1 to Tier2:

[root@ibrix01a ~]# ibrix_migrator -A -f ifs1 -r 'mtime older than 30 minutes' -S Tier1 -D Tier2
Rule: mtime

===========
FS Name  Id  Rule                         Source Tier  Destination Tier
-------  --  ---------------------------  -----------  ----------------
ifs1     9   mtime older than 30 minutes  Tier1        Tier2
ifs1     10  name = "*.mpeg4"             Tier1        Tier2
ifs1     11  size > 4M                    Tier1        Tier2

Running a migration task

To start a migration task, use the following command:

ibrix_migrator -s -f FSNAME

For example:

[root@ibrix01a ~]# ibrix_migrator -s -f ifs1
Submitted Migrator operation to background. ID of submitted task: Migrator_163
Command succeeded!

NOTE: The ibrix_migrator command cannot be run at the same time as ibrix_rebalance.

To list the active migration task for a file system, use the ibrix_migrator -i option. For example:

[root@ibrix01a ~]# ibrix_migrator -i -f ifs1
Operation: Migrator_163
=======================
Task Summary
============
Task Id                  : Migrator_163
Type                     : Migrator
File System              : ifs1
Submitted From           : root from Local Host
Run State                : STARTING
Active?                  : Yes
EXIT STATUS              :
Started At               : Jan 17, 2012 10:32:55
Coordinator Server       : ibrix01b
Errors/Warnings          :
Dentries scanned         : 0
Number of Inodes moved   : 0
Number of Inodes skipped : 0
Avg size (kb)            : 0
Avg Mb Per Sec           : 0
Number of errors         : 0

To view summary information after the task has completed, use the ibrix_task -i command and include the -n option, which specifies the task ID. (The task ID appears in the output from ibrix_migrator -i.)

[root@ibrix01a testdata1]# ibrix_task -i -n Migrator_163
Operation: Migrator_163
=======================
Task Summary
============
Task Id                  : Migrator_163
Type                     : Migrator
File System              : ifs1
Submitted From           : root from Local Host
Run State                : STOPPED
Active?                  : No
EXIT STATUS              : OK
Started At               : Jan 17, 2012 10:32:55
Coordinator Server       : ibrix01b
Errors/Warnings          :
Dentries scanned         : 1025
Number of Inodes moved   : 1002
Number of Inodes skipped : 1
Avg size (kb)            : 525

Avg Mb Per Sec           : 16
Number of errors         : 0

Stopping a migration task

To stop a migration task, use the following command:

ibrix_migrator -k -t TASKID [-F]

Changing the tiering configuration with the CLI

The following restrictions apply when changing the configuration:
• You cannot modify the tiering configuration for a file system while an active migration task is running.
• You cannot move segments between tiers, assign them to new tiers, or unassign them from tiers while an active migration task is running or while any rules exist that apply to the segments.

Moving a segment to another tier

Use the following command to assign a segment to another tier:

ibrix_tier -a -f FSNAME -t TIERNAME -S SEGLIST

Removing a segment from a tier

The following command removes segments from a tier. If you do not specify a segment list, all segments in the file system are unassigned.

ibrix_tier -u -f FSNAME [-S SEGLIST]

The following example removes segments 3 and 4 from their current tier assignment:

[root@ibrix01a ~]# ibrix_tier -u -f ifs1 -S 3,4

Deleting a tier

Before deleting a tier, take these steps:
• Delete all policy rules defined for the tier.
• Allow any active tiering jobs to complete.

To unassign all segments and delete the tier, use the following command:

ibrix_tier -d -f FSNAME -t TIERNAME

Deleting a tiering rule

Before deleting a rule, run the ibrix_migrator -l [-f FSNAME] -r command and note the ID assigned to the rule. Then use the following command to delete the rule:

ibrix_migrator -d -f FSNAME -r RULE_ID

The -r option specifies the rule ID. For example:

[root@ibrix01a ~]# ibrix_migrator -d -f ifs2 -r 2

Writing tiering rules

A tiering policy consists of one or more rules that specify how data is migrated from one tier to another. You can write rules using the GUI, or you can write them directly to the configuration database using the ibrix_migrator -A command.
Rule attributes

Each rule identifies file attributes to be matched. It also specifies the source tier to scan and the destination tier where files that meet the rule’s criteria will be moved and stored.

Note the following:
• Tiering rules are based on individual file attributes.
• All rules are executed when the tiering policy is applied during execution of the ibrix_migrator command.
• It is important that different rules do not target the same files, especially if different destination tiers are specified. If tiering rules are ambiguous, the final destination for a file is not predictable. See “Ambiguous rules” (page 239) for more information.

The following are examples of attributes that can be specified in rules. All attributes are listed in “Rule keywords” (page 238). You can use AND and OR operators to create combinations.
• Access time: file was last accessed x or more days ago, or file was accessed in the last y days
• Modification time: file was last modified x or more days ago
• File size: greater than n K
• File name or file type: jpg, wav, exe (include or exclude)
• File ownership: owned by user(s) (include or exclude)

Use of the tiering assignments or policy on any file system is optional. Tiering is not assigned by default; there is no “default” tier.

Operators and date/time qualifiers

Valid rule operators are <, <=, =, !=, >, >=, and the boolean operators and and or. Use the following qualifiers for fixed times and dates:
• Time: Enter as three pairs of colon-separated integers using a 24-hour clock. The format is hh:mm:ss (for example, 15:30:00).
• Date: Enter as yyyy-mm-dd [hh:mm:ss], where time of day is optional (for example, 2008-06-04 or 2008-06-04 15:30:00). Note the space separating the date and time.

When specifying an absolute date and/or time, the rule must use a compare-type operator (< | <= | = | != | > | >=). For example:

ibrix_migrator -A -f ifs2 -r "atime > '2010-09-23' " -S TIER1 -D TIER2

Use the following qualifiers for relative times and dates:
• Relative time: Enter in rules as year or years, month or months, week or weeks, day or days, hour or hours.
• Relative date: Use older than or younger than.
The rules engine uses the time the ibrix_migrator command starts execution as the start time for the rule, and computes the required time for the rule relative to this start time. For example, ctime older than 4 weeks refers to the period more than 4 weeks before the start time. The following example uses a relative date:
ibrix_migrator -A -f ifs2 -r "atime older than 2 days" -S TIER1 -D TIER2
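As a rough sketch of how these qualifiers behave, the following Python fragment computes fixed and relative cutoff times as described above. This is illustrative only; the helper names (relative_cutoff, parse_fixed) are hypothetical and not part of IBRIX, and calendar months/years are approximated as 30/365 days.

```python
from datetime import datetime, timedelta

# Hypothetical helpers; the real rules engine is internal to ibrix_migrator.
UNITS = {"hour": "hours", "day": "days", "week": "weeks"}

def relative_cutoff(start, amount, unit):
    """Cutoff for a rule like 'ctime older than <amount> <unit>', computed
    from the migrator job's start time."""
    unit = unit.rstrip("s")              # accept 'day' or 'days'
    if unit == "month":                  # calendar units approximated here
        return start - timedelta(days=30 * amount)
    if unit == "year":
        return start - timedelta(days=365 * amount)
    return start - timedelta(**{UNITS[unit]: amount})

def parse_fixed(date_str):
    """Parse a fixed qualifier: yyyy-mm-dd with optional hh:mm:ss."""
    fmt = "%Y-%m-%d %H:%M:%S" if " " in date_str else "%Y-%m-%d"
    return datetime.strptime(date_str, fmt)

start = datetime(2012, 12, 1, 12, 0, 0)
print(relative_cutoff(start, 4, "weeks"))   # 4 weeks before the start time
print(parse_fixed("2008-06-04 15:30:00"))
```

A file matches "ctime older than 4 weeks" when its change time falls before the computed cutoff.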

Rule keywords

The following keywords can be used in rules.

Keyword Description

atime Access time, used in a rule as a fixed or relative time.

ctime Change time, used in a rule as a fixed or relative time.

mtime Modification time, used in a rule as a fixed or relative time.

gid An integer corresponding to a group ID.

gname A string corresponding to a group name. Enclose the name string in double quotes.

uid An integer corresponding to a user ID.

uname A string corresponding to a user name, where the user is the owner of the file. Enclose the name string in double quotes.

type File system entity the rule operates on. Currently, only the file entity is supported.

size In size-based rules, the threshold value for determining migration. Value is an integer specified in K (KB), M (MB), G (GB), or T (TB). Do not separate the value from its unit (for example, 24K).

name Regular expression. A typical use of a regular expression is to match file names. Enclose a regular expression in double quotes. The * wildcard is valid (for example, name = "*.mpg"). A name cannot contain a / character. You cannot specify a path; only a filename is allowed.

path Path name that allows these wild cards: *, ?, /. For example, if the mountpoint for the file system is /mnt, path=ibfs1/mydir/* matches the entire directory subtree under /mnt/ibfs1/mydir. (A path cannot start with a /).

strict_path Path name that rigidly conforms to UNIX shell file name expansion behavior. For example, strict_path=/mnt/ibfs1/mydir/* matches only the files that are explicitly in the mydir directory, but does not match any files in subdirectories of mydir.

Migration rule examples

When you write a rule, identify the following components:
• File system (-f)
• Source tier (-S)
• Destination tier (-D)

Use the following command to write a rule. The rule portion of the command must be enclosed in single quotes.
ibrix_migrator -A -f FSNAME -r 'RULE' -S SOURCE_TIER -D DEST_TIER

Examples:

The rule in the following example is based on the file’s last modification time, using a relative time period. All files whose last modification date is more than one month in the past are moved.
# ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month' -S T1 -D T2

In the next example, the rule is modified to limit the files being migrated to two types of graphic files. The or expression is enclosed in parentheses, and the * wildcard is used to match filename patterns.
# ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month and ( name = "*.jpg" or name = "*.gif" )' -S T1 -D T2

In the next example, three conditions are imposed on the migration. Note that there is no space between the integer and unit that define the size threshold (10M):
# ibrix_migrator -A -f ifs2 -r 'ctime older than 1 month and type = file and size >= 10M' -S T1 -D T2
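Conceptually, a combined rule is just a boolean expression evaluated per file. The following Python sketch models how a rule such as mtime older than 1 month and ( name = "*.jpg" or name = "*.gif" ) and size >= 10M selects files; the function and field names are hypothetical, not IBRIX internals.

```python
import fnmatch
import time

# Illustrative model only: how a combined tiering rule selects files.
ONE_MONTH = 30 * 24 * 3600   # seconds; calendar months approximated

def matches_rule(name, size, mtime, now):
    """Model of: mtime older than 1 month and
       ( name = "*.jpg" or name = "*.gif" ) and size >= 10M"""
    older_than_month = mtime < now - ONE_MONTH
    name_ok = fnmatch.fnmatch(name, "*.jpg") or fnmatch.fnmatch(name, "*.gif")
    return older_than_month and name_ok and size >= 10 * 1024 * 1024

now = time.time()
print(matches_rule("a.jpg", 20 * 1024 * 1024, now - 90 * 86400, now))  # True
print(matches_rule("a.txt", 20 * 1024 * 1024, now - 90 * 86400, now))  # False
```

Only files for which the whole expression is true are moved from the source tier to the destination tier.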

The following example uses the path keyword. It moves files greater than or equal to 5M that are under the directory /ifs2/tiering_test from TIER1 to TIER2:
ibrix_migrator -A -f ifs2 -r "path = tiering_test and size >= 5M" -S TIER1 -D TIER2

Rules can be group- or user-based as well as time- or date-based. In the following example, files associated with two users are migrated to T2 with no consideration of time. The names are quoted strings.
# ibrix_migrator -A -f ifs2 -r 'type = file and ( uname = "ibrixuser" or uname = "nobody" )' -S T1 -D T2

Conditions can be combined with and and or to create very precise tiering rules, as shown in the following example.
# ibrix_migrator -A -f ifs2 -r '(ctime older than 3 weeks and ctime younger than 4 weeks) and type = file and ( name = "*.jpg" or name = "*.gif" ) and (size >= 10M and size <= 25M)' -S T1 -D T2

Ambiguous rules

It is possible to write a set of ambiguous rules, where different rules could move a file to conflicting destinations. If a file can be matched by two separate rules, there is no guarantee which rule will be applied in a tiering job. Ambiguous rules can cause a file to be moved to a specific tier and then potentially moved back. Examples of two such situations follow.

Example 1: In the following example, if a .jpg file older than one month exists in tier 1, the first rule moves it to tier 2. However, once it is in tier 2, it is matched by the second rule, which moves the file back to tier 1.
# ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month' -S T1 -D T2
# ibrix_migrator -A -f ifs2 -r 'name = "*.jpg"' -S T2 -D T1

There is no guarantee as to the order in which the two rules are executed; the final destination is therefore ambiguous because multiple rules can apply to the same file.

Example 2: Rules can cause data movement in both directions, which can lead to issues.
In the following example, the rules specify that all .doc files in tier 1 be moved to tier 2 and all .jpg files in tier 2 be moved to tier 1. However, this might not succeed, depending on how full the tiers are.
# ibrix_migrator -A -f ifs2 -r 'name = "*.doc"' -S T1 -D T2
# ibrix_migrator -A -f ifs2 -r 'name = "*.jpg"' -S T2 -D T1

For example, if tier 1 is filled with .doc files to 70% capacity and tier 2 is filled with .jpg files to 80% capacity, tiering might terminate before it can fully "swap" the contents of tier 1 and tier 2. Files are processed in no particular order, so it is possible that more .doc files are encountered at the beginning of the job, causing space on tier 2 to be consumed faster than on tier 1. Once a destination tier is full, no further movement in that direction is possible.

The rules in these two examples are ambiguous because they give rise to conflicting file movement. It is your responsibility to write unambiguous rules for the data tiering policy on your file systems.
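A simple pre-flight check can catch the ping-pong pattern from Example 1. The sketch below is a hypothetical helper, not an IBRIX command: given candidate file names and rules reduced to (name pattern, source tier, destination tier), it flags any file that two rules would shuttle between the same pair of tiers.

```python
import fnmatch

# Hypothetical pre-flight check, not an IBRIX tool.
def find_ambiguous(rules, filenames):
    """rules: list of (name_pattern, source_tier, dest_tier) tuples."""
    ambiguous = set()
    for name in filenames:
        moves = [(s, d) for pat, s, d in rules if fnmatch.fnmatch(name, pat)]
        for s, d in moves:
            if (d, s) in moves:   # another matching rule moves it straight back
                ambiguous.add(name)
    return ambiguous

rules = [("*", "T1", "T2"),       # stands in for 'mtime older than 1 month'
         ("*.jpg", "T2", "T1")]   # name = "*.jpg"
print(find_ambiguous(rules, ["a.jpg", "a.doc"]))  # {'a.jpg'}
```

Time- and size-based predicates would need the same overlap analysis, which is why keeping rule sets disjoint by construction is the safer practice.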

18 Using file allocation

This chapter describes how to configure and manage file allocation.

Overview

IBRIX software allocates new files and directories to segments according to the allocation policy and segment preferences that are in effect for a client. An allocation policy is an algorithm that determines the segments that are selected when clients write to a file system.

File allocation policies

File allocation policies are set per file system on each file serving node and on the 9000 client. The policies define the following:
• Preferred segments. The segments where a file serving node or the 9000 client creates all new files and directories.
• Allocation policy. The policy that a file serving node or the 9000 client uses to choose segments from its pool of preferred segments when creating new files and directories.

The segment preferences and allocation policy are set locally for the 9000 client. For NFS, CIFS, HTTP, and FTP clients (collectively referred to as NAS clients), the allocation policy and segment preferences must be set on the file serving nodes from which the NAS clients access shares. Segment preferences and allocation policies can be set and changed at any time, including while the target file system is mounted and in use.

IMPORTANT: It is possible to set separate allocation policies for files and directories. However, this feature is deprecated and should not be used unless you are directed to do so by HP support.

NOTE: The 9000 client accesses segments directly through the owning file serving node and does not honor the file allocation policy set on file serving nodes.

IMPORTANT: Changing segment preferences and allocation policy will alter file system storage behavior. The following tables list standard and deprecated preference settings and allocation policies.

Standard segment preferences and allocation policies


ALL Prefer all of the segments available in the file system for new files and directories. This is the default segment preference. It is suitable for most use cases.

LOCAL Prefer the file serving node’s local segments for new files and directories. No writes are routed between the file serving nodes in the cluster. This preference is beneficial for performance in some configurations and for some workloads, but can cause some segments to be overutilized.

RANDOM Allocate files to a randomly chosen segment among the preferred segments. This is the default allocation policy. It generally spreads new files and directories evenly (by number of files, not by capacity) across all of the preferred segments; however, that is not guaranteed.

ROUNDROBIN Allocate files to preferred segments in segment order, returning to the first segment (or the designated starting segment) when a file or directory has been allocated to the last segment. This policy guarantees that new files and folders are spread evenly across the preferred segments (by number of files, not by capacity).
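The two standard allocation policies can be modeled roughly as follows. This Python sketch is illustrative only; the generator names are hypothetical and the real allocators are internal to IBRIX.

```python
import itertools
import random

# Illustrative models of the two standard policies; not IBRIX internals.
def random_allocator(preferred, rng=random.Random(0)):
    """RANDOM: each new file goes to a randomly chosen preferred segment."""
    while True:
        yield rng.choice(preferred)

def roundrobin_allocator(preferred, start=0):
    """ROUNDROBIN: walk the preferred segments in order, wrapping back to
    the designated starting segment after the last one."""
    yield from itertools.cycle(preferred[start:] + preferred[:start])

segs = ["seg1", "seg2", "seg3"]
rr = roundrobin_allocator(segs, start=1)
print([next(rr) for _ in range(5)])  # ['seg2', 'seg3', 'seg1', 'seg2', 'seg3']
```

The sketch makes the trade-off visible: ROUNDROBIN guarantees an even spread by file count, while RANDOM only tends toward one.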

Deprecated segment preferences and allocation policies

IMPORTANT: HP recommends that you do not use these options. They are currently supported but will be removed in a future release.


AUTOMATIC Lets the IBRIX software select the allocation policy. Should be used only on the advice of HP support.

DIRECTORY Allocates files to the segment where the parent directory is located. Should be used only on the advice of HP support.

STICKY Allocates files to one segment until the segment’s storage limit is reached, and then moves to the next segment as determined by the AUTOMATIC file allocation policy. Should be used only on the advice of HP support.

HOST_ROUNDROBIN_NB For clusters with more than 16 file serving nodes, takes a subset of the servers to be used for file creation and rotates this subset on a regular, periodic basis. Should be used only on the advice of HP support.

NONE Sets the directory allocation policy only. Causes the directory allocation policy to revert to its default, which is the policy set for file allocation. Use NONE only to set file and directory allocation to the same policy.

How file allocation settings are evaluated

By default, ALL segments are preferred and file systems use the RANDOM allocation policy. These defaults are adequate for most IBRIX environments, but in some cases it may be necessary to change them to optimize file storage for your system.

An IBRIX 9000 client or IBRIX file serving node (referred to as “the host”) uses the following precedence rules to evaluate the file allocation settings that are in effect:
• The host uses the default allocation policy and the default segment preference: the RANDOM policy is applied, and a segment is chosen from among ALL the available segments.
• The host uses a non-default allocation policy (such as ROUNDROBIN) and the default segment preference: only the file or directory allocation policy is applied, and a segment is chosen from among ALL available segments.
• The host uses a non-default segment preference and a non-default allocation policy (such as LOCAL/ROUNDROBIN): a segment is chosen according to the following rules:
◦ From the pool of preferred segments, select a segment according to the allocation policy set for the host, and store the file in that segment if there is room. If all segments in the pool are full, proceed to the next rule.
◦ Use the AUTOMATIC allocation policy to choose a segment with enough storage room from among the available segments, and store the file.
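The precedence rules above can be sketched as a short function. This is an illustrative model only; choose_segment and its arguments are hypothetical, not IBRIX APIs.

```python
# Illustrative model of the precedence rules; free maps segment -> free space.
def choose_segment(preferred, all_segments, free, size):
    """Try the preferred pool first (where the host's policy applies), then
    fall back to any segment with enough room, modeling AUTOMATIC."""
    pool = preferred or all_segments     # default segment preference is ALL
    for seg in pool:                     # host's policy applied over the pool
        if free.get(seg, 0) >= size:
            return seg
    for seg in all_segments:             # pool full: AUTOMATIC fallback
        if free.get(seg, 0) >= size:
            return seg
    return None                          # every segment is full

free = {"seg1": 0, "seg2": 0, "seg3": 500}
# Preferred pool {seg1, seg2} is full, so the fallback picks seg3:
print(choose_segment(["seg1", "seg2"], ["seg1", "seg2", "seg3"], free, 100))
```

The key point the model captures is that a non-default segment preference never strands a write: when the preferred pool is full, allocation falls back to the wider set of available segments.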

When file allocation settings take effect on the 9000 client

Although file allocation settings are executed immediately on file serving nodes, for the 9000 client a file allocation intention is stored in the Fusion Manager. When IBRIX software services start on a client, the client queries the Fusion Manager for the file allocation settings that it should use and then implements them. If the services are already running on a client, you can force the client to query the Fusion Manager by executing ibrix_client or ibrix_lwhost --a on the client, or by rebooting the client.

Using CLI commands for file allocation

Follow these guidelines when using CLI commands to perform file allocation configuration tasks:
• To perform a task for NAS clients (NFS, CIFS, FTP, HTTP), specify file serving nodes for the -h HOSTLIST argument.
• To perform a task for IBRIX 9000 clients, specify individual clients for -h HOSTLIST or specify a hostgroup for -g GROUPLIST. Hostgroups are a convenient way to configure file allocation settings for a set of IBRIX 9000 clients. To configure file allocation settings for all IBRIX 9000 clients, specify the clients hostgroup.

Setting file and directory allocation policies

You can set a nondefault file or directory allocation policy for file serving nodes and IBRIX 9000 clients. You can also specify the first segment where the policy should be applied, but in practice this is useful only for the ROUNDROBIN policy.

IMPORTANT: Certain allocation policies are deprecated. See “File allocation policies” (page 240) for a list of standard allocation policies.

On the GUI, open the Modify Filesystem Properties dialog box and select the Host Allocation tab.

Setting file and directory allocation policies from the CLI

Allocation policy names are case sensitive and must be entered in uppercase (for example, RANDOM).

Set a file allocation policy:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -s LVNAMELIST -p POLICY [-S STARTSEGNUM]

The following example sets the ROUNDROBIN policy for files only on file system ifs1 on file serving node s1.hp.com, starting at segment ilv1:
ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -s ilv1

Set a directory allocation policy. Include the -R option to specify that the command applies to directories:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -p POLICY [-S STARTSEGNUM] [-R]

The following example sets the ROUNDROBIN directory allocation policy on file system ifs1 for file serving node s1.hp.com:
ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -R

Setting segment preferences

There are two ways to prefer segments for file serving nodes, IBRIX 9000 clients, or hostgroups:
• Prefer a pool of segments for the hosts to use.
• Prefer a single segment for files created by a specific user or group on the clients.

Both methods can be in effect at the same time. For example, you can prefer a segment for a user and then prefer a pool of segments for the clients on which the user will be working.

On the GUI, open the Modify Filesystem Properties dialog box and select the Segment Preferences tab.

Creating a pool of preferred segments from the CLI

A segment pool can consist of individually selected segments, all segments local to a file serving node, or all segments. Clients apply the allocation policy that is in effect for them to choose a segment from the segment pool.

NOTE: Segments are always created in the preferred condition. If you want some segments preferred and others unpreferred, first select a single segment and prefer it; this action unprefers all other segments. You can then work with the segments one at a time, preferring and unpreferring as required. By design, the system cannot have zero preferred segments. If only one segment is preferred and you unprefer it, all segments become preferred.

When preferring multiple pools of segments (for example, one for IBRIX 9000 clients and one for file serving nodes), make sure that no segment appears in both pools.

Use the following command to specify the pool by logical volume name (LVNAMELIST):
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -s LVNAMELIST

Use the following command with the LOCAL keyword to create a pool of all segments on file serving nodes, or with the ALL keyword to restore the default segment preferences:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -S {SEGNUMLIST|ALL|LOCAL}

Restoring the default segment preference

The default is for all file system segments to be preferred. Use the following command to restore the default value:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -S ALL
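Before applying the commands above, it is worth verifying that no segment appears in more than one preferred pool. The following sketch is a hypothetical sanity check, not an IBRIX command; pool names and the helper are made up for illustration.

```python
# Hypothetical sanity check: verify that segment pools preferred for
# different host sets do not overlap.
def overlapping_segments(pools):
    """pools: dict mapping a host-set name to its preferred segment list."""
    seen, overlaps = {}, set()
    for host_set, segments in pools.items():
        for seg in segments:
            if seg in seen and seen[seg] != host_set:
                overlaps.add(seg)      # segment claimed by two host sets
            seen[seg] = host_set
    return overlaps

pools = {"clients": ["ilv1", "ilv2"], "servers": ["ilv2", "ilv3"]}
print(overlapping_segments(pools))     # ilv2 appears in both pools
```

An empty result means the pools are disjoint and safe to apply with ibrix_fs_tune.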

Tuning allocation policy settings

To optimize system performance, you can globally change the following allocation policy settings for a file system:
• File allocation policy.

IMPORTANT: Certain allocation policies are deprecated. See “File allocation policies” (page 240) for a list of standard allocation policies.

• Starting segment number for applying changes. • Preallocation: number of KB to preallocate for files. • Readahead: number of KB in a file to pre-fetch. • NFS readahead: number of KB in a file to pre-fetch on NFS systems.

NOTE: Preallocation, Readahead, and NFS readahead are set to the recommended values during the installation process. Contact HP Support for guidance if you want to change these values.

On the GUI, open the Modify Filesystem Properties dialog box and select the Allocation tab.

Restore the default file allocation policy:
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -p -U

Listing allocation policies

Use the following command to list the preferred segments (the -S option) or the allocation policy (the -P option) for the specified hosts, hostgroups, or file system:
ibrix_fs_tune -l [-S] [-P] [-h HOSTLIST|-g GROUPLIST] [-f FSNAME]

Sample output:
HOSTNAME      FSNAME POLICY STARTSEG DIRPOLICY DIRSEG SEGBITS READAHEAD PREALLOC HWM     SWM
mak01.hp.com  ifs1   RANDOM 0        NONE      0      DEFAULT DEFAULT   DEFAULT  DEFAULT DEFAULT

19 Support and other resources

Contacting HP

For worldwide technical support information, see the HP support website: http://www.hp.com/support

Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions

Related information

The following documents provide related information:
• HP IBRIX 9000 Storage Release Notes
• HP IBRIX 9000 Storage CLI Reference Guide
• HP IBRIX 9300/9320 Storage Administrator Guide
• HP IBRIX 9720/9730 Storage Administrator Guide
• HP IBRIX 9000 Storage Installation Guide
• HP IBRIX 9000 Storage Network Best Practices Guide

Related documents are available on the IBRIX Manuals page at http://www.hp.com/support/IBRIXManuals.

HP websites

For additional information, see the following HP websites:
• http://www.hp.com
• http://www.hp.com/go/StoreAll
• http://www.hp.com/go/storage
• http://www.hp.com/support/manuals

Subscription service

HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/go/e-updates

After registering, you will receive e-mail notification of product enhancements, new driver versions, firmware updates, and other product resources.

20 Documentation feedback

HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback ([email protected]). Include the document title and part number, version number, or the URL when submitting your feedback.

Glossary

ACE Access control entry.
ACL Access control list.
ADS Active Directory Service.
ALB Advanced load balancing.
BMC Baseboard management controller.
CIFS Common Internet File System. The protocol used in Windows environments for shared folders.
CLI Command-line interface. An interface composed of commands used to control operating system responses.
CSR Customer self repair.
DAS Direct attach storage. A dedicated storage device that connects directly to one or more servers.
DNS Domain name system.
FTP File Transfer Protocol.
GSI Global service indicator.
HA High availability.
HBA Host bus adapter.
HCA Host channel adapter.
HDD Hard disk drive.
IAD HP 9000 Software Administrative Daemon.
iLO Integrated Lights-Out.
IML Initial microcode load.
IOPS I/Os per second.
IPMI Intelligent Platform Management Interface.
JBOD Just a bunch of disks.
KVM Keyboard, video, and mouse.
LUN Logical unit number. A LUN results from mapping a logical unit number, port ID, and LDEV ID to a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the number of LDEVs associated with the LUN.
MTU Maximum Transmission Unit.
NAS Network attached storage.
NFS Network file system. The protocol used in most UNIX environments to share folders or mounts.
NIC Network interface card. A device that handles communication between a device and other devices on a network.
NTP Network Time Protocol. A protocol that enables the storage system’s time and date to be obtained from a network-attached server, keeping multiple hosts and storage devices synchronized.
OA Onboard Administrator.
OFED OpenFabrics Enterprise Distribution.
OSD On-screen display.
OU Active Directory Organizational Unit.
RO Read-only access.
RPC Remote Procedure Call.
RW Read-write access.
SAN Storage area network. A network of storage devices available to one or more servers.
SAS Serial Attached SCSI.

SELinux Security-Enhanced Linux.
SFU Microsoft Services for UNIX.
SID Secondary controller identifier number.
SMB Server Message Block. The protocol used in Windows environments for shared folders.
SNMP Simple Network Management Protocol.
TCP/IP Transmission Control Protocol/Internet Protocol.
UDP User Datagram Protocol.
UID Unit identification.
VACM SNMP View Access Control Model.
VC HP Virtual Connect.
VIF Virtual interface.
WINS Windows Internet Naming Service.
WWN World Wide Name. A unique identifier assigned to a Fibre Channel device.
WWNN World wide node name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN World wide port name. A unique 64-bit address used in a Fibre Channel storage network to identify each device in the network.
