Updated for 8.3.1

Clustered Data ONTAP® 8.3 File Access Management Guide for NFS

NetApp, Inc.
495 East Java Drive
Sunnyvale, CA 94089 U.S.
Telephone: +1 (408) 822-6000
Fax: +1 (408) 822-4501
Support telephone: +1 (888) 463-8277
Web: www.netapp.com
Feedback: [email protected]
Part number: 215-10105_A0
June 2015


Contents

Considerations before configuring file access ...... 10
    File protocols that Data ONTAP supports ...... 10
    How Data ONTAP controls access to files ...... 10
    Authentication-based restrictions ...... 10
    File-based restrictions ...... 11
    LIF configuration requirements for file access management ...... 11
    How namespaces and volume junctions affect file access on SVMs with FlexVol volumes ...... 11
    What namespaces in SVMs with FlexVol volumes are ...... 11
    Volume junction usage rules ...... 12
    How volume junctions are used in SMB and NFS namespaces ...... 12
    What the typical NAS namespace architectures are ...... 12
    Creating and managing data volumes in NAS namespaces ...... 15
    Creating data volumes with specified junction points ...... 15
    Creating data volumes without specifying junction points ...... 16
    Mounting or unmounting existing volumes in the NAS namespace ...... 17
    Displaying volume mount and junction point information ...... 18
    How security styles affect data access ...... 19
    What the security styles and their effects are ...... 19
    Where and when to set security styles ...... 20
    How to decide on what security style to use on SVMs with FlexVol volumes ...... 20
    How security style inheritance works ...... 21
    How Data ONTAP preserves UNIX permissions ...... 21
    How to manage UNIX permissions using the Windows Security tab ...... 21
    Configuring security styles ...... 22
    Configuring security styles on SVM root volumes ...... 22
    Configuring security styles on FlexVol volumes ...... 23
    Configuring security styles on qtrees ...... 23
    NFS and CIFS file naming dependencies ...... 24
    Characters a file name can use ...... 24
    Case-sensitivity of a file name ...... 24
    How Data ONTAP creates file names ...... 24
    How Data ONTAP handles file names containing UTF-16 supplementary characters ...... 25
    Use of hard mounts ...... 25
How Data ONTAP supports file access using NFS ...... 26
    How Data ONTAP handles NFS client authentication ...... 26
    How Data ONTAP uses name services ...... 26
    How Data ONTAP grants CIFS file access from NFS clients ...... 27
    Supported NFS versions and clients ...... 27
    NFSv4.0 functionality supported by Data ONTAP ...... 28
    Limitations of Data ONTAP support for NFSv4 ...... 28
    Data ONTAP support for NFSv4.1 ...... 29
    Data ONTAP support for parallel NFS ...... 29
    Where to find information about NFS support on Infinite Volumes ...... 29
    Process for NFS access to UNIX security style data on SVMs with FlexVol volumes ...... 29
    Process for NFS access to NTFS security style data on SVMs with FlexVol volumes ...... 30
Setting up file access using NFS ...... 31
    Modifying protocols for SVMs ...... 31
    Creating an NFS server ...... 32
    Securing NFS access using export policies ...... 32
    How export policies control client access to volumes or qtrees ...... 33
    Default export policy for SVMs with FlexVol volumes ...... 33
    How export rules work ...... 33
    How to handle clients with an unlisted security type ...... 35
    How security types determine client access levels ...... 36
    How to handle superuser access requests ...... 38
    Creating an export policy ...... 40
    Adding a rule to an export policy ...... 40
    Loading netgroups into SVMs ...... 43
    Verifying the status of netgroup definitions ...... 44
    Setting an export rule's index number ...... 45
    Associating an export policy to a FlexVol volume ...... 46
    Assigning an export policy to a qtree ...... 47
    Removing an export policy from a qtree ...... 48
    Validating qtree IDs for qtree file operations ...... 48
    Export policy restrictions and nested junctions for FlexVol volumes ...... 49
    Checking client access to exports ...... 49
    Using Kerberos with NFS for strong security ...... 50
    Data ONTAP support for Kerberos ...... 50
    Requirements for configuring Kerberos with NFS ...... 50
    Configuring NFS Kerberos permitted encryption types ...... 53
    Specifying the user ID domain for NFSv4 ...... 54
    Creating an NFS Kerberos realm configuration ...... 55
    Creating an NFS Kerberos configuration ...... 56
    Configuring name services ...... 57
    How Data ONTAP name service switch configuration works ...... 57
    Configuring the name service switch table ...... 59
    Using LDAP ...... 60
    Using LDAP over SSL/TLS to secure communication ...... 60
    Creating a new LDAP client schema ...... 62
    Enabling LDAP RFC2307bis support ...... 63
    Configuration options for LDAP directory searches ...... 64
    Creating an LDAP client configuration ...... 65
    Improving performance of LDAP directory netgroup-by-host searches ...... 67
    Enabling LDAP on SVMs ...... 68
    Configuring SVMs to use LDAP ...... 69
    Creating a NIS domain configuration ...... 70
    Configuring local UNIX users and groups ...... 71
    Creating a local UNIX user ...... 71
    Adding a user to a local UNIX group ...... 72
    Loading local UNIX users from a URI ...... 73
    Creating a local UNIX group ...... 74
    Loading local UNIX groups from a URI ...... 75
    How name mappings are used ...... 76
    How name mapping works ...... 77
    Multidomain searches for UNIX user to Windows user name mappings ...... 77
    Name mapping conversion rules ...... 79
    Creating a name mapping ...... 80
    Configuring the default user ...... 81
    Support for NFS over IPv6 ...... 82
    Enabling IPv6 for NFS ...... 82
    Enabling access for Windows NFS clients ...... 82
    Where to find information about setting up file access to Infinite Volumes ...... 83
Managing file access using NFS ...... 84
    Enabling or disabling NFSv3 ...... 84
    Enabling or disabling NFSv4.0 ...... 84
    Enabling or disabling NFSv4.1 ...... 84
    Enabling or disabling pNFS ...... 85
    Enabling NFS clients to view exports of SVMs ...... 85
    Controlling NFS access over TCP and UDP ...... 86
    Controlling NFS requests from nonreserved ports ...... 87
    Handling NFS access to NTFS volumes or qtrees for unknown UNIX users ...... 87
    Considerations for clients that mount NFS exports using a nonreserved port ...... 88
    Performing stricter access checking for netgroups by verifying domains ...... 89
    Securing file access by using Storage-Level Access Guard ...... 89
    Modifying ports used for NFSv3 services ...... 91
    Commands for managing NFS servers ...... 92
    Troubleshooting name service issues ...... 93
    Commands for managing name service switch entries ...... 95
    Commands for managing name mappings ...... 95
    Commands for managing local UNIX users ...... 96
    Commands for managing local UNIX groups ...... 96
    Limits for local UNIX users, groups, and group members ...... 97
    Managing limits for local UNIX users and groups ...... 97
    Commands for managing local netgroups ...... 98
    Commands for managing NIS domain configurations ...... 98
    Commands for managing LDAP client configurations ...... 99
    Commands for managing LDAP configurations ...... 99
    Commands for managing LDAP client schema templates ...... 100
    Commands for managing NFS Kerberos interface configurations ...... 100
    Commands for managing NFS Kerberos realm configurations ...... 101
    Commands for managing export policies ...... 101
    Commands for managing export rules ...... 102
    Fencing or unfencing exports in clustered Data ONTAP ...... 102
    Configuring the NFS credential cache ...... 103
    How the NFS credential cache works ...... 103
    Reasons for modifying the NFS credential cache time-to-live ...... 103
    Configuring the time-to-live for cached NFS user credentials ...... 104
    Managing export policy caches ...... 105
    How Data ONTAP uses export policy caches ...... 105
    Flushing export policy caches ...... 106
    Displaying the export policy netgroup queue and cache ...... 106
    Checking whether a client IP address is a member of a netgroup ...... 107
    How the access cache works ...... 108
    How access cache parameters work ...... 108
    Optimizing access cache performance ...... 109
    Managing file locks ...... 110
    About file locking between protocols ...... 110
    How Data ONTAP treats read-only bits ...... 110
    Displaying information about locks ...... 111
    Breaking locks ...... 113
    Modifying the NFSv4.1 server implementation ID ...... 113
    Managing NFSv4 ACLs ...... 114
    Benefits of enabling NFSv4 ACLs ...... 114
    How NFSv4 ACLs work ...... 114
    Enabling or disabling modification of NFSv4 ACLs ...... 115
    How Data ONTAP uses NFSv4 ACLs to determine whether it can delete a file ...... 116
    Enabling or disabling NFSv4 ACLs ...... 116
    Managing NFSv4 file delegations ...... 117
    How NFSv4 file delegations work ...... 117
    Enabling or disabling NFSv4 read file delegations ...... 118
    Enabling or disabling NFSv4 write file delegations ...... 118
    Configuring NFSv4 file and record locking ...... 119
    About NFSv4 file and record locking ...... 119
    Specifying the NFSv4 locking lease period ...... 120
    Specifying the NFSv4 locking grace period ...... 120
    How NFSv4 referrals work ...... 120
    Enabling or disabling NFSv4 referrals ...... 121
    Displaying NFS statistics ...... 122
    Support for VMware vStorage over NFS ...... 123
    Enabling or disabling VMware vStorage over NFS ...... 123
    Enabling or disabling rquota support ...... 124
    NFSv3 performance improvement by modifying the TCP maximum read and write size ...... 124
    Modifying the NFSv3 TCP maximum read and write size ...... 125
    Configuring the number of group IDs allowed for NFS users ...... 126
    Controlling root user access to NTFS security-style data ...... 127
Auditing NAS events on SVMs with FlexVol volumes ...... 129
    How auditing works ...... 129
    Basic auditing concepts ...... 129
    How the Data ONTAP auditing process works ...... 130
    Aggregate space considerations when enabling auditing ...... 132
    Auditing requirements and considerations ...... 132
    What the supported audit event log formats are ...... 133
    Viewing audit event logs ...... 133
    How active audit logs are viewed using Event Viewer ...... 134
    SMB events that can be audited ...... 134
    Determining what the complete path to the audited object is ...... 136
    Considerations when auditing symlinks and hard links ...... 136
    Considerations when auditing alternate NTFS data streams ...... 137
    NFS file and directory access events that can be audited ...... 138
    Planning the auditing configuration ...... 139
    Creating a file and directory auditing configuration on SVMs ...... 143
    Creating the auditing configuration ...... 143
    Enabling auditing on the SVM ...... 145
    Verifying the auditing configuration ...... 145
    Configuring file and folder audit policies ...... 146
    Configuring audit policies on NTFS security-style files and directories ...... 146
    Configuring auditing for UNIX security style files and directories ...... 150
    Displaying information about audit policies applied to files and directories ...... 150
    Displaying information about audit policies using the Windows Security tab ...... 150
    Displaying information about NTFS audit policies on FlexVol volumes using the CLI ...... 151
    Managing auditing configurations ...... 153
    Manually rotating the audit event logs ...... 154
    Enabling and disabling auditing on SVMs ...... 154
    Displaying information about auditing configurations ...... 155
    Commands for modifying auditing configurations ...... 156
    Deleting an auditing configuration ...... 157
    What the process is when reverting ...... 157
    Troubleshooting auditing and staging volume space issues ...... 158
    How to troubleshoot space issues related to the event log volumes ...... 158
    How to troubleshoot space issues related to the staging volumes (cluster administrators only) ...... 158
Using FPolicy for file monitoring and management on SVMs with FlexVol volumes ...... 160
    How FPolicy works ...... 160
    What the two parts of the FPolicy solution are ...... 160
    What synchronous and asynchronous notifications are ...... 160
    Roles that cluster components play with FPolicy implementation ...... 161
    How FPolicy works with external FPolicy servers ...... 162
    What the node-to-external FPolicy server communication process is ...... 163
    How FPolicy services work across SVM namespaces ...... 165
    FPolicy configuration types ...... 165
    When to create a native FPolicy configuration ...... 166
    When to create a configuration that uses external FPolicy servers ...... 166
    How FPolicy passthrough-read enhances usability for hierarchical storage management ...... 166
    How read requests are managed when FPolicy passthrough-read is enabled ...... 167
    Requirements, considerations, and best practices for configuring FPolicy ...... 167
    Ways to configure FPolicy ...... 168
    Requirements for setting up FPolicy ...... 168
    Best practices and recommendations when setting up FPolicy ...... 168
    Passthrough-read upgrade and revert considerations ...... 169
    What the steps for setting up an FPolicy configuration are ...... 169
    Planning the FPolicy configuration ...... 170
    Planning the FPolicy external engine configuration ...... 171
    Planning the FPolicy event configuration ...... 178
    Planning the FPolicy policy configuration ...... 183
    Planning the FPolicy scope configuration ...... 188
    Creating the FPolicy configuration ...... 191
    Creating the FPolicy external engine ...... 192
    Creating the FPolicy event ...... 193
    Creating the FPolicy policy ...... 193
    Creating the FPolicy scope ...... 195
    Enabling the FPolicy policy ...... 195
    Modifying FPolicy configurations ...... 196
    Commands for modifying FPolicy configurations ...... 196
    Enabling or disabling FPolicy policies ...... 197
    Displaying information about FPolicy configurations ...... 197
    How the show commands work ...... 197
    Commands for displaying information about FPolicy configurations ...... 198
    Displaying information about FPolicy policy status ...... 198
    Displaying information about enabled FPolicy policies ...... 199
    Managing FPolicy server connections ...... 200
    Connecting to external FPolicy servers ...... 200
    Disconnecting from external FPolicy servers ...... 201
    Displaying information about connections to external FPolicy servers ...... 201
    Displaying information about the FPolicy passthrough-read connection status ...... 203
Glossary ...... 205
Copyright information ...... 214
Trademark information ...... 215
How to send comments about documentation and receive update notifications ...... 216
Index ...... 217

Considerations before configuring file access

Data ONTAP allows you to manage access to files by clients using different protocols. There are certain concepts you should be familiar with before configuring file access.

File protocols that Data ONTAP supports Data ONTAP supports file access using the NFS and CIFS protocols. This means clients can access all files on Storage Virtual Machines (SVMs) regardless of what protocol they are connecting with or what type of authentication they require.

Related tasks Modifying protocols for SVMs on page 31

How Data ONTAP controls access to files Data ONTAP controls access to files according to the authentication-based and file-based restrictions that you specify. When a client connects to the storage system to access files, Data ONTAP has to perform two tasks:
• Authentication: Data ONTAP has to authenticate the client by verifying its identity with a trusted source. In addition, the authentication type of the client is one method that can be used to determine whether a client can access data when configuring export policies (optional for CIFS).
• Authorization: Data ONTAP has to authorize the user by comparing the user's credentials with the permissions configured on the file or directory and determining what type of access, if any, to provide.
To properly manage file access, Data ONTAP must communicate with external services such as NIS, LDAP, and Active Directory servers. Configuring a storage system for file access using CIFS or NFS requires setting up the appropriate services depending on your environment in Data ONTAP.

Related concepts How Data ONTAP uses name services on page 26

Authentication-based restrictions With authentication-based restrictions, you can specify which client machines and which users can connect to the Storage Virtual Machine (SVM). Data ONTAP supports Kerberos authentication from both UNIX and Windows servers.
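For NFS clients, these authentication-based restrictions are typically expressed through export policy rules, which are covered in "Securing NFS access using export policies" later in this guide. The following is a minimal sketch only; the SVM name, policy name, and client subnet are hypothetical, and you should confirm the parameters against the vserver export-policy rule create man page for your release:

cluster1::> vserver export-policy rule create -vserver vs1 -policyname restricted -clientmatch 192.168.1.0/24 -protocol nfs3 -rorule krb5 -rwrule krb5 -superuser none

A rule like this one limits access to clients in the 192.168.1.0/24 subnet and requires Kerberos v5 authentication for both read-only and read-write access.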

Related concepts Using Kerberos with NFS for strong security on page 50

File-based restrictions With file-based restrictions, you can specify which users can access which files. When a user creates a file, Data ONTAP generates a list of access permissions for the file. Although the form of the permissions list varies with each protocol, it always includes common permissions, such as reading and writing permissions. When a user tries to access a file, Data ONTAP uses the permissions list to determine whether to grant access. Data ONTAP grants or denies access according to the operation that the user is performing, such as reading or writing, and the following factors:
• User account
• User groups or netgroups
• Client protocol
• File type

LIF configuration requirements for file access management To properly manage file access control, Data ONTAP must communicate with external services such as NIS, LDAP, and Active Directory servers. The Storage Virtual Machine (SVM) LIFs must be properly configured to allow these communications. The communication with external services happens over the data LIF of the SVM. Therefore, you must ensure that the SVM has a data LIF properly configured to reach all required external services.
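As a quick check before you continue, you can confirm that the SVM has at least one data LIF and review the DNS configuration that the SVM uses to reach external servers. The SVM name below is hypothetical; adjust the commands for your environment:

cluster1::> network interface show -vserver vs1 -role data
cluster1::> vserver services name-service dns show -vserver vs1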

Related information Clustered Data ONTAP 8.3 Network Management Guide

How namespaces and volume junctions affect file access on SVMs with FlexVol volumes You must understand what namespaces and volume junctions are and how they work to correctly configure file access on Storage Virtual Machines (SVMs) in your storage environment.

Related concepts Creating and managing data volumes in NAS namespaces on page 15

What namespaces in SVMs with FlexVol volumes are A namespace is a logical grouping of volumes that are joined together at junction points to create a single, logical file system that derives from the Storage Virtual Machine (SVM) root volume. Each SVM has a namespace. CIFS and NFS servers on a data SVM can store and access data across the namespace. Each client can access the entire namespace by mounting an export or accessing a single SMB share at the top of the namespace. Alternatively, SVM administrators can create exports at each volume junction so that clients can create mount points at intermediate locations in the namespace, or they can create SMB shares that point to any directory path in the namespace.

Volumes can be added at any time by mounting them to any location in the namespace. Clients can immediately access the newly added volume, provided that the volume junction is under the point at which they are accessing the namespace and provided that they have sufficient permissions.

Volume junction usage rules Volume junctions are a way to join individual volumes together into a single, logical namespace to enable data access to NAS clients. Understanding how volume junctions are formed helps you to interpret and apply the usage rules. When NAS clients access data by traversing a junction, the junction appears to be an ordinary directory. A junction is formed when a volume is mounted to a mount point below the root and is used to create a file-system tree. The top of a file-system tree is always the root volume, which is represented by a slash (/). A junction leads from a directory in one volume to the root directory of another volume.
• Although specifying a junction point is optional when a volume is created, data in the volume cannot be exported (NFS) and a share cannot be created (CIFS) until the volume is mounted to a junction point in the namespace.
• A volume that was not mounted during volume creation can be mounted post-creation.
• New volumes can be added to the namespace at any time by mounting them to a junction point.
• Mounted volumes can be unmounted; however, unmounting a volume disrupts NAS client access to all data in the volume and to all volumes mounted at child junction points beneath the unmounted volume.
• Junction points can be created directly below a parent volume junction, or they can be created on a directory within a volume. For example, a path to a volume junction for a volume named “vol3” might be /vol1/vol2/vol3, or it might be /vol1/dir2/vol3, or even /dir1/dir2/vol3.

How volume junctions are used in SMB and NFS namespaces You can mount volumes at junction points anywhere within the namespace to create a single, logical namespace. If you specify a junction point when the volume is created, the volume is automatically mounted at the time the volume is created and is available for NAS access. You can create SMB shares and NFS exports on the mounted volume. If you do not specify a junction point, the volume is online but is not mounted for NAS file access. You must mount a volume to a junction point before it can be used for NAS file access.

What the typical NAS namespace architectures are All Storage Virtual Machine (SVM) namespaces derive from the root volume; however, there are several typical NAS namespace architectures that you can use as you create your SVM namespace. You can choose the namespace architecture that matches your business and workflow needs. The top of the namespace is always the root volume, which is represented by a slash (/). The namespace architecture under the root falls into three basic categories:
• A single branched tree, with only a single junction to the root of the namespace
• Multiple branched trees, with multiple junction points to the root of the namespace
• Multiple stand-alone volumes, each with a separate junction point to the root of the namespace

Namespace with single branched tree An architecture with a single branched tree has a single insertion point to the root of the SVM namespace. The single insertion point can be either a junctioned volume or a directory beneath the root. All other volumes are mounted at junction points beneath the single insertion point (which can be a volume or a directory).

[Figure: namespace with a single branched tree. Volume A is junctioned to the SVM root (/); volumes A1 through A5 are junctioned beneath A, with additional volumes (A41, A42, A51, A52, and A53) junctioned beneath A4 and A5.]

For example, a typical volume junction configuration with the above namespace architecture might look like the following configuration, where all volumes are junctioned below the single insertion point, which is a directory named “data”:

                         Junction                        Junction
Vserver   Volume         Active    Junction Path         Path Source
--------- -------------- --------- --------------------- -----------
vs1       corp1          true      /data/dir1/corp1      RW_volume
vs1       corp2          true      /data/dir1/corp2      RW_volume
vs1       data1          true      /data/data1           RW_volume
vs1       eng1           true      /data/data1/eng1      RW_volume
vs1       eng2           true      /data/data1/eng2      RW_volume
vs1       sales          true      /data/data1/sales     RW_volume
vs1       vol1           true      /data/vol1            RW_volume
vs1       vol2           true      /data/vol2            RW_volume
vs1       vol3           true      /data/vol3            RW_volume
vs1       vs1_root       -         /                     -

Namespace with multiple branched trees An architecture with multiple branched trees has multiple insertion points to the root of the SVM namespace. The insertion points can be either junctioned volumes or directories beneath the root. All other volumes are mounted at junction points beneath the insertion points (which can be volumes or directories).

[Figure: namespace with multiple branched trees. Volumes A, B, and C are each junctioned to the SVM root (/); volumes A1, A2, and A3 are junctioned beneath A, volumes B1 and B2 beneath B, and volumes C1, C2, and C3 beneath C.]

For example, a typical volume junction configuration with the above namespace architecture might look like the following configuration, where there are three insertion points to the root volume of the SVM. Two insertion points are directories named “data” and “projects”. One insertion point is a junctioned volume named “audit”:

                         Junction                        Junction
Vserver   Volume         Active    Junction Path         Path Source
--------- -------------- --------- --------------------- -----------
vs1       audit          true      /audit                RW_volume
vs1       audit_logs1    true      /audit/logs1          RW_volume
vs1       audit_logs2    true      /audit/logs2          RW_volume
vs1       audit_logs3    true      /audit/logs3          RW_volume
vs1       eng            true      /data/eng             RW_volume
vs1       mktg1          true      /data/mktg1           RW_volume
vs1       mktg2          true      /data/mktg2           RW_volume
vs1       project1       true      /projects/project1    RW_volume
vs1       project2       true      /projects/project2    RW_volume
vs1       vs1_root       -         /                     -

Namespace with multiple stand-alone volumes In an architecture with stand-alone volumes, every volume has an insertion point to the root of the SVM namespace; however, the volume is not junctioned below another volume. Each volume has a unique path, and is either junctioned directly below the root or is junctioned under a directory below the root.

[Figure: namespace with multiple stand-alone volumes. Volumes A, B, C, D, and E are each junctioned separately to the SVM root (/).]

For example, a typical volume junction configuration with the above namespace architecture might look like the following configuration, where there are five insertion points to the root volume of the SVM, with each insertion point representing a path to one volume.

                         Junction                        Junction
Vserver   Volume         Active    Junction Path         Path Source
--------- -------------- --------- --------------------- -----------
vs1       eng            true      /eng                  RW_volume
vs1       mktg           true      /vol/mktg             RW_volume
vs1       project1       true      /project1             RW_volume
vs1       project2       true      /project2             RW_volume
vs1       sales          true      /sales                RW_volume
vs1       vs1_root       -         /                     -

Creating and managing data volumes in NAS namespaces To manage file access in a NAS environment, you must manage data volumes and junction points on your Storage Virtual Machine (SVM) with FlexVol volumes. This includes planning your namespace architecture, creating volumes with or without junction points, mounting or unmounting volumes, and displaying information about data volumes and NFS server or CIFS server namespaces.

Related concepts How namespaces and volume junctions affect file access on SVMs with FlexVol volumes on page 11

Creating data volumes with specified junction points You can specify the junction point when you create a data volume. The resultant volume is automatically mounted at the junction point and is immediately available to configure for NAS access.

Before you begin The aggregate in which you want to create the volume must already exist.

Steps
1. Create the volume with a junction point:
volume create -vserver vserver_name -volume volume_name -aggregate aggregate_name -size {integer[KB|MB|GB|TB|PB]} -security-style {ntfs|unix|mixed} -junction-path junction_path
The junction path must start with the root (/) and can contain both directories and junctioned volumes. The junction path does not need to contain the name of the volume. Junction paths are independent of the volume name.
Specifying a volume security style is optional. If you do not specify a security style, Data ONTAP creates the volume with the same security style that is applied to the root volume of the Storage Virtual Machine (SVM). However, the root volume's security style might not be the security style you want applied to the data volume you create. The recommendation is to specify the security style when you create the volume to minimize difficult-to-troubleshoot file-access issues.
The junction path is case insensitive; /ENG is the same as /eng. If you create a CIFS share, Windows treats the junction path as if it is case sensitive. For example, if the junction is /ENG, the path of a CIFS share must start with /ENG, not /eng.
There are many optional parameters that you can use to customize a data volume. To learn more about them, see the man pages for the volume create command.

2. Verify that the volume was created with the desired junction point: volume show -vserver vserver_name -volume volume_name -junction

Example The following example creates a volume named “home4” located on SVM vs1 that has a junction path /eng/home:

cluster1::> volume create -vserver vs1 -volume home4 -aggregate aggr1 -size 1g -junction-path /eng/home
[Job 1642] Job succeeded: Successful

cluster1::> volume show -vserver vs1 -volume home4 -junction
                         Junction                 Junction
Vserver   Volume         Active    Junction Path  Path Source
--------- -------------- --------- -------------- -----------
vs1       home4          true      /eng/home      RW_volume

Creating data volumes without specifying junction points You can create a data volume without specifying a junction point. The resultant volume is not automatically mounted, and is not available to configure for NAS access. You must mount the volume before you can configure SMB shares or NFS exports for that volume.

Before you begin The aggregate in which you want to create the volume must already exist.

Steps
1. Create the volume without a junction point by using the following command:
volume create -vserver vserver_name -volume volume_name -aggregate aggregate_name -size {integer[KB|MB|GB|TB|PB]} -security-style {ntfs|unix|mixed}
Specifying a volume security style is optional. If you do not specify a security style, Data ONTAP creates the volume with the same security style that is applied to the root volume of the Storage Virtual Machine (SVM). However, the root volume's security style might not be the security style you want applied to the data volume. The recommendation is to specify the security style when you create the volume to minimize difficult-to-troubleshoot file-access issues.
There are many optional parameters that you can use to customize a data volume. To learn more about them, see the man pages for the volume create command.

2. Verify that the volume was created without a junction point: volume show -vserver vserver_name -volume volume_name -junction

Example The following example creates a volume named “sales” located on SVM vs1 that is not mounted at a junction point:

cluster1::> volume create -vserver vs1 -volume sales -aggregate aggr3 -size 20GB
[Job 3406] Job succeeded: Successful

cluster1::> volume show -vserver vs1 -junction
                         Junction                 Junction
Vserver   Volume         Active    Junction Path  Path Source
--------- -------------- --------- -------------- -----------
vs1       data           true      /data          RW_volume
vs1       home4          true      /eng/home      RW_volume
vs1       vs1_root       -         /              -
vs1       sales          -         -              -

Mounting or unmounting existing volumes in the NAS namespace A volume must be mounted on the NAS namespace before you can configure NAS client access to data contained in the Storage Virtual Machine (SVM) volumes. You can mount a volume to a junction point if it is not currently mounted. You can also unmount volumes.

About this task If you unmount a volume, all data within the junction point, including data in volumes with junction points contained within the unmounted volume's namespace, are inaccessible to NAS clients. When you unmount a volume, data within the volume is not lost. Additionally, existing volume export policies and SMB shares created on the volume or on directories and junction points within the unmounted volume are retained. If you remount the unmounted volume, NAS clients can access the data contained within the volume using existing export policies and SMB shares.

Steps 1. Perform the desired action:

If you want to mount a volume, enter the command:
volume mount -vserver vserver_name -volume volume_name -junction-path junction_path

If you want to unmount a volume, enter the command:
volume unmount -vserver vserver_name -volume volume_name

2. Verify that the volume is in the desired mount state: volume show -vserver vserver_name -volume volume_name -junction

Examples The following example mounts a volume named “sales” located on SVM vs1 to the junction point /sales:

cluster1::> volume mount -vserver vs1 -volume sales -junction-path /sales

cluster1::> volume show -vserver vs1 -junction
                         Junction                 Junction
Vserver   Volume         Active    Junction Path  Path Source
--------- -------------- --------- -------------- -----------
vs1       data           true      /data          RW_volume
vs1       home4          true      /eng/home      RW_volume
vs1       vs1_root       -         /              -
vs1       sales          true      /sales         RW_volume

The following example unmounts a volume named “data” located on SVM vs1:

cluster1::> volume unmount -vserver vs1 -volume data

cluster1::> volume show -vserver vs1 -junction
                         Junction                 Junction
Vserver   Volume         Active    Junction Path  Path Source
--------- -------------- --------- -------------- -----------
vs1       data           -         -              -
vs1       home4          true      /eng/home      RW_volume
vs1       vs1_root       -         /              -
vs1       sales          true      /sales         RW_volume

Displaying volume mount and junction point information You can display information about mounted volumes for Storage Virtual Machines (SVMs) and the junction points to which the volumes are mounted. You can also determine which volumes are not mounted to a junction point. You can use this information to understand and manage your SVM namespace.

Step 1. Perform the desired action:

If you want to display summary information about mounted and unmounted volumes on the SVM, enter the command:
volume show -vserver vserver_name -junction

If you want to display detailed information about mounted and unmounted volumes on the SVM, enter the command:
volume show -vserver vserver_name -volume volume_name -instance

If you want to display specific information about mounted and unmounted volumes on the SVM:
a. If necessary, you can display valid fields for the -fields parameter by using the following command:
volume show -fields ?
b. Display the desired information by using the -fields parameter:
volume show -vserver vserver_name -fields fieldname,...

Examples The following example displays a summary of mounted and unmounted volumes on SVM vs1:

cluster1::> volume show -vserver vs1 -junction
                         Junction                 Junction
Vserver   Volume         Active    Junction Path  Path Source
--------- -------------- --------- -------------- -----------
vs1       data           true      /data          RW_volume
vs1       home4          true      /eng/home      RW_volume
vs1       vs1_root       -         /              -
vs1       sales          true      /sales         RW_volume

The following example displays information about specified fields for volumes located on SVM vs2:

cluster1::> volume show -vserver vs2 -fields vserver,volume,aggregate,size,state,type,security-style,junction-path,junction-parent,node
vserver volume   aggregate size state  type security-style junction-path junction-parent node
------- -------- --------- ---- ------ ---- -------------- ------------- --------------- -----
vs2     data1    aggr3     2GB  online RW   unix           -             -               node3
vs2     data2    aggr3     1GB  online RW   ntfs           /data2        vs2_root        node3
vs2     data2_1  aggr3     8GB  online RW   ntfs           /data2/d2_1   data2           node3
vs2     data2_2  aggr3     8GB  online RW   ntfs           /data2/d2_2   data2           node3
vs2     pubs     aggr1     1GB  online RW   unix           /publications vs2_root        node1
vs2     images   aggr3     2TB  online RW   ntfs           /images       vs2_root        node3
vs2     logs     aggr1     1GB  online RW   unix           /logs         vs2_root        node1
vs2     vs2_root aggr3     1GB  online RW   ntfs           /             -               node3

How security styles affect data access Each volume and qtree on the storage system has a security style. The security style determines what type of permissions are used for data on volumes when authorizing users. You must understand what the different security styles are, when and where they are set, how they impact permissions, how they differ between volume types, and more.

What the security styles and their effects are There are four different security styles: UNIX, NTFS, mixed, and unified. Each security style has a different effect on how permissions are handled for data. You must understand the different effects to ensure that you select the appropriate security style for your purposes. It is important to understand that security styles do not determine what client types can or cannot access data. Security styles only determine the type of permissions Data ONTAP uses to control data access and what client type can modify these permissions. For example, if a volume uses UNIX security style, SMB clients can still access data (provided that they are properly authenticated and authorized) due to the multiprotocol nature of Data ONTAP. However, Data ONTAP uses UNIX permissions that only UNIX clients can modify using native tools.

UNIX security style:
• Clients that can modify permissions: NFS
• Permissions that clients can use: NFSv3 mode bits, NFSv4.x ACLs
• Resulting effective security style: UNIX
• Clients that can access files: NFS and SMB

NTFS security style:
• Clients that can modify permissions: SMB
• Permissions that clients can use: NTFS ACLs
• Resulting effective security style: NTFS
• Clients that can access files: NFS and SMB

Mixed security style:
• Clients that can modify permissions: NFS or SMB
• Permissions that clients can use: NFSv3 mode bits or NFSv4.x ACLs (resulting effective security style is UNIX), or NTFS ACLs (resulting effective security style is NTFS)
• Clients that can access files: NFS and SMB

Unified security style (only for Infinite Volumes):
• Clients that can modify permissions: NFS or SMB
• Permissions that clients can use: NFSv3 mode bits or NFSv4.1 ACLs (resulting effective security style is UNIX), or NTFS ACLs (resulting effective security style is NTFS)
• Clients that can access files: NFS and SMB

When the security style is mixed or unified, the effective permissions depend on the client type that last modified the permissions because users set the security style on an individual basis. If the last client that modified permissions was an NFSv3 client, the permissions are UNIX NFSv3 mode bits. If the last client was an NFSv4 client, the permissions are NFSv4 ACLs. If the last client was an SMB client, the permissions are Windows NTFS ACLs. Note: Data ONTAP initially sets some default file permissions. By default, the effective security style on all data in UNIX, mixed, and unified security style volumes is UNIX and the effective permissions type is UNIX mode bits (0755 unless specified otherwise) until configured by a client as allowed by the default security style. By default, the effective security style on all data in NTFS security style volumes is NTFS and has an ACL allowing full control to everyone.
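If you are unsure which permissions are in effect on a particular file or directory, you can display its security information from the cluster shell. This is only an illustration; the SVM name and path are placeholders:

cluster1::> vserver security file-directory show -vserver vs1 -path /data/engineering

The output shows the security style of the containing volume or qtree, the effective security style of the object, and the UNIX mode bits or ACLs that are currently applied.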

Related concepts Managing NFSv4 ACLs on page 114

Related information Clustered Data ONTAP 8.3 Infinite Volumes Management Guide

Where and when to set security styles Security styles can be set on FlexVol volumes (both root and data volumes) and qtrees. Security styles can be set manually at the time of creation, inherited automatically, or changed at a later time. Note: Infinite Volumes always use the unified security style. You cannot configure or change the security style of an Infinite Volume.

Related concepts Configuring security styles on page 22

How to decide on what security style to use on SVMs with FlexVol volumes To help you decide what security style to use on a volume, you should consider two factors. The primary factor is the type of administrator that manages the file system. The secondary factor is the type of user or service that accesses the data on the volume. When you configure the security style on a volume, you should consider the needs of your environment to ensure that you select the best security style and avoid issues with managing permissions. The following considerations can help you decide:

Choose the UNIX security style if:
• The file system is managed by a UNIX administrator.
• The majority of users are NFS clients.
• An application accessing the data uses a UNIX user as the service account.

Choose the NTFS security style if:
• The file system is managed by a Windows administrator.
• The majority of users are SMB clients.
• An application accessing the data uses a Windows user as the service account.

Choose the Mixed security style if the file system is managed by both UNIX and Windows administrators and users consist of both NFS and SMB clients.

How security style inheritance works If you do not specify the security style when creating a new FlexVol volume or qtree, it inherits its security style. Security styles are inherited in the following manner: • A FlexVol volume inherits the security style of the root volume of its containing Storage Virtual Machine (SVM). • A qtree inherits the security style of its containing FlexVol volume. • A file or directory inherits the security style of its containing FlexVol volume or qtree. Infinite Volumes cannot inherit security styles. All files and directories in Infinite Volumes always use the unified security style. The security style of an Infinite Volume and the files and directories it contains cannot be changed.
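One simple way to see inheritance in action is to create a volume without the -security-style parameter and then display the security style it received. The SVM, volume, and aggregate names below are hypothetical:

cluster1::> volume create -vserver vs1 -volume data5 -aggregate aggr1 -size 10GB -junction-path /data5
cluster1::> volume show -vserver vs1 -volume data5 -fields security-style

If the root volume of vs1 uses NTFS security style, data5 is created with NTFS security style as well.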

How Data ONTAP preserves UNIX permissions When files in a FlexVol volume that currently have UNIX permissions are edited and saved by Windows applications, Data ONTAP can preserve the UNIX permissions. When applications on Windows clients edit and save files, they read the security properties of the file, create a new temporary file, apply those properties to the temporary file, and then give the temporary file the original file name. When Windows clients perform a query for the security properties, they receive a constructed ACL that exactly represents the UNIX permissions. The sole purpose of this constructed ACL is to preserve the file's UNIX permissions as files are updated by Windows applications to ensure that the resulting files have the same UNIX permissions. Data ONTAP does not set any NTFS ACLs using the constructed ACL.

How to manage UNIX permissions using the Windows Security tab If you want to manipulate UNIX permissions of files or folders in mixed security-style volumes or qtrees on Storage Virtual Machines (SVMs) with FlexVol volumes, you can use the Security tab on Windows clients. Alternatively, you can use applications that can query and set Windows ACLs. • Modifying UNIX permissions You can use the Windows Security tab to view and change UNIX permissions for a mixed security-style volume or qtree. If you use the main Windows Security tab to change UNIX

permissions, you must first remove the existing ACE you want to edit (this sets the mode bits to 0) before you make your changes. Alternatively, you can use the Advanced editor to change permissions. If mode permissions are used, you can directly change the mode permissions for the listed UID, GID, and others (everyone else with an account on the computer). For example, if the displayed UID has r-x permissions, you can change the UID permissions to rwx. • Changing UNIX permissions to NTFS permissions You can use the Windows Security tab to replace UNIX security objects with Windows security objects on a mixed security-style volume or qtree where the files and folders have a UNIX effective security style. You must first remove all listed UNIX permission entries before you can replace them with the desired Windows User and Group objects. You can then configure NTFS-based ACLs on the Windows User and Group objects. By removing all UNIX security objects and adding only Windows Users and Groups to a file or folder in a mixed security-style volume or qtree, you change the effective security style on the file or folder from UNIX to NTFS. When changing permissions on a folder, the default Windows behavior is to propagate these changes to all subfolders and files. Therefore, you must change the propagation choice to the desired setting if you do not want to propagate a change in security style to all child folders, subfolders, and files.

Configuring security styles You configure security styles on FlexVol volumes and qtrees to determine the type of permissions Data ONTAP uses to control access and what client type can modify these permissions. For information about the security style of Infinite Volumes, see the Clustered Data ONTAP Infinite Volumes Management Guide.

Related concepts How security styles affect data access on page 19

Configuring security styles on SVM root volumes You configure the Storage Virtual Machine (SVM) root volume security style to determine the type of permissions used for data on the root volume of the SVM.

Steps

1. Use the vserver create command with the -rootvolume-security-style parameter to define the security style. The possible options for the root volume security style are unix, ntfs, or mixed. You cannot use unified security style because it only applies to Infinite Volumes. For more information about the vserver create command, see the Clustered Data ONTAP System Administration Guide for Cluster Administrators.

2. Display and verify the configuration, including the root volume security style of the SVM you created: vserver show -vserver vserver_name
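The following sketch shows both steps with hypothetical SVM, root volume, and aggregate names; additional parameters (for example, -language) might be required in your environment:

cluster1::> vserver create -vserver vs2 -rootvolume vs2_root -aggregate aggr1 -rootvolume-security-style unix
cluster1::> vserver show -vserver vs2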

Configuring security styles on FlexVol volumes You configure the FlexVol volume security style to determine the type of permissions used for data on FlexVol volumes of the Storage Virtual Machine (SVM).

Steps 1. Perform one of the following actions:

If the FlexVol volume does not yet exist, use the volume create command and include the -security-style parameter to specify the security style.
If the FlexVol volume already exists, use the volume modify command and include the -security-style parameter to specify the security style.

The possible options for the FlexVol volume security style are unix, ntfs, or mixed. You cannot use unified security style because it only applies to Infinite Volumes. If you do not specify a security style when creating a FlexVol volume, the volume inherits the security style of the root volume. For more information about the volume create or volume modify commands, see the Clustered Data ONTAP Logical Storage Management Guide.

2. To display the configuration, including the security style of the FlexVol volume you created, enter the following command: volume show -volume volume_name -instance
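For example, to change an existing volume to NTFS security style and confirm the change (the SVM and volume names are placeholders):

cluster1::> volume modify -vserver vs1 -volume data1 -security-style ntfs
cluster1::> volume show -vserver vs1 -volume data1 -fields security-style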

Configuring security styles on qtrees You configure the qtree volume security style to determine the type of permissions used for data on qtrees.

Steps 1. Perform one of the following actions:

If the qtree does not exist yet, use the volume qtree create command and include the -security-style parameter to specify the security style.
If the qtree already exists, use the volume qtree modify command and include the -security-style parameter to specify the security style.

The possible options for the qtree security style are unix, ntfs, or mixed. You cannot use unified security style because it only applies to Infinite Volumes. If you do not specify a security style when creating a qtree, the default security style is mixed. For more information about the volume qtree create or volume qtree modify commands, see the Clustered Data ONTAP Logical Storage Management Guide.

2. To display the configuration, including the security style of the qtree you created, enter the following command: volume qtree show -qtree qtree_name -instance
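For example, to create a qtree that uses UNIX security style and verify the setting (the SVM, volume, and qtree names are placeholders):

cluster1::> volume qtree create -vserver vs1 -volume data1 -qtree eng_qt -security-style unix
cluster1::> volume qtree show -vserver vs1 -qtree eng_qt -instance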

NFS and CIFS file naming dependencies File naming conventions depend on both the network clients’ operating systems and the file-sharing protocols. The operating system and the file-sharing protocols determine the following: • Characters a file name can use • Case-sensitivity of a file name

Characters a file name can use If you are sharing a file between clients on different operating systems, you should use characters that are valid in both operating systems. For example, if you use UNIX to create a file, do not use a colon (:) in the file name because the colon is not allowed in MS-DOS file names. Because restrictions on valid characters vary from one operating system to another, see the documentation for your client operating system for more information about prohibited characters.

Case-sensitivity of a file name File names are case-sensitive for NFS clients and case-insensitive but case-preserving for CIFS clients. For example, if a CIFS client creates Spec.txt, both CIFS and NFS clients display the file name as Spec.txt. However, if a CIFS user later tries to create spec.txt, the name is not allowed because, to the CIFS client, that name currently exists. If an NFS user later creates a file named spec.txt, NFS and CIFS clients display the file name differently, as follows:

• On NFS clients, you see both file names as they were created, Spec.txt and spec.txt, because file names are case-sensitive.

• On CIFS clients, you see Spec.txt and Spec~1.txt. Data ONTAP creates the Spec~1.txt file name to differentiate the two files.

How Data ONTAP creates file names Data ONTAP creates and maintains two file names for files in any directory that has access from a CIFS client: the original long name and a file name in 8.3 format. For file names that exceed the eight character name or the three character extension limit, Data ONTAP generates an 8.3-format file name as follows:

• It truncates the original file name to six characters, if the file name exceeds six characters.
• It appends a tilde (~) and a number, one through five, to file names that are no longer unique after being truncated. If it runs out of numbers because there are more than five similar names, it creates a unique file name that bears no relation to the original file name.
• It truncates the file name extension to three characters.

For example, if an NFS client creates a file named specifications.html, the 8.3 format file name created by Data ONTAP is specif~1.htm. If this name already exists, Data ONTAP uses a different number at the end of the file name. For example, if an NFS client then creates another file named specifications_new.html, the 8.3 format of specifications_new.html is specif~2.htm.

How Data ONTAP handles file names containing UTF-16 supplementary characters If your environment uses file names containing UTF-16 supplementary characters, you must understand how Data ONTAP handles such file names to avoid errors when naming files on the storage system. Unicode character data is typically represented in Windows applications using the 16-bit Unicode Transformation Format (UTF-16). Characters in the basic multilingual plane (BMP) of UTF-16 are represented as single 16-bit code units. Characters in the additional 16 supplementary planes are represented as pairs of 16-bit code units that are referred to as surrogate pairs. When you create file names on the storage system that contain valid or invalid supplementary characters, Data ONTAP rejects the file name and returns an invalid file name error. To avoid this issue, use only BMP characters in file names and avoid using supplementary characters.

Use of hard mounts When troubleshooting mounting problems, you need to be sure that you are using the correct mount type. NFS supports two mount types: soft mounts and hard mounts. You should use only hard mounts for reliability reasons. You should not use soft mounts, especially when there is a possibility of frequent NFS timeouts. Race conditions can occur as a result of these timeouts, which can lead to data corruption.
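On a Linux client, for example, a hard mount of an SVM export might look like the following. The LIF IP address, junction path, and mount point are placeholders, and mount option names can vary between client operating systems, so check your client documentation:

# mount -t nfs -o hard,vers=3,proto=tcp 192.0.2.20:/eng/home /mnt/eng/home

The hard option (the default on most Linux clients) causes the client to retry NFS requests indefinitely rather than returning an error after a timeout, which avoids the data-corruption risk associated with soft mounts.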

How Data ONTAP supports file access using NFS

You can export and unexport volumes or qtrees on your storage system, making them available or unavailable, respectively, for mounting by NFS clients.

How Data ONTAP handles NFS client authentication NFS clients must be properly authenticated before they can access data on the Storage Virtual Machine (SVM). Data ONTAP authenticates the clients by checking their UNIX credentials against name services you configure. When an NFS client connects to the SVM, Data ONTAP obtains the UNIX credentials for the user by checking different name services, depending on the name services configuration of the SVM. Data ONTAP can check credentials for local UNIX accounts, NIS domains, and LDAP domains. At least one of them must be configured so that Data ONTAP can successfully authenticate the user. You can specify multiple name services and the order in which Data ONTAP searches them. In a pure NFS environment with UNIX volume security styles, this configuration is sufficient to authenticate and provide the proper file access for a user connecting from an NFS client. If you are using mixed, NTFS, or unified volume security styles, Data ONTAP must obtain a CIFS user name for the UNIX user for authentication with a domain controller. This can happen either by mapping individual users using local UNIX accounts or LDAP domains, or by using a default CIFS user instead. You can specify which name services Data ONTAP searches in which order, or specify a default CIFS user.
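For example, if local UNIX accounts are one of your configured name services, you can create and verify them from the cluster shell. The user name, UID, and GID shown here are only an illustration; confirm the exact parameters against the man pages for your release:

cluster1::> vserver services name-service unix-user create -vserver vs1 -user john -id 1001 -primary-gid 100
cluster1::> vserver services name-service unix-user show -vserver vs1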

Related concepts How security styles affect data access on page 19

How Data ONTAP uses name services Data ONTAP uses name services to obtain information about users and clients. Data ONTAP uses this information to authenticate users accessing data on or administering the storage system, and to map user credentials in a mixed environment. When you configure the storage system, you must specify what name services you want Data ONTAP to use for obtaining user credentials for authentication. Data ONTAP supports the following name services: • Local users (file)

• External NIS domains (NIS)
• External LDAP domains (LDAP)

You use the vserver services name-service ns-switch command family to configure Storage Virtual Machines (SVMs) with the sources to search for network information and the order in which to search them. These commands provide the equivalent functionality of the /etc/nsswitch.conf file on UNIX systems. When an NFS client connects to the SVM, Data ONTAP checks the specified name services to obtain the UNIX credentials for the user. If name services are configured correctly and Data ONTAP can obtain the UNIX credentials, Data ONTAP successfully authenticates the user.
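For example, to have Data ONTAP look up UNIX users in local files first and then in LDAP, and to verify the resulting configuration, you might use commands like the following (the SVM name is hypothetical, and you would use create instead of modify if no entry exists yet for the database):

cluster1::> vserver services name-service ns-switch modify -vserver vs1 -database passwd -sources files,ldap
cluster1::> vserver services name-service ns-switch show -vserver vs1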

In an environment with mixed security styles, Data ONTAP might have to map user credentials. You must configure name services appropriately for your environment to allow Data ONTAP to properly map user credentials. Data ONTAP also uses name services for authenticating SVM administrator accounts. You must keep this in mind when configuring or modifying the name service switch to avoid accidentally disabling authentication for SVM administrator accounts. For more information about SVM administration users, see the Clustered Data ONTAP System Administration Guide for Cluster Administrators.

Related concepts Configuring name services on page 57 Configuring local UNIX users and groups on page 71

Related tasks Configuring SVMs to use LDAP on page 69 Creating a NIS domain configuration on page 70

Related information NetApp Technical Report 4379: Name Services Best Practice Guide Clustered Data ONTAP

How Data ONTAP grants CIFS file access from NFS clients Data ONTAP uses Windows NT File System (NTFS) security semantics to determine whether a UNIX user, on an NFS client, has access to a file with NTFS permissions. Data ONTAP does this by converting the user’s UNIX User ID (UID) into a CIFS credential, and then using the CIFS credential to verify that the user has access rights to the file. A CIFS credential consists of a primary Security Identifier (SID), usually the user’s Windows user name, and one or more group SIDs that correspond to Windows groups of which the user is a member. The time Data ONTAP takes converting the UNIX UID into a CIFS credential can be from tens of milliseconds to hundreds of milliseconds because the process involves contacting a domain controller. Data ONTAP maps the UID to the CIFS credential and enters the mapping in a credential cache to reduce the verification time caused by the conversion.

Supported NFS versions and clients Before you can use NFS in your network, you need to know which NFS versions and clients Data ONTAP supports. Data ONTAP supports the following major and minor NFS protocol versions:
• NFSv3
• NFSv4.0
• NFSv4.1
• pNFS
For the latest information about which NFS clients Data ONTAP supports, see the Interoperability Matrix at mysupport.netapp.com/matrix.

NFSv4.0 functionality supported by Data ONTAP Data ONTAP supports all the mandatory functionality in NFSv4.0 except the SPKM3 and LIPKEY security mechanisms. The following NFSv4 functionality is supported:
COMPOUND: Allows a client to request multiple file operations in a single remote procedure call (RPC) request.
File delegation: Allows the server to delegate file control to some types of clients for read and write access.
Pseudo-fs: Used by NFSv4 servers to determine mount points on the storage system. There is no mount protocol in NFSv4.
Locking: Lease-based. There are no separate Network Lock Manager (NLM) or Network Status Monitor (NSM) protocols in NFSv4.
For more information about the NFSv4.0 protocol, see RFC 3530.

Related concepts Managing NFSv4 file delegations on page 117 Managing file locks on page 110

Related tasks Configuring NFSv4 file and record locking on page 119

Limitations of Data ONTAP support for NFSv4 You should be aware of several limitations of Data ONTAP support for NFSv4.
• The delegation feature is not supported by every client type.
• Names with non-ASCII characters on volumes other than UTF8 volumes are rejected by the storage system.

• All file handles are persistent; the server does not give volatile file handles.
• Migration and replication are not supported.
• NFSv4 clients are not supported with read-only load-sharing mirrors. Data ONTAP routes NFSv4 clients to the source of the load-sharing mirror for direct read and write access.
• Named attributes are not supported.
• All recommended attributes are supported, except for the following:

◦ archive

◦ hidden

◦ homogeneous

◦ mimetype

◦ quota_avail_hard

◦ quota_avail_soft

◦ quota_used

◦ system

◦ time_backup

Note: Although it does not support the quota* attributes, Data ONTAP does support user and group quotas through the RQUOTA side band protocol.

Data ONTAP support for NFSv4.1

Data ONTAP supports the NFSv4.1 protocol to allow access for NFSv4.1 clients. By default, NFSv4.1 is disabled. You can enable it by setting the -v4.1 option to enabled when creating an NFS server on the Storage Virtual Machine (SVM). Data ONTAP does not support NFSv4.1 directory and file level delegations.
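As a minimal sketch, the following command enables NFSv4.1 at NFS server creation time on a hypothetical SVM named vs1; on an existing NFS server, the same -v4.1 option can typically be changed with the vserver nfs modify command:

vs1::> vserver nfs create -vserver vs1 -v3 enabled -v4.0 enabled -v4.1 enabled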

Related tasks Enabling or disabling NFSv4.1 on page 84

Data ONTAP support for parallel NFS

Data ONTAP supports parallel NFS (pNFS). The pNFS protocol offers performance improvements by giving clients direct access to the data of a set of files distributed across multiple nodes of a cluster. It helps clients locate the optimal path to a volume.
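As a minimal sketch, pNFS can typically be enabled on an SVM that already has NFSv4.1 enabled by using a command similar to the following; the -v4.1-pnfs option name is an assumption, so see the task referenced below for the exact procedure:

vs1::> vserver nfs modify -vserver vs1 -v4.1-pnfs enabled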

Related tasks Enabling or disabling pNFS on page 85

Where to find information about NFS support on Infinite Volumes For information about the NFS versions and functionality that Infinite Volumes support, see the Clustered Data ONTAP Infinite Volumes Management Guide.

Process for NFS access to UNIX security style data on SVMs with FlexVol volumes

Understanding the process used for NFS access to UNIX security style data is helpful when designing a file access configuration that provides appropriate security settings.

When an NFS client connects to a storage system to access data with UNIX security style, Data ONTAP goes through the following steps:

1. Obtain the UNIX credentials for the user. Data ONTAP checks local UNIX accounts, NIS servers, and LDAP servers, depending on the Storage Virtual Machine (SVM) configuration.

2. Authorize the user. Data ONTAP checks the UNIX credentials for the user against the UNIX permissions of the data to determine what type of data access the user is allowed, if any. In this scenario, name mapping is not performed because CIFS credentials are not required.

Related concepts
How security styles affect data access on page 19
Configuring local UNIX users and groups on page 71

Process for NFS access to NTFS security style data on SVMs with FlexVol volumes

Understanding the process used for NFS access to NTFS security style data is helpful when designing a file access configuration that provides appropriate security settings.

When an NFS client connects to a storage system to access data with NTFS security style, Data ONTAP goes through the following steps:

1. Obtain the UNIX credentials for the user. Data ONTAP checks local UNIX accounts, NIS servers, and LDAP servers, depending on the Storage Virtual Machine (SVM) configuration.

2. Map the UNIX user to a CIFS name. Data ONTAP checks local name mapping rules, LDAP mapping rules, and the default CIFS user, depending on the SVM configuration.

3. Establish a connection to a Windows domain controller. Data ONTAP uses a cached connection, queries DNS servers, or uses a specified preferred domain controller.

4. Authenticate the user. Data ONTAP connects to the domain controller and performs pass-through authentication.

5. Authorize the user. Data ONTAP checks the CIFS credentials for the user against the NTFS permissions of the data to determine what type of data access the user is allowed, if any.

Related concepts
How security styles affect data access on page 19
How name mappings are used on page 76
Configuring local UNIX users and groups on page 71

Setting up file access using NFS

You must complete a number of steps to allow clients access to files on Storage Virtual Machines (SVMs) using NFS. There are some additional steps that are optional depending on the current configuration of your environment. For clients to be able to access files on SVMs using NFS, you must complete the following tasks:

1. Enable the NFS protocol on the SVM. You must configure the SVM to allow data access from clients over NFS.

2. Create an NFS server on the SVM. An NFS server is a logical entity on the SVM that enables the SVM to serve files over NFS. You must create the NFS server and specify the NFS protocol versions you want to allow.

3. Configure export policies on the SVM. You must configure export policies to make volumes and qtrees available to clients.

4. Configure the NFS server with the appropriate security and other settings depending on the network and storage environment. This step might include configuring Kerberos, LDAP, NIS, name mappings, and local users.

Note: The SVM must exist before you can set up file access using NFS. For more information about SVMs, see the Clustered Data ONTAP System Administration Guide for SVM Administrators

Modifying protocols for SVMs Before you can configure and use NFS or SMB on Storage Virtual Machines (SVMs), you must enable the protocol. This is typically done during SVM setup, but if you did not enable the protocol during setup, you can enable it later by using the vserver add-protocols command.

About this task You can also disable protocols on SVMs using the vserver remove-protocols command.

Steps 1. Check which protocols are currently enabled and disabled for the SVM: vserver show -vserver vserver_name -protocols

You can also use the vserver show-protocols command to view the currently enabled protocols on all SVMs in the cluster.

2. Perform one of the following actions:

If you want to...    Enter the command...

Enable a protocol    vserver add-protocols -vserver vserver_name -protocols protocol_name[,protocol_name,...]

Disable a protocol    vserver remove-protocols -vserver vserver_name -protocols protocol_name[,protocol_name,...]

See the man page for each command for more information.

3. Confirm that the allowed and disallowed protocols were updated correctly:

vserver show -vserver vserver_name -protocols

Example The following command displays which protocols are currently enabled and disabled on the SVM named vs1:

vs1::> vserver show -vserver vs1 -protocols
Vserver    Allowed Protocols    Disallowed Protocols
---------- -------------------- ----------------------
vs1        nfs                  cifs, fcp, iscsi, ndmp

The following command allows access over SMB by adding cifs to the list of enabled protocols on the SVM named vs1:

vs1::> vserver add-protocols -vserver vs1 -protocols cifs

Creating an NFS server The NFS server is necessary to provide NFS clients with access to the Storage Virtual Machine (SVM). You can use the vserver nfs create command to create an NFS server.

Before you begin You must have configured the SVM to allow the NFS protocol.

Step

1. Use the vserver nfs create command to create an NFS server.

Example The following command creates an NFS server on the SVM named vs1 with NFSv3 disabled, NFSv4.0 enabled, and NFSv4.0 ACLs enabled:

vs1::> vserver nfs create -vserver vs1 -v3 disabled -v4.0 enabled -v4.0-acl enabled

Related references Commands for managing NFS servers on page 92

Securing NFS access using export policies

You can use export policies to restrict NFS access to volumes or qtrees to clients that match specific parameters. For information about how export policies affect Infinite Volumes, see the Clustered Data ONTAP Infinite Volumes Management Guide.

How export policies control client access to volumes or qtrees

Export policies contain one or more export rules that process each client access request. The result of the process determines whether the client is denied or granted access and what level of access. An export policy with export rules must exist on the SVM for clients to access data.

You associate exactly one export policy with each volume or qtree to configure client access to the volume or qtree. The SVM can contain multiple export policies. This enables you to do the following for SVMs with multiple volumes or qtrees:

• Assign different export policies to each volume or qtree of the SVM for individual client access control to each volume or qtree in the SVM.
• Assign the same export policy to multiple volumes or qtrees of the SVM for identical client access control without having to create a new export policy for each volume or qtree.

If a client makes an access request that is not permitted by the applicable export policy, the request fails with a permission-denied message. If a client does not match any rule in the export policy, then access is denied. If an export policy is empty, then all access is implicitly denied. You can modify an export policy dynamically on a system running Data ONTAP.
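For example, before assigning policies you can list the export policies that already exist on an SVM. The following is a minimal sketch, assuming an SVM named vs1:

vs1::> vserver export-policy show -vserver vs1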

Default export policy for SVMs with FlexVol volumes

Each Storage Virtual Machine (SVM) with FlexVol volumes has a default export policy that contains no rules. An export policy with rules must exist before clients can access data on the SVM, and each FlexVol volume contained in the SVM must be associated with an export policy.

When you create an SVM with FlexVol volumes, the storage system automatically creates a default export policy called default for the root volume of the SVM. You must create one or more rules for the default export policy before clients can access data on the SVM. Alternatively, you can create a custom export policy with rules. You can modify and rename the default export policy, but you cannot delete the default export policy.

When you create a FlexVol volume in an SVM with FlexVol volumes, the storage system creates the volume and associates the volume with the default export policy for the root volume of the SVM. By default, each volume created in the SVM is associated with the default export policy for the root volume. You can use the default export policy for all volumes contained in the SVM, or you can create a unique export policy for each volume. You can associate multiple volumes with the same export policy.
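As a minimal sketch, you can confirm whether the default export policy already contains any rules on a hypothetical SVM named vs1 with the following command; an empty rule list means clients cannot access data yet:

vs1::> vserver export-policy rule show -vserver vs1 -policyname default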

How export rules work

Export rules are the functional elements of an export policy. Export rules match client access requests to a volume or qtree against specific parameters you configure to determine how to handle the client access requests. An export policy must contain at least one export rule to allow access to clients.

If an export policy contains more than one rule, the rules are processed in the order in which they appear in the export policy. The rule order is dictated by the rule index number. If a rule matches a client, the permissions of that rule are used and no further rules are processed. If no rules match, the client is denied access. You can reorder export rules in a policy by modifying the rule index number.

You can configure export rules to determine client access permissions using the following criteria:

• The file access protocol used by the client sending the request, for example, NFSv4 or SMB.
• A client identifier, for example, host name or IP address.
• The security type used by the client to authenticate, for example, Kerberos v5, NTLM, or AUTH_SYS.

If a rule specifies multiple criteria, and the client does not match one or more of them, the rule does not apply.

Example The export policy contains an export rule with the following parameters:

• -protocol nfs3

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule any

• -rwrule any

The client access request is sent using the NFSv3 protocol and the client has the IP address 10.1.17.37. Even though the client access protocol matches, the IP address of the client is in a different subnet from the one specified in the export rule. Therefore, client matching fails and this rule does not apply to this client.

Example The export policy contains an export rule with the following parameters:

• -protocol nfs

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule any

• -rwrule any

The client access request is sent using the NFSv4 protocol and the client has the IP address 10.1.16.54. The client access protocol matches and the IP address of the client is in the specified subnet. Therefore, client matching is successful and this rule applies to this client. The client gets read-write access regardless of its security type.

Example The export policy contains an export rule with the following parameters:

• -protocol nfs3

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule any

• -rwrule krb5,ntlm

Client #1 has the IP address 10.1.16.207, sends an access request using the NFSv3 protocol, and authenticated with Kerberos v5. Client #2 has the IP address 10.1.16.211, sends an access request using the NFSv3 protocol, and authenticated with AUTH_SYS. The client access protocol and IP address match for both clients. The read-only parameter allows read-only access to all clients regardless of the security type they authenticated with. Therefore, both clients get read-only access. However, only client #1 gets read-write access

because it used the approved security type Kerberos v5 to authenticate. Client #2 does not get read-write access.

How to handle clients with an unlisted security type

When a client presents itself with a security type that is not listed in an access parameter of an export rule, you have the choice of either denying access to the client or mapping it to the anonymous user ID instead by using the option none in the access parameter.

A client might present itself with a security type that is not listed in an access parameter because it was authenticated with a different security type or was not authenticated at all (security type AUTH_NONE). By default, the client is automatically denied access to that level. However, you can add the option none to the access parameter. As a result, clients with an unlisted security type are mapped to the anonymous user ID instead. The -anon parameter determines what user ID is assigned to those clients. The user ID specified for the -anon parameter must be a valid user that is configured with permissions you deem appropriate for the anonymous user. Valid values for the -anon parameter range from 0 to 65535.

User ID assigned to -anon    Resulting handling of client access requests

0 - 65533    The client access request is mapped to the anonymous user ID and gets access depending on the permissions configured for this user.

65534    The client access request is mapped to the user nobody and gets access depending on the permissions configured for this user. This is the default.

65535    The access request from any client is denied when mapped to this ID and the client presents itself with security type AUTH_NONE. The access request from clients with user ID 0 is denied when mapped to this ID and the client presents itself with any other security type.

When using the option none, it is important to remember that the read-only parameter is processed first. Consider the following guidelines when configuring export rules for clients with unlisted security types:

Read-only includes none    Read-write includes none    Resulting access for clients with unlisted security types

No     No     Denied
No     Yes    Denied because read-only is processed first
Yes    No     Read-only as anonymous
Yes    Yes    Read-write as anonymous

Example The export policy contains an export rule with the following parameters:

• -protocol nfs3

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule sys,none

• -rwrule any

• -anon 70

Client #1 has the IP address 10.1.16.207, sends an access request using the NFSv3 protocol, and authenticated with Kerberos v5. Client #2 has the IP address 10.1.16.211, sends an access request using the NFSv3 protocol, and authenticated with AUTH_SYS. Client #3 has the IP address 10.1.16.234, sends an access request using the NFSv3 protocol, and did not authenticate (meaning security type AUTH_NONE). The client access protocol and IP address match for all three clients. The read-only parameter allows read-only access to clients with their own user ID that authenticated with AUTH_SYS. The read-only parameter allows read-only access as the anonymous user with user ID 70 to clients that authenticated using any other security type. The read-write parameter allows read-write access to any security type, but in this case only applies to clients already filtered by the read-only rule. Therefore, clients #1 and #3 get read-write access only as the anonymous user with user ID 70. Client #2 gets read-write access with its own user ID.

Example The export policy contains an export rule with the following parameters:

• -protocol nfs3

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule sys,none

• -rwrule none

• -anon 70

Client #1 has the IP address 10.1.16.207, sends an access request using the NFSv3 protocol, and authenticated with Kerberos v5. Client #2 has the IP address 10.1.16.211, sends an access request using the NFSv3 protocol, and authenticated with AUTH_SYS. Client #3 has the IP address 10.1.16.234, sends an access request using the NFSv3 protocol, and did not authenticate (meaning security type AUTH_NONE). The client access protocol and IP address match for all three clients. The read-only parameter allows read-only access to clients with their own user ID that authenticated with AUTH_SYS. The read-only parameter allows read-only access as the anonymous user with user ID 70 to clients that authenticated using any other security type. The read-write parameter allows read-write access only as the anonymous user. Therefore, client #1 and client #3 get read-write access only as the anonymous user with user ID 70. Client #2 gets read-only access with its own user ID but is denied read-write access.

How security types determine client access levels

The security type that the client authenticated with plays a special role in export rules. You must understand how the security type determines the levels of access the client gets to a volume or qtree.

The three possible access levels are as follows:

1. Read-only

2. Read-write

3. Superuser (for clients with user ID 0)

Because the access level by security type is evaluated in this order, you must observe the following rules when constructing access level parameters in export rules:

For a client to get access level...    These access parameters must match the client's security type...

Normal user read-only    Read-only (-rorule)

Normal user read-write    Read-only (-rorule) and read-write (-rwrule)

Superuser read-only    Read-only (-rorule) and -superuser

Superuser read-write    Read-only (-rorule) and read-write (-rwrule) and -superuser

The following are valid security types for each of these three access parameters:

• any

• none

• never
  This security type is not valid for use with the -superuser parameter.

• krb5

• krb5i

• ntlm

• sys

When matching a client's security type against each of the three access parameters, there are three possible outcomes:

If the client's security type...    Then the client...

Matches one specified in the access parameter.    Gets access for that level with its own user ID.

Does not match one specified, but the access parameter includes the option none.    Gets access for that level but as the anonymous user with the user ID specified by the -anon parameter.

Does not match one specified and the access parameter does not include the option none.    Does not get any access for that level. This does not apply to the -superuser parameter because it always includes none even when not specified.

Example The export policy contains an export rule with the following parameters:

• -protocol nfs3

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule any

• -rwrule sys,krb5

• -superuser krb5

Client #1 has the IP address 10.1.16.207, has user ID 0, sends an access request using the NFSv3 protocol, and authenticated with Kerberos v5. Client #2 has the IP address 10.1.16.211, has user ID 0, sends an access request using the NFSv3 protocol, and authenticated with AUTH_SYS. Client #3 has the IP address 10.1.16.234, has user ID 0, sends an access request using the NFSv3 protocol, and did not authenticate (AUTH_NONE). The client access protocol and IP address match for all three clients. The read-only parameter allows read-only access to all clients regardless of security type. The read-write parameter allows read-write access to clients with their own user ID that authenticated with AUTH_SYS or Kerberos v5. The superuser parameter allows superuser access to clients with user ID 0 that authenticated with Kerberos v5. Therefore, client #1 gets superuser read-write access because it matches all three access parameters. Client #2 gets read-write access but not superuser access. Client #3 gets read-only access but not superuser access.

How to handle superuser access requests

When you configure export policies, you need to consider what you want to happen if the storage system receives a client access request with user ID 0, meaning as a superuser, and set up your export rules accordingly.

In the UNIX world, a user with the user ID 0 is known as the superuser, typically called root, who has unlimited access rights on a system. Using superuser privileges can be dangerous for several reasons, including breach of system and data security. By default, Data ONTAP maps clients presenting with user ID 0 to the anonymous user. However, you can specify the -superuser parameter in export rules to determine how to handle clients presenting with user ID 0 depending on their security type. The following are valid options for the -superuser parameter:

• any

• none This is the default setting if you do not specify the -superuser parameter.

• krb5

• ntlm

• sys

Clients presenting with user ID 0 are handled in one of two ways, depending on the -superuser parameter configuration:

If the -superuser parameter and the client's security type...    Then the client...

Match    Gets superuser access with user ID 0.

Do not match    Gets access as the anonymous user with the user ID specified by the -anon parameter and its assigned permissions. This is regardless of whether the read-only or read-write parameter specifies the option none.

If a client presents with user ID 0 to access a volume with NTFS security style and the -superuser parameter is set to none, Data ONTAP uses the name mapping for the anonymous user to obtain the proper credentials.

Example The export policy contains an export rule with the following parameters:

• -protocol nfs3

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule any

• -rwrule krb5,ntlm

• -anon 127

Client #1 has the IP address 10.1.16.207, has user ID 746, sends an access request using the NFSv3 protocol, and authenticated with Kerberos v5. Client #2 has the IP address 10.1.16.211, has user ID 0, sends an access request using the NFSv3 protocol, and authenticated with AUTH_SYS. The client access protocol and IP address match for both clients. The read-only parameter allows read-only access to all clients regardless of the security type they authenticated with. However, only client #1 gets read-write access because it used the approved security type Kerberos v5 to authenticate. Client #2 does not get superuser access. Instead, it gets mapped to anonymous because the -superuser parameter is not specified. This means it defaults to none and automatically maps user ID 0 to anonymous. Client #2 also only gets read-only access because its security type did not match the read-write parameter.

Example The export policy contains an export rule with the following parameters:

• -protocol nfs3

• -clientmatch 10.1.16.0/255.255.255.0

• -rorule any

• -rwrule krb5,ntlm

• -superuser krb5

• -anon 0

Client #1 has the IP address 10.1.16.207, has user ID 0, sends an access request using the NFSv3 protocol, and authenticated with Kerberos v5. Client #2 has the IP address 10.1.16.211, has user ID 0, sends an access request using the NFSv3 protocol, and authenticated with AUTH_SYS. The client access protocol and IP address match for both clients. The read-only parameter allows read-only access to all clients regardless of the security type they authenticated with. However, only client #1 gets read-write access because it used the approved security type Kerberos v5 to authenticate. Client #2 does not get read-write access. The export rule allows superuser access for clients with user ID 0. Client #1 gets superuser access because it matches the user ID and security type for the read-only and -superuser

parameters. Client #2 does not get read-write or superuser access because its security type does not match the read-write parameter or the -superuser parameter. Instead, client #2 is mapped to the anonymous user, which in this case has the user ID 0.

Creating an export policy Before creating export rules, you must create an export policy to hold them. You can use the vserver export-policy create command to create an export policy.

Steps 1. To create an export policy, enter the following command: vserver export-policy create -vserver vserver_name -policyname policy_name

-vserver vserver_name specifies the Storage Virtual Machine (SVM) name. -policyname policy_name specifies the name of the new export policy. The policy name can be up to 256 characters long.

2. Assign the export policy to a volume or qtree by performing one of the following actions:

To assign the export policy to a...    Enter the command...

Volume    volume modify -vserver vserver_name -volume volume_name -policy export_policy_name

Qtree    volume qtree modify -vserver vserver_name -volume volume_name -qtree qtree_name -export-policy export_policy_name

Example The following command creates an export policy named rs1 on the SVM named vs1:

vs1::> vserver export-policy create -vserver vs1 -policyname rs1

Related tasks Checking client access to exports on page 49

Related references Commands for managing export policies on page 101

Adding a rule to an export policy You can use the vserver export-policy rule create command to create an export rule for an export policy. This enables you to define client access to data.

Before you begin The export policy you want to add the export rules to must already exist.

Step

1. Create an export rule:

vserver export-policy rule create -vserver vserver_name -policyname policy_name -ruleindex integer -protocol {any|nfs3|nfs|cifs|nfs4|flexcache},... -clientmatch text -rorule {any|none|never|krb5|krb5i|ntlm|sys},... -rwrule {any|none|never|krb5|krb5i|ntlm|sys},... -anon user_ID -superuser {any|none|krb5|krb5i|ntlm|sys},... -allow-suid {true|false} -allow-dev {true|false}

-vserver vserver_name specifies the Storage Virtual Machine (SVM) name.

-policyname policy_name specifies the name of the existing export policy to add the rule to.

-ruleindex integer specifies the index number for the rule. Rules are evaluated according to their order in the list of index numbers; rules with lower index numbers are evaluated first. For example, the rule with index number 1 is evaluated before the rule with index number 2.

-protocol {any|nfs3|nfs|cifs|nfs4|flexcache} specifies the access protocols that the export rule applies to. You can specify a comma-separated list of multiple access protocols for an export rule. If you specify the protocol as any, do not specify any other protocols in the list. If you do not specify an access protocol, the default value of any is used.

-clientmatch text specifies the client to which the rule applies. You can specify the match in any of the following formats:

Client match format    Example

Domain name preceded by the “.” character    .example.com

Host name    host1

IPv4 address    10.1.12.24

IPv4 address with a subnet mask expressed as a number of bits    10.1.12.10/4

IPv4 address with a network mask    10.1.16.0/255.255.255.0

IPv6 address in dotted format    ::1.2.3.4

IPv6 address with a subnet mask expressed as a number of bits    ff::00/32

A single netgroup with the netgroup name preceded by the @ character    @netgroup1

Note: Entering an IP address range, such as 10.1.12.10-10.1.12.70, is not allowed. Entries in this format are interpreted as a text string and treated as a host name. Entering an IPv6 address with a network mask, such as ff::12/ff::00, is not allowed.

When specifying individual IP addresses in export rules for granular management of client access, do not specify IP addresses that are dynamically (for example, DHCP) or temporarily (for example, IPv6) assigned. Otherwise, the client loses access when its IP address changes.

For certain client match formats, Data ONTAP performs DNS lookups using the DNS configuration of the data SVM. To avoid client data access issues due to export policy rule matching failures, it is important that the DNS configuration of the data SVM works correctly and that the DNS servers have correct entries for NFS clients.

-rorule {any|none|never|krb5|krb5i|ntlm|sys} provides read-only access to clients that authenticate with the specified security types.

-rwrule {any|none|never|krb5|krb5i|ntlm|sys} provides read-write access to clients that authenticate with the specified security types.

Note: A client can only get read-write access for a specific security type if the export rule allows read-only access for that security type as well. If the read-only parameter is more

restrictive for a security type than the read-write parameter, the client might not get read-write access.

You can specify a comma-separated list of multiple security types for a rule. If you specify the security type as any or never, do not specify any other security types. Choose from the following valid security types:

• any
  A matching client can access the data regardless of security type.
• none
  If listed alone, clients with any security type are granted access as anonymous. If listed with other security types, clients with a specified security type are granted access and clients with any other security type are granted access as anonymous.
• never
  A matching client cannot access the data regardless of security type.
• krb5
  A matching client can access the data if it is authenticated by Kerberos 5.
• krb5i
  A matching client can access the data if it is authenticated by Kerberos 5i.
• ntlm
  A matching client can access the data if it is authenticated by CIFS NTLM.
• sys
  A matching client can access the data if it is authenticated by NFS AUTH_SYS.

-anon user_ID specifies a UNIX user ID or user name that is mapped to client requests that arrive with a user ID of 0 (zero), which is typically associated with the user name root. The default value is 65534. NFS clients typically associate user ID 65534 with the user name nobody. In clustered Data ONTAP, this user ID is associated with the user pcuser. To disable access by any client with a user ID of 0, specify a value of 65535.

-superuser {any|none|krb5|krb5i|ntlm|sys} provides superuser access to clients that authenticate with the specified security types.

Note: A client can only get superuser access for a specific security type if the export rule allows read-only access for that security type as well. If the read-only parameter is more restrictive for a security type than the superuser parameter, the client might not get superuser access.

-allow-suid {true|false} specifies whether to allow access to set user ID (SUID) and set group ID (SGID). If this parameter is set to true, clients can modify the SUID or SGID of files, directories, and volumes. If this parameter is set to false, clients can modify the SUID or SGID of directories and volumes but not files. The default is true.

-allow-dev {true|false} specifies whether to allow creation of devices. The default is true.

Examples

The following command creates an export rule on the SVM named vs1 in an export policy named rs1. The rule has the index number 1. The rule matches all clients. The rule enables all NFS access. It enables read-only access by all clients and requires Kerberos authentication for read-write access. Clients with the UNIX user ID 0 (zero) are mapped to user ID 65534 (which typically maps to the user name nobody). The rule enables SUID and SGID modification but does not enable the creation of devices.

vs1::> vserver export-policy rule create -vserver vs1 -policyname rs1 -ruleindex 1 -protocol nfs -clientmatch 0.0.0.0/0 -rorule any -rwrule krb5 -anon 65534 -allow-suid true -allow-dev false

The following command creates an export rule on the SVM named vs2 in an export policy named expol2. The rule has the index number 21. The rule matches clients that are members of the netgroup dev_netgroup_main. The rule enables all NFS access. It enables read-only access for clients that authenticated with AUTH_SYS and requires Kerberos authentication for read-write access. Clients with the UNIX user ID 0 (zero) are mapped to user ID 65534 (which typically maps to the user name nobody). The rule enables SUID and SGID modification but does not enable the creation of devices.

vs1::> vserver export-policy rule create -vserver vs2 -policyname expol2 -ruleindex 21 -protocol nfs -clientmatch @dev_netgroup_main -rorule sys -rwrule krb5 -anon 65534 -allow-suid true -allow-dev false

Related tasks Checking client access to exports on page 49

Related references Commands for managing export rules on page 102

Loading netgroups into SVMs One of the methods you can use to match clients in export policy rules is by using hosts listed in netgroups. You can load netgroups from a uniform resource identifier (URI) into Storage Virtual Machines (SVMs) as an alternative to using netgroups stored in external name servers (vserver services name-service netgroup load).

About this task Data ONTAP supports netgroup-by-host searches for the local netgroup file. After you load the netgroup file, Data ONTAP automatically creates a netgroup.byhost map to enable netgroup-by-host searches. This can significantly speed up local netgroup searches when processing export policy rules to evaluate client access.

Step

1. Load netgroups into SVMs from a URI:

vserver services name-service netgroup load -vserver vserver_name -source {ftp|http|ftps|https}://uri

-vserver vserver_name specifies the SVM name.

-source {ftp|http|ftps|https}://uri specifies the URI to load from.

The file must use the same proper netgroup text file format that is used to populate NIS. See the man page for the command for more information about formatting. Data ONTAP checks the netgroup text file format before loading it. If it contains errors, the console displays warning messages to indicate what corrections you have to make to the file. Neglecting to do so might result in client access issues. When you specify IPv6 addresses in netgroups, you must always shorten and compress each address as specified in RFC 5952.

• Any alphabetic characters in host names in the netgroup file should be lowercase.
• The maximum supported file size is 5 MB.
• The maximum supported level for nesting netgroups is 1000.
• You must only use primary DNS host names when defining host names in the netgroup file. To avoid export access issues, do not define host names using DNS CNAME or round robin records.
• All hosts in netgroups must have both forward (A) and reverse (PTR) DNS records to ensure consistent forward and reverse DNS lookups. In addition, if an IP address of a client has multiple PTR records, all those host names must be members of the netgroup and have corresponding A records.
• The domain portion of triples in the netgroup file should be kept empty because Data ONTAP does not support them.

Loading the netgroup file and building the netgroup.byhost map can take a short while. If you need to update the netgroup, you can edit the file and load the updated netgroup file into the SVM.
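For reference, the following is a minimal sketch of the netgroup text file format, using hypothetical netgroups netgroup1 and eng_hosts and hosts host1 through host3; as noted above, the user and domain fields of each triple are left empty, and the second line shows one level of nesting:

netgroup1    (host1,,) (host2,,)
eng_hosts    (host3,,) netgroup1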

Example The following command loads netgroup definitions into the SVM named vs1 from the HTTP URL http://intranet/downloads/corp-netgroup:

vs1::> vserver services name-service netgroup load -vserver vs1 -source http://intranet/downloads/corp-netgroup

Related tasks
Verifying the status of netgroup definitions on page 44
Checking whether a client IP address is a member of a netgroup on page 107
Performing stricter access checking for netgroups by verifying domains on page 89

Related references Commands for managing local netgroups on page 98

Related information IETF RFC 5952: A Recommendation for IPv6 Address Text Representation

Verifying the status of netgroup definitions After loading netgroups into the Storage Virtual Machine (SVM), you can use the vserver services name-service netgroup status command to verify the status of netgroup definitions. This enables you to determine whether netgroup definitions are consistent on all of the nodes that back the SVM.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Verify the status of netgroup definitions:

vserver services name-service netgroup status

The command displays the following information:

• SVM name
• Node name
• Load time for netgroup definitions
• Hash value of the netgroup definitions

You can display additional information in a more detailed view. See the reference page for the command for details.

3. Return to the admin privilege level: set -privilege admin

Example The following command displays netgroup status for all SVMs.

vs1::> set -privilege advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by technical support. Do you wish to continue? (y or n): y

vs1::*> vserver services name-service netgroup status
Virtual
Server     Node    Load Time            Hash Value
---------- ------- -------------------- --------------------------------
vs1        node1   9/20/2006 16:04:53   e6cb38ec1396a280c0d2b77e3a84eda2
           node2   9/20/2006 16:06:26   e6cb38ec1396a280c0d2b77e3a84eda2
           node3   9/20/2006 16:08:08   e6cb38ec1396a280c0d2b77e3a84eda2
           node4   9/20/2006 16:11:33   e6cb38ec1396a280c0d2b77e3a84eda2

Related tasks Loading netgroups into SVMs on page 43

Setting an export rule's index number You can use the vserver export-policy rule setindex command to manually set an existing export rule's index number. This enables you to rearrange the order in which Data ONTAP processes export rules.

About this task If the new index number is already in use, the command inserts the rule at the specified spot and reorders the list accordingly.

Step

1. To modify the index number of a specified export rule, enter the following command:

vserver export-policy rule setindex -vserver virtual_server_name -policyname policy_name -ruleindex integer -newruleindex integer

-vserver virtual_server_name specifies the Storage Virtual Machine (SVM) name.

-policyname policy_name specifies the policy name.

-ruleindex integer specifies the current index number of the export rule. If the specified index number is already in use by an existing export policy rule, the index numbers of that and all successive export policy rules are incremented by one.

-newruleindex integer specifies the new index number of the export rule.

Example The following command changes the index number of an export rule at index number 3 to index number 2 in an export policy named rs1 on the SVM named vs1.

vs1::> vserver export-policy rule setindex -vserver vs1 -policyname rs1 -ruleindex 3 -newruleindex 2

Associating an export policy to a FlexVol volume Each FlexVol volume contained in the Storage Virtual Machine (SVM) must be associated with an export policy that contains export rules for clients to access data in the volume.

About this task When you create the SVM, Data ONTAP creates a default export policy called default for the SVM. Data ONTAP assigns the default export policy to the SVM volumes. You can associate another custom export policy that you create instead of the default policy to a volume. Before you associate a custom export policy to a volume, you must create one or more export rules that allow the desired access to data in the volume and assign those export rules to the export policy that you want to associate with the volume. You can associate an export policy to a volume when you create the volume or at any time after you create the volume. You can associate one export policy to the volume.

Steps

1. Use either the volume create or volume modify command with the -policy option to associate an export policy to the volume.

Example

cluster::> volume create -vserver vs1 -volume vol1 -aggregate aggr2 -state online -policy cifs_policy -security-style ntfs -junction-path /dept/marketing -size 250g -space-guarantee volume

For more information about the volume create and volume modify commands, see the man pages.

2. Verify that the export policy associated with the volume has the desired access configuration using the vserver export-policy rule show command with the -policyname option.

Example

cluster::> vserver export-policy rule show -policyname cifs_policy
Virtual      Policy        Rule   Access    Client       RO
Server Name  Name          Index  Protocol  Match        Rule
------------ ------------- ------ --------- ------------ ----
vs1          cifs_policy   1      cifs      0.0.0.0/0    any

For more information about the vserver export-policy rule show command, see the man pages. The command displays a summary for that export policy, including a list of rules applied to that policy. Data ONTAP assigns each rule a rule index number. After you know the rule index number, you can use it to display detailed information about the specified export rule.

3. Verify that the rules applied to the export policy are configured correctly by using the vserver export-policy rule show command, specifying the -policyname, -vserver, and -ruleindex options.

Example

cluster::> vserver export-policy rule show -policyname cifs_policy -vserver vs1 -ruleindex 1

Virtual Server: vs1
Policy Name: cifs_policy
Rule Index: 1
Access Protocol: cifs
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 0
Superuser Security Flavors: any
Honor SetUID Bits In SETATTR: true
Allow Creation of Devices: true

Assigning an export policy to a qtree Instead of exporting an entire volume, you can also export a specific qtree on a volume to make it directly accessible to clients. You can export a qtree by assigning an export policy to it. You can assign the export policy either when you create a new qtree or by modifying an existing qtree.

Before you begin The export policy must exist.

About this task By default, qtrees use the parent export policy of the containing volume if not otherwise specified at the time of creation.

Steps 1. Perform one of the following actions:

To assign an export policy to...    Enter the command...

A new qtree    volume qtree create -vserver vserver_name -qtree-path /vol/volume_name/qtree_name -export-policy export_policy_name

An existing qtree    volume qtree modify -vserver vserver_name -qtree-path /vol/volume_name/qtree_name -export-policy export_policy_name

2. Verify that the export policy was assigned correctly by entering the following command: volume qtree show -qtree qtree_name -fields export-policy

Examples The following command creates a new qtree named dev1 on volume vol_eng on the SVM named vs1 and assigns it the export policy dev1_export.

vs1::> volume qtree create -vserver vs1 -qtree-path /vol/vol_eng/dev1 -export-policy dev1_export

The following command modifies an existing qtree named dev2 on volume vol_eng on the SVM named vs1 and assigns it the export policy dev2_export.

vs1::> volume qtree modify -vserver vs1 -qtree-path /vol/vol_eng/dev2 -export-policy dev2_export

Removing an export policy from a qtree If you decide you do not want a specific export policy assigned to a qtree any longer, you can remove the export policy by modifying the qtree to inherit the export policy of the containing volume instead. You can do this by using the volume qtree modify command with the -export-policy parameter and an empty name string ("").

About this task You must perform this task to remove all export policies assigned to qtrees before downgrading to a Data ONTAP release earlier than 8.2.1 that does not support qtree exports.

Steps

1. To remove an export policy from a qtree, enter the following command:

volume qtree modify -vserver vserver_name -qtree-path /vol/volume_name/qtree_name -export-policy ""

2. Verify that the qtree was modified accordingly by entering the following command: volume qtree show -qtree qtree_name -fields export-policy
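Example

The following is a minimal sketch using a hypothetical qtree named dev2 on the volume vol_eng in the SVM named vs1:

vs1::> volume qtree modify -vserver vs1 -qtree-path /vol/vol_eng/dev2 -export-policy ""
vs1::> volume qtree show -qtree dev2 -fields export-policy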

Validating qtree IDs for qtree file operations Data ONTAP can perform an optional additional validation of qtree IDs. This validation ensures that client file operation requests use a valid qtree ID and that clients can only move files within the same qtree. You can enable or disable this validation by modifying the -validate-qtree-export parameter. This parameter is enabled by default.

About this task This parameter is only effective when you have assigned an export policy directly to one or more qtrees on the Storage Virtual Machine (SVM).

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform one of the following actions:

If you want qtree ID validation to be...    Enter the following command...

Enabled    vserver nfs modify -vserver vserver_name -validate-qtree-export enabled

Disabled    vserver nfs modify -vserver vserver_name -validate-qtree-export disabled

3. Return to the admin privilege level: set -privilege admin
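Example

The following is a minimal sketch of disabling qtree ID validation on a hypothetical SVM named vs1 at the advanced privilege level:

vs1::> set -privilege advanced
vs1::*> vserver nfs modify -vserver vs1 -validate-qtree-export disabled
vs1::*> set -privilege admin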

Export policy restrictions and nested junctions for FlexVol volumes If you configured export policies to set a less restrictive policy on a nested junction but a more restrictive policy on a higher level junction, access to the lower level junction might fail. You should ensure that higher level junctions have less restrictive export policies than lower level junctions.

Checking client access to exports When you deploy export policies to manage client access to exports, you might want to first test the export policies to ensure that they work as intended. If you have deployed export policies and clients experience access issues, you might need to test the export policies to troubleshoot the issue. You can test export policies for these purposes by using the vserver export-policy check-access command.

About this task You can check access for a specific client to a specific volume or qtree export using a specific authentication method and file access protocol.

Steps

1. Check client access to exports by using the vserver export-policy check-access command. See the man page for the command for more information.

2. Examine the output to determine whether the export policy works as intended and the client access behaves as expected. Specifically, verify which export policy is used by the volume or qtree and the type of access the client has as a result.

3. If necessary, reconfigure the export policy rules as needed.

Example The following command checks read/write access for an NFSv3 client with the IP address 1.2.3.4 to the qtree qt1 on the volume flex_vol. The command output shows that the qtree uses the export policy primarynames and that access is denied.

cluster1::> vserver export-policy check-access -vserver vs1 -client-ip 1.2.3.4 -volume flex_vol -authentication-method sys -protocol nfs3 -access-type read-write -qtree qt1
                                      Policy         Policy     Rule
Path                   Policy         Owner          Owner Type Index  Access
---------------------- -------------- -------------- ---------- ------ ------
/                      default        vs1_root       volume          1 read
/dir1                  default        vs1_root       volume          1 read
/dir1/dir2             default        vs1_root       volume          1 read
/dir1/dir2/flex1       data           flex_vol       volume         10 read
/dir1/dir2/flex1/qt1   primarynames   qt1            qtree           0 denied
5 entries were displayed.

Related tasks
Creating an export policy on page 40
Adding a rule to an export policy on page 40

Related references
Commands for managing export policies on page 101
Commands for managing export rules on page 102

Using Kerberos with NFS for strong security You can use Kerberos to provide strong authentication between Storage Virtual Machines (SVMs) and NFS clients to ensure secure NFS communication. Configuring NFS with Kerberos increases the integrity and security of NFS client communications with the storage system.

Data ONTAP support for Kerberos

Beginning with version 3, NFS supports generic security services for RPC (RPCSEC_GSS), which enables the use of Kerberos 5. Kerberos provides strong secure authentication for client/server applications. Authentication provides verification of user and process identities to a server. In the Data ONTAP environment, Kerberos provides authentication between Storage Virtual Machines (SVMs) and NFS clients.

Beginning with Data ONTAP 8.3, the following additional Kerberos functionality is supported:

• Kerberos 5 authentication with integrity checking (krb5i)
  Krb5i uses checksums to verify the integrity of each NFS message transferred between client and server. This is useful both for security reasons, for example to ensure that data has not been tampered with, and data integrity reasons, for example to prevent data corruption when using NFS over unreliable networks.
• 128-bit and 256-bit AES encryption
  Advanced Encryption Standard (AES) is an encryption algorithm for securing electronic data. Data ONTAP now supports AES with 128-bit keys (AES-128) and AES with 256-bit keys (AES-256) encryption for Kerberos for stronger security.
• SVM-level Kerberos realm configurations
  SVM administrators can now create Kerberos realm configurations at the SVM level. This means that SVM administrators no longer have to rely on the cluster administrator for Kerberos realm configuration and can create individual Kerberos realm configurations in a multi-tenancy environment.

Requirements for configuring Kerberos with NFS

Before you configure Kerberos with NFS on your system, you must verify that certain items in your network and storage environment are properly configured.

Note: The steps to configure your environment depend on what version and type of client operating system, domain controller, Kerberos, DNS, and so on, that you are using. Documenting all these variables is beyond the scope of this document. For more information, see the respective documentation for each component. For a detailed example of how to set up clustered Data ONTAP and Kerberos 5 with NFSv3 and NFSv4 in an environment using R2 Active Directory and Linux hosts, see technical report 4073.

The following items should be configured first:

Network environment requirements

• Kerberos
  You must have a working Kerberos setup with a key distribution center (KDC), such as Windows Active Directory based Kerberos or MIT Kerberos. NFS servers must use “nfs” as the primary component of their machine principal.
• Directory service

  You must use a secure directory service in your environment, such as Active Directory or OpenLDAP, that is configured to use LDAP over SSL/TLS.
• NTP
  You must have a working time server running NTP. This is necessary to prevent Kerberos authentication failure due to time skew.
• Domain name resolution (DNS)
  Each UNIX client and each Storage Virtual Machine (SVM) LIF must have a proper service record (SRV) registered with the KDC under forward and reverse lookup zones. All participants must be properly resolvable via DNS.
• User accounts
  Each client must have a user account in the Kerberos realm. NFS servers must use “nfs” as the primary component of their machine principal.

NFS client requirements

• NFS
  Each client must be properly configured to communicate over the network using NFSv3 or NFSv4. Clients must support RFC 1964 and RFC 2203.
• Kerberos
  Each client must be properly configured to use Kerberos authentication, including the following details:
  ◦ Encryption for TGS communication is enabled (AES-256 for strongest security).
  ◦ The most secure encryption type for TGT communication is enabled.
  ◦ The Kerberos realm and domain are configured correctly.
  ◦ GSS is enabled.
  When using machine credentials:

◦ Do not run gssd with the -n parameter.

◦ Do not run kinit as the root user.

• Each client must use the most recent and updated operating system version. This ensures best compatibility and reliability for AES encryption with Kerberos.
• DNS
  Each client must be properly configured to use DNS for correct name resolution.
• NTP
  Each client must be synchronizing with the NTP server.
• Host and domain information
  Each client's /etc/hosts and /etc/resolv.conf files must contain the correct host name and DNS information, respectively.
• Keytab files
  Each client must have a keytab file from the KDC. The realm must be in uppercase letters. The encryption type must be AES-256 for strongest security.
• Optional: For best performance, clients benefit from having at least two network interfaces: one for communicating with the local area network and one for communicating with the storage network.

Storage system requirements

• NFS license
  The storage system must have a valid NFS license installed.
• CIFS license
  The CIFS license is optional. It is only required for checking Windows credentials when using multiprotocol name mapping. It is not required in a strict UNIX-only environment.
• SVM
  You must have at least one SVM configured on the system.
• DNS on the SVM
  You must have configured DNS on each SVM.
• NFS server
  You must have configured NFS on the SVM.
• AES encryption
  For strongest security, you must configure the NFS server to allow only AES-256 encryption for Kerberos.
• CIFS server
  If you are running a multiprotocol environment, you must have configured CIFS on the SVM. The CIFS server is required for multiprotocol name mapping.
• Volumes
  You must have a root volume and at least one data volume configured for use by the SVM.
• Root volume
  The root volume of the SVM must have the following configuration:

  Name...             Setting...
  Security style      UNIX
  UID                 root or ID 0
  GID                 root or ID 0
  UNIX permissions    777

  In contrast to the root volume, data volumes can have either security style.
• UNIX groups
  The SVM must have the following UNIX groups configured:

  Group name    Group ID
  daemon        1
  root          0
  pcuser        65534 (created automatically by Data ONTAP when you create the SVM)

• UNIX users
  The SVM must have the following UNIX users configured:

  User name    User ID    Primary group ID    Comment
  nfs          500        0                   Required for GSS INIT phase. The first component of the NFS client user SPN is used as the user.
  pcuser       65534      65534               Required for NFS and CIFS multiprotocol use. Created and added to the pcuser group automatically by Data ONTAP when you create the SVM.
  root         0          0                   Required for mounting.

  The nfs user is not required if a Kerberos-UNIX name mapping exists for the SPN of the NFS client user.
• Export policies and rules
  You must have configured export policies with the necessary export rules for the root and data volumes and qtrees. If all volumes of the SVM are accessed over Kerberos, you can set the export rule options -rorule, -rwrule, and -superuser for the root volume to krb5 or krb5i (see the sketch after this list).

• Kerberos-UNIX name mapping
  If you want the user identified by the NFS client user SPN to have root permissions, you must create a name mapping to root.
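The following is a minimal sketch of such an export rule for a root volume that is accessed only over Kerberos, assuming a hypothetical SVM named vs1 and the export policy named default; adjust the client match and other options for your environment:

vs1::> vserver export-policy rule create -vserver vs1 -policyname default -ruleindex 1 -protocol nfs -clientmatch 0.0.0.0/0 -rorule krb5,krb5i -rwrule krb5,krb5i -superuser krb5,krb5i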

Related information
NetApp Technical Report 4073: Secure Unified Authentication with NetApp Storage Systems: Kerberos, NFSv4, and LDAP for User Authentication over NFS (with a Focus on Clustered Data ONTAP)
NetApp Interoperability Matrix Tool
Clustered Data ONTAP 8.3 System Administration Guide
Clustered Data ONTAP 8.3 Logical Storage Management Guide

Configuring NFS Kerberos permitted encryption types By default, Data ONTAP supports the following encryption types for NFS Kerberos: DES, 3DES, AES-128, and AES-256. You can configure the permitted encryption types for each Storage Virtual Machine (SVM) to suit the security requirements for your particular environment by using the vserver nfs modify command with the -permitted-enc-types parameter.

About this task

For greatest client compatibility, Data ONTAP supports both weak DES and strong AES encryption by default. This means, for example, that if you want to increase security and your environment supports it, you can use this procedure to disable DES and 3DES and require clients to use only AES encryption.

• Enabling or disabling AES entirely (both AES-128 and AES-256) on SVMs is disruptive because it requires disabling the Kerberos configuration on all LIFs for the SVM. Before making this change, ensure that NFS clients do not rely on AES encryption on the SVM.
• Enabling or disabling DES or 3DES does not require any changes to the Kerberos configuration on LIFs.

Step

1. Enable or disable the permitted encryption type you want:

If you want to enable or disable...   Follow these steps...

DES or 3DES
  a. Configure the NFS Kerberos permitted encryption types of the SVM:
     vserver nfs modify -vserver vserver_name -permitted-enc-types encryption_types
     Separate multiple encryption types with a comma.
  b. Verify that the change was successful:
     vserver nfs show -vserver vserver_name -fields permitted-enc-types

AES-128 or AES-256
  a. Identify on which SVM and LIF Kerberos is enabled:
     vserver nfs kerberos interface show
  b. Disable Kerberos on all LIFs on the SVM whose NFS Kerberos permitted encryption type you want to modify:
     vserver nfs kerberos interface disable -lif lif_name
  c. Configure the NFS Kerberos permitted encryption types of the SVM:
     vserver nfs modify -vserver vserver_name -permitted-enc-types encryption_types
     Separate multiple encryption types with a comma.
  d. Verify that the change was successful:
     vserver nfs show -vserver vserver_name -fields permitted-enc-types
  e. Reenable Kerberos on all LIFs on the SVM:
     vserver nfs kerberos interface enable -lif lif_name -spn service_principal_name
  f. Verify that Kerberos is enabled on all LIFs:
     vserver nfs kerberos interface show
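Example

The following commands show what the DES or 3DES case might look like in practice for an SVM named vs1 (a placeholder), restricting the SVM to AES only; the exact keywords accepted by -permitted-enc-types should be confirmed in the vserver nfs modify man page:

cluster1::> vserver nfs modify -vserver vs1 -permitted-enc-types aes-128,aes-256
cluster1::> vserver nfs show -vserver vs1 -fields permitted-enc-types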

Specifying the user ID domain for NFSv4

To specify the user ID domain, you can set the -v4-id-domain option.

About this task By default, Data ONTAP uses the NIS domain for NFSv4 user ID mapping, if one is set. If an NIS domain is not set, the DNS domain is used. You might need to set the user ID domain if, for example, you have multiple user ID domains. The domain name must match the domain configuration on the domain controller. It is not required for NFSv3.

Step

1. Enter the following command:
   vserver nfs modify -vserver vserver_name -v4-id-domain NIS_domain_name
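Example

The following command (the SVM name and domain are placeholders for illustration) sets the NFSv4 user ID domain for the SVM vs1 to example.com:

vs1::> vserver nfs modify -vserver vs1 -v4-id-domain example.com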

Creating an NFS Kerberos realm configuration

If you want to use Kerberos authentication for NFS client access, you must first configure the Storage Virtual Machine (SVM) to use an existing Kerberos realm. You can do so by using the vserver nfs kerberos realm create command.

Before you begin The cluster administrator should have configured NTP on the storage system, client, and KDC server to avoid authentication issues. Time differences between a client and server (clock skew) are a common cause of authentication failures.

About this task Cluster administrators can create, modify, and delete Kerberos realms for any SVM. SVM administrators can only create, modify, and delete Kerberos realms for their SVM.

Step

1. Use the vserver nfs kerberos realm create command to configure a Kerberos realm.

Examples

The following command creates an NFS Kerberos realm configuration for the SVM vs1 that uses an Active Directory server as the KDC server. The Kerberos realm is AUTH.EXAMPLE.COM. The Active Directory server is named ad-1 and its IP address is 10.10.8.14. The permitted clock skew is 300 seconds (the default). The IP address of the KDC server is 10.10.8.14, and its port number is 88 (the default). “Microsoft Kerberos config” is the comment.

vs1::> vserver nfs kerberos realm create -vserver vs1 -realm AUTH.EXAMPLE.COM -adserver-name ad-1 -adserver-ip 10.10.8.14 -clock-skew 300 -kdc-ip 10.10.8.14 -kdc-port 88 -kdc-vendor Microsoft -comment "Microsoft Kerberos config"

The following command creates an NFS Kerberos realm configuration for the SVM vs1 that uses an MIT KDC. The Kerberos realm is SECURITY.EXAMPLE.COM. The permitted clock skew is 300 seconds. The IP address of the KDC server is 10.10.9.1, and its port number is 88. The KDC vendor is Other to indicate a UNIX vendor. The IP address of the administrative server is 10.10.9.1, and its port number is 749 (the default). The IP address of the password server is 10.10.9.1, and its port number is 464 (the default). “UNIX Kerberos config” is the comment.

vs1::> vserver nfs kerberos realm create -vserver vs1 -realm SECURITY.EXAMPLE.COM -clock-skew 300 -kdc-ip 10.10.9.1 -kdc-port 88 -kdc-vendor Other -adminserver-ip 10.10.9.1 -adminserver-port 749 -passwordserver-ip 10.10.9.1 -passwordserver-port 464 -comment "UNIX Kerberos config"

Related references
Commands for managing NFS Kerberos realm configurations on page 101

Creating an NFS Kerberos configuration You can use the vserver nfs kerberos interface enable command to enable Kerberos and create a Kerberos configuration for Storage Virtual Machines (SVMs). This enables the SVM to use Kerberos security services for NFS.

About this task For best performance, the SVM should have at least two data LIFs. One is for use by the service principal name (SPN) for NFS Kerberos traffic, and the other is for non-Kerberos traffic. Note: If you use pNFS with Kerberos on SVMs, you must enable Kerberos on every LIF on the SVM.

If you are using an Active Directory KDC, the first 15 characters of any SPNs used must be unique across SVMs within a realm or domain. When you create an NFS Kerberos configuration, Data ONTAP performs the following checks for the SPN:
• If two different SVMs are in the same IPspace and use the same KDC account, Data ONTAP does not allow such a configuration.
• If two different SVMs are in different IPspaces and use the same KDC account, Data ONTAP performs additional checks internally as well as with the external KDC and prevents such a configuration if KDC accounts are shared across SVMs.

Step

1. Create the NFS Kerberos configuration:
   vserver nfs kerberos interface enable -vserver vserver_name -lif logical_interface -spn service_principal_name
   If you need to create the SPN in a different OU of the Kerberos realm, you can specify the optional -ou parameter.
   Data ONTAP requires the secret key for the SPN from the KDC to enable the Kerberos interface and can obtain it using one of two methods:

If you...                                              Include the following parameter with the command...

Have the KDC administrator credentials to retrieve     -admin-username kdc_admin_username
the key directly from the KDC

Do not have the KDC administrator credentials but      -keytab-uri {ftp|http}://uri
have a keytab file from the KDC containing the key

See the man page for the command for more information.

Example

The following example creates an NFS Kerberos configuration for the SVM named vs1 on the logical interface ves03-d1, with the SPN nfs/[email protected] in the OU lab2ou.

vs1::> vserver nfs kerberos interface enable -lif ves03-d1 -vserver vs1 -spn nfs/[email protected] -ou "ou=lab2ou"

Related references Commands for managing NFS Kerberos interface configurations on page 100

Configuring name services Depending on the configuration of your storage system, Data ONTAP needs to be able to look up host, user, group, or netgroup information to provide proper access to clients. You must configure name services to enable Data ONTAP to access local or external name services to obtain this information.

How Data ONTAP name service switch configuration works Data ONTAP stores name service configuration information in a table that is the equivalent of the /etc/nsswitch.conf file on UNIX systems. You must understand the function of the table and how Data ONTAP uses it so you can configure it appropriately for your environment. The Data ONTAP name service switch table determines which name service sources Data ONTAP consults in which order to retrieve information for a certain type of name service information. Data ONTAP maintains a separate name service switch table for each Storage Virtual Machine (SVM).

Database types The table stores a separate name service list for each of the following database types:

Database Defines name service sources for... Valid sources are... type hosts Converting host names to IP addresses files, dns group Looking up user group information files, nis, ldap passwd Looking up user information files, nis, ldap netgroup Looking up netgroup information files, nis, ldap namemap Mapping user names files, ldap

Source types

The sources specify which name service source to use to retrieve the appropriate information.

Specify source type...   To look up information in...                       Managed by the command families...

files                    Local source files                                 vserver services name-service unix-user
                                                                            vserver services name-service unix-group
                                                                            vserver services name-service netgroup
                                                                            vserver services name-service dns hosts

nis                      External NIS servers as specified in the NIS       vserver services name-service nis-domain
                         domain configuration of the SVM

ldap                     External LDAP servers as specified in the LDAP     vserver services name-service ldap
                         client configuration of the SVM

dns                      External DNS servers as specified in the DNS       vserver services name-service dns
                         configuration of the SVM

Even if you plan to use NIS or LDAP for both data access and SVM administration authentication, you should still include files and configure local users as a fallback in case NIS or LDAP authentication fails.

Protocols used to access external sources

To access the servers for external sources, Data ONTAP uses the following protocols:

External name service source   Protocol used for access
NIS                            UDP
DNS                            UDP
LDAP                           TCP

Example

The following example displays the name service switch configuration for the SVM svm_1:

cluster1::*> vserver services name-service ns-switch show -vserver svm_1
                                                    Source
Vserver         Database       Order
--------------- ------------   ---------
svm_1           hosts          files, dns
svm_1           group          files
svm_1           passwd         files
svm_1           netgroup       nis, files

To look up IP addresses for hosts, Data ONTAP first consults local source files. If the query does not return any results, DNS servers are checked next.
To look up user or group information, Data ONTAP consults only local source files. If the query does not return any results, the lookup fails.
To look up netgroup information, Data ONTAP first consults external NIS servers. If the query does not return any results, the local netgroup file is checked next.

There are no name service entries for name mapping in the table for this SVM. Therefore, Data ONTAP consults only local source files by default.

Related concepts How Data ONTAP uses name services on page 26

Related information NetApp Technical Report 4379: Name Services Best Practice Guide Clustered Data ONTAP

Configuring the name service switch table You must configure the name service switch table correctly to enable Data ONTAP to consult local or external name services to retrieve host, user, or group information.

Before you begin You must have decided which name services you want to use for host, user, group, netgroup, and name mapping as applicable to your environment. If you plan to use netgroups, all IPv6 addresses specified in netgroups must be shortened and compressed as specified in RFC 5952.

Steps

1. Add the necessary entries to the name service switch table:
   vserver services name-service ns-switch create -vserver vserver_name -database database_name -sources source_names

2. Verify that the name service switch table contains the expected entries in the desired order: vserver services name-service ns-switch show -vserver vserver_name

If you need to make any corrections, use the vserver services name-service ns-switch modify or vserver services name-service ns-switch delete commands.

Example The following example creates a new entry in the name service switch table for the SVM vs1 to use the local netgroup file, an external NIS server, and an external LDAP server to look up netgroup information in that order:

cluster::> vserver services name-service ns-switch create -vserver vs1 -database netgroup -sources files,nis,ldap

After you finish You must configure the name services you specified for the SVM. If you specify a name service but fail to configure it, client access to the storage system might not work as expected.

Related tasks Troubleshooting name service issues on page 93

Related references Commands for managing name service switch entries on page 95

Related information NetApp Technical Report 4379: Name Services Best Practice Guide Clustered Data ONTAP IETF RFC 5952: A Recommendation for IPv6 Address Text Representation

Using LDAP An LDAP server enables you to centrally maintain user information. If you store your user database on an LDAP server in your environment, you can configure your storage system to look up user information in your existing LDAP database.

Using LDAP over SSL/TLS to secure communication

You can use LDAP over SSL/TLS to secure communication between the Storage Virtual Machine (SVM) LDAP client and the LDAP server. This encrypts all LDAP traffic to and from the LDAP server.

LDAP over SSL/TLS concepts

You must understand certain terms and concepts about how Data ONTAP uses SSL/TLS to secure LDAP communication. Data ONTAP can use LDAP over SSL/TLS for setting up authenticated sessions with Active Directory-integrated LDAP servers or UNIX-based LDAP servers.

Terminology

There are certain terms that you should understand about how Data ONTAP uses LDAP over SSL to secure LDAP communication.

LDAP (Lightweight Directory Access Protocol)
  A protocol for accessing and managing information directories. LDAP is used as an information directory for storing objects such as users, groups, and netgroups. LDAP also provides directory services that manage these objects and fulfill LDAP requests from LDAP clients.
SSL (Secure Sockets Layer)
  A secure protocol developed for sending information securely over the Internet. SSL is used to provide either server or mutual (server and client) authentication. SSL provides encryption only; if a method to ensure data integrity is needed, it must be provided by the application using SSL.
TLS (Transport Layer Security)
  An IETF standards track protocol that is based on the earlier SSL specifications. It is the successor to SSL.
LDAP over SSL/TLS (also known as LDAPS)
  A protocol that uses SSL or TLS to secure communication between LDAP clients and LDAP servers. The terms SSL and TLS are often used interchangeably unless referring to a specific version of the protocol.
Start TLS (also known as start_tls, STARTTLS, and StartTLS)
  A mechanism to provide secure communication by using the TLS/SSL protocols.

How Data ONTAP uses LDAP over SSL/TLS

By default, LDAP communications between client and server applications are not encrypted. This means that it is possible to use a network monitoring device or software to view the communications between LDAP client and server computers. This is especially problematic when an

LDAP simple bind is used because the credentials (user name and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted.
The SSL and TLS protocols run above TCP/IP and below higher-level protocols, such as LDAP. They use TCP/IP on behalf of the higher-level protocols, and in the process, permit an SSL-enabled server to authenticate itself to an SSL-enabled client and permit both machines to establish an encrypted connection. These capabilities address fundamental security concerns about communication over the Internet and other TCP/IP networks.
Data ONTAP supports SSL server authentication, which enables the Storage Virtual Machine (SVM) LDAP client to confirm the LDAP server's identity during the bind operation. SSL/TLS-enabled LDAP clients can use standard techniques of public-key cryptography to check that a server's certificate and public ID are valid and have been issued by a certificate authority (CA) listed in the client's list of trusted CAs.
This version of Data ONTAP supports the following:
• LDAP over SSL/TLS for SMB-related traffic between the Active Directory-integrated LDAP servers and the SVM
• LDAP over SSL/TLS for LDAP traffic for name mapping
  Either Active Directory-integrated LDAP servers or UNIX-based LDAP servers can be used to store information for LDAP name mapping.
• Self-signed root CA certificates
  When using an Active Directory-integrated LDAP server, the self-signed root certificate is generated when the Windows Server Certificate Service is installed in the domain. When using a UNIX-based LDAP server for LDAP name mapping, the self-signed root certificate is generated and saved by using means appropriate to that LDAP application.
Data ONTAP does not support signing (integrity protection) and sealing (encryption) of the data. By default, LDAP over SSL/TLS is disabled.

Data ONTAP uses port 389 for LDAP over SSL/TLS LDAP supports two methods to encrypt communications using SSL/TLS: traditional LDAPS and STARTTLS. LDAPS communication usually occurs over a special port, commonly 636. However, STARTTLS begins as a plaintext connection over the standard LDAP port (389), and that connection is then upgraded to SSL/TLS. Data ONTAP uses STARTTLS for securing LDAP communication, and uses the default LDAP port (389) to communicate with the LDAP server. LDAP over SSL/TLS on the SVM should not be configured to use port 636 because this causes LDAP connections to fail. The LDAP server must be configured to allow connections over LDAP port 389; otherwise, LDAP SSL/TLS connections from the SVM to the LDAP server fail.

Installing the self-signed root CA certificate on the SVM Before you can use secure LDAP authentication when binding to LDAP servers, you must install the self-signed root CA certificate on the Storage Virtual Machine (SVM).

About this task

When LDAP over SSL/TLS is enabled, the Data ONTAP LDAP client on the SVM does not check whether certificates have been revoked; it treats revoked certificates as if they were not revoked.

Steps

1. Install the self-signed root CA certificate:
   a. Begin the certificate installation:

      security certificate install -vserver vserver_name -type server-ca
      The console output displays the following message:
      Please enter Certificate: Press <Enter> when done

b. Open the certificate .pem file with a text editor, copy the certificate, including the lines beginning with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----, and then paste the certificate on the console.

   c. Verify that the certificate is displayed after the console prompt.
   d. Complete the installation by pressing Enter.

2. Verify that the certificate is installed:
   security certificate show -vserver vserver_name
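Example

The following is a sketch of what the installation might look like for an SVM named vs1 (a placeholder); the certificate body is pasted from the .pem file and is omitted here:

vs1::> security certificate install -vserver vs1 -type server-ca

Please enter Certificate: Press <Enter> when done
-----BEGIN CERTIFICATE-----
...contents of the .pem file pasted here...
-----END CERTIFICATE-----

vs1::> security certificate show -vserver vs1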

Enabling LDAP over SSL/TLS Before Storage Virtual Machines (SVMs) can use secure LDAP communication to LDAP servers, you must modify the LDAP client of the SVM to enable LDAP over SSL/TLS.

Before you begin You must have installed the self-signed root CA certificate of the LDAP server on the SVM.

Steps

1. Configure the LDAP client to allow secure LDAP communication with LDAP servers:
   vserver services name-service ldap client modify -vserver vserver_name -client-config ldap_client_config_name -use-start-tls true

2. Verify that the LDAP over SSL/TLS security setting is set to true:
   vserver services name-service ldap client show -vserver vserver_name
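Example

The following commands (the SVM and client configuration names are placeholders) enable Start TLS for the LDAP client configuration ldap1 on the SVM vs1 and then display the configuration to verify the setting:

cluster1::> vserver services name-service ldap client modify -vserver vs1 -client-config ldap1 -use-start-tls true
cluster1::> vserver services name-service ldap client show -vserver vs1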

Creating a new LDAP client schema Data ONTAP provides three LDAP schemas: one for Active Directory Services for UNIX compatibility, one for Active Directory Identity Management for UNIX compatibility, and one for RFC-2307 LDAP compatibility. If the LDAP schema in your environment differs from these, you must create a new LDAP client schema for Data ONTAP.

About this task The default LDAP schemas provided by Data ONTAP cannot be modified. To create a new schema, you create a copy and then modify the copy accordingly.

Steps 1. Display the existing LDAP client schema templates to identify the one you want to copy: vserver services name-service ldap client schema show

2. Set the privilege level to advanced: set -privilege advanced

3. Make a copy of an existing LDAP client schema:
   vserver services name-service ldap client schema copy -vserver vserver_name -schema existing_schema_name -new-schema-name new_schema_name

4. Use the vserver services name-service ldap client schema modify command to modify the new schema and customize it for your environment. See the man page for the command for more information.

5. Return to the admin privilege level: set -privilege admin
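Example

The following command, run at the advanced privilege level, makes a copy of the provided AD-SFU schema under a new name; the SVM and schema names are placeholders, and you would then customize the copy with the vserver services name-service ldap client schema modify command:

cluster1::*> vserver services name-service ldap client schema copy -vserver vs1 -schema AD-SFU -new-schema-name AD-SFU-custom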

Related tasks Enabling LDAP RFC2307bis support on page 63

Related references Commands for managing LDAP client schema templates on page 100

Enabling LDAP RFC2307bis support If you want to use LDAP and require the additional capability to use nested group memberships, you can configure Data ONTAP to enable LDAP RFC2307bis support.

Before you begin You must have created a copy of one of the default LDAP client schemas that you want to use.

About this task

In LDAP client schemas, group objects use the memberUid attribute. This attribute can contain multiple values and lists the names of the users that belong to that group. In RFC2307bis-enabled LDAP client schemas, group objects use the uniqueMember attribute. This attribute can contain the full distinguished name (DN) of another object in the LDAP directory. This enables you to use nested groups because groups can have other groups as members.
A user should not be a member of more than 256 groups, including nested groups; Data ONTAP ignores any groups over the 256-group limit.
By default, RFC2307bis support is disabled.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Modify the copied RFC2307 LDAP client schema to enable RFC2307bis support:
   vserver services name-service ldap client schema modify -vserver vserver_name -schema schema_name -enable-rfc2307bis true

3. Modify the schema to match the object class supported in the LDAP server:
   vserver services name-service ldap client schema modify -vserver vserver_name -schema schema_name -group-of-unique-names-object-class object_class

4. Modify the schema to match the attribute name supported in the LDAP server:
   vserver services name-service ldap client schema modify -vserver vserver_name -schema schema_name -unique-member-attribute attribute_name

5. Return to the admin privilege level: set -privilege admin
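Example

The following is a minimal sketch of steps 2 through 4 for a copied schema. The SVM and schema names are placeholders, and groupOfUniqueNames and uniqueMember are the standard RFC2307bis names; confirm the object class and attribute actually used by your LDAP server:

cluster1::*> vserver services name-service ldap client schema modify -vserver vs1 -schema rfc2307bis-custom -enable-rfc2307bis true
cluster1::*> vserver services name-service ldap client schema modify -vserver vs1 -schema rfc2307bis-custom -group-of-unique-names-object-class groupOfUniqueNames
cluster1::*> vserver services name-service ldap client schema modify -vserver vs1 -schema rfc2307bis-custom -unique-member-attribute uniqueMember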

Related tasks Creating an LDAP client configuration on page 65 Creating a new LDAP client schema on page 62

Related references Commands for managing LDAP client configurations on page 99

Configuration options for LDAP directory searches You can optimize LDAP directory searches, including user, group, and netgroup information, by configuring the Data ONTAP LDAP client to connect to LDAP servers in the most appropriate way for your environment. You need to understand when the default LDAP base and scope search values suffice and which parameters to specify when custom values are more appropriate. LDAP client search options for user, group, and netgroup information can help avoid failed LDAP queries, and therefore failed client access to storage systems. They also help ensure that the searches are as efficient as possible to avoid client performance issues.

Default base and scope search values The LDAP base is the default base DN that the LDAP client uses to perform LDAP queries. All searches, including user, group, and netgroup searches, are done using the base DN. This option is appropriate when your LDAP directory is relatively small and all relevant entries are located in the same DN. If you do not specify a custom base DN, the default is root. This means that each query searches the entire directory. Although this maximizes the chances of success of the LDAP query, it can be inefficient and result in significantly decreased performance with large LDAP directories. The LDAP base scope is the default search scope that the LDAP client uses to perform LDAP queries. All searches, including user, group, and netgroup searches, are done using the base scope. It determines whether the LDAP query searches only the named entry, entries one level below the DN, or the entire subtree below the DN. If you do not specify a custom base scope, the default is subtree. This means that each query searches the entire subtree below the DN. Although this maximizes the chances of success of the LDAP query, it can be inefficient and result in significantly decreased performance with large LDAP directories.

Custom base and scope search values Optionally, you can specify separate base and scope values for user, group, and netgroup searches. Limiting the search base and scope of queries this way can significantly improve performance because it limits the search to a smaller subsection of the LDAP directory. If you specify custom base and scope values, they override the general default search base and scope for user, group, and netgroup searches. The parameters to specify custom base and scope values are available at the advanced privilege level.

LDAP client parameter...   Specifies custom...

-base-dn                   Base DN for all LDAP searches
-base-scope                Base scope for all LDAP searches
-user-dn                   Base DNs for all LDAP user searches
                           This parameter also applies to user name-mapping searches.
-user-scope                Base scope for all LDAP user searches
                           This parameter also applies to user name-mapping searches.
-group-dn                  Base DNs for all LDAP group searches
-group-scope               Base scope for all LDAP group searches
-netgroup-dn               Base DNs for all LDAP netgroup searches
-netgroup-scope            Base scope for all LDAP netgroup searches

Multiple custom base DN values If your LDAP directory structure is more complex, it might be necessary for you to specify multiple base DNs to search multiple parts of your LDAP directory for certain information. You can specify multiple DNs for the user, group, and netgroup DN parameters by separating them with a semicolon (;) and enclosing the entire DN search list with double quotes ("). If a DN contains a semicolon, you must add an escape character (\) immediately before the semicolon in the DN. Note that the scope applies to the entire list of DNs specified for the corresponding parameter. For example, if you specify a list of three different user DNs and subtree for the user scope, then LDAP user searches search the entire subtree for each of the three specified DNs.
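As an illustration of this syntax, the following command (run at the advanced privilege level; the SVM, client configuration, and DNs are placeholders) configures two base DNs for user searches, separated by a semicolon and enclosed in double quotes:

cluster1::*> vserver services name-service ldap client modify -vserver vs1 -client-config ldap1 -user-dn "ou=people,dc=example,dc=com;ou=contractors,dc=example,dc=com" -user-scope subtree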

Related tasks Creating an LDAP client configuration on page 65

Related references Commands for managing LDAP client configurations on page 99

Creating an LDAP client configuration If you want Data ONTAP to be able to access external LDAP servers in your environment to look up information stored in the LDAP directory, you must set up an LDAP client first on the storage system. You can use the vserver services name-service ldap client create command to create such an LDAP client configuration.

Steps

1. Create an LDAP client configuration:
   vserver services name-service ldap client create -vserver vserver_name -client-config client_config_name {-servers LDAP_server_list | -ad-domain ad_domain -preferred-ad-servers preferred_ad_server_list -bind-as-cifs-server {true|false}} -schema schema -port port -query-timeout integer -min-bind-level {anonymous|simple|sasl} -bind-dn LDAP_DN -bind-password password -base-dn LDAP_DN -base-scope {base|onelevel|subtree}

-vserver vserver_name specifies the name of the SVM for which you want to create an LDAP client configuration.
-client-config client_config_name specifies the name of the new LDAP client configuration. If the SVM resides in a non-default IPspace, you cannot assign an LDAP client configuration that was created by a cluster administrator.
-servers LDAP_server_list specifies one or more LDAP servers by IP address in a comma-delimited list.
-ad-domain ad_domain specifies the Active Directory domain.
-preferred-ad-servers preferred_ad_server_list specifies one or more preferred Active Directory servers by IP address in a comma-delimited list.

-bind-as-cifs-server {true|false} specifies whether to bind using the CIFS credentials of the SVM. The default is false. If you want to set this parameter to true, you must first install a CIFS license and create a CIFS server.
-schema schema specifies the schema template to use. You can either use one of the provided default schemas or create your own schema by copying a default schema (they are read-only) and modifying the copy.
-port port specifies the LDAP server port. The default is 389. If you plan to use Start TLS to secure the LDAP connection, you must use the default port 389. Start TLS begins as a plaintext connection over the LDAP default port 389, and that connection is then upgraded to SSL/TLS. If you change the port, Start TLS fails.
-query-timeout integer specifies the query timeout in seconds. The allowed range is 0-10 seconds. The default is 3 seconds.
-min-bind-level {anonymous|simple|sasl} specifies the minimum bind authentication level. The default is anonymous.
-bind-dn LDAP_DN specifies the Bind user. For Active Directory servers, specify the user in the account (DOMAIN\user) or principal ([email protected]) form. Otherwise, specify the user in distinguished name (CN=user,DC=domain,DC=com) form.
-bind-password password specifies the Bind password.
-base-dn LDAP_DN specifies the base DN. The default is "" (root).
-base-scope {base|onelevel|subtree} specifies the base search scope. The default is subtree.
-use-start-tls {true|false} specifies whether to use Start TLS for the LDAP connection. If set to true and the LDAP server supports it, the LDAP client uses an encrypted TLS/SSL connection to the server. The default is false. You must install a self-signed root CA certificate of the LDAP server to use this option.

2. Verify that the LDAP client configuration was created successfully:
   vserver services name-service ldap client show -client-config client_config_name

Examples The following command creates a new LDAP client configuration named “ldap1” for the SVM vs1 to work with an Active Directory server for LDAP:

cluster1::> vserver services name-service ldap client create -vserver vs1 -client-config ldap1 -ad-domain addomain.example.com -bind-as-cifs-server true -schema AD-SFU -port 389 -query-timeout 3 -min-bind-level simple -base-dn DC=addomain,DC=example,DC=com -base-scope subtree -preferred-ad-servers 172.17.32.100

The following command modifies the LDAP client configuration named “ldap1” for the SVM vs1 to work with an Active Directory server for LDAP by specifying an Active Directory domain, and to bind to the server by using the CIFS credentials of the SVM:

cluster1::> vserver services name-service ldap client modify -vserver vs1 -client-config ldap1 -bind-as-cifs-server true -ad-domain addomain.example.com

The following command modifies the LDAP client configuration named “ldap1” for the SVM vs1 by specifying the base DN:

cluster1::> vserver services name-service ldap client modify -vserver vs1 -client-config ldap1 -base-dn CN=Users,DC=addomain,DC=example,DC=com

Related concepts Configuration options for LDAP directory searches on page 64

Related tasks Improving performance of LDAP directory netgroup-by-host searches on page 67

Related references Commands for managing LDAP client configurations on page 99

Improving performance of LDAP directory netgroup-by-host searches If your LDAP environment is configured to allow netgroup-by-host searches, you can configure Data ONTAP to take advantage of this and perform netgroup-by-host searches. This can significantly speed up netgroup searches and reduce possible NFS client access issues due to latency during netgroup searches.

Before you begin Your LDAP directory must contain a netgroup.byhost map. Your DNS servers should contain both forward (A) and reverse (PTR) lookup records for NFS clients. When you specify IPv6 addresses in netgroups, you must always shorten and compress each address as specified in RFC 5952.

About this task NIS servers store netgroup information in three separate maps called netgroup, netgroup.byuser, and netgroup.byhost. The purpose of the netgroup.byuser and netgroup.byhost maps is to speed up netgroup searches. Beginning with Data ONTAP 8.3, Data ONTAP can perform netgroup-by-host searches on NIS servers for improved mount response times. By default, LDAP directories do not have such a netgroup.byhost map like NIS servers. It is possible, though, with the help of third-party tools, to import a NIS netgroup.byhost map into LDAP directories to enable fast netgroup-by-host searches. If you have configured your LDAP environment to allow netgroup-by-host searches, you can configure the Data ONTAP LDAP client with the netgroup.byhost map name, DN, and search scope for faster netgroup-by-host searches. Receiving the results for netgroup-by-host searches faster enables Data ONTAP to process export rules faster when NFS clients request access to exports. This reduces the chance of delayed access due to netgroup search latency issues.

Steps

1. Obtain the exact full distinguished name of the NIS netgroup.byhost map you imported into your LDAP directory. The map DN can vary depending on the third-party tool you used for import. For best performance, you should specify the exact map DN.

2. Set the privilege level to advanced: set -privilege advanced

3. Enable netgroup-by-host searches in the LDAP client configuration of the Storage Virtual Machine (SVM):
   vserver services name-service ldap client modify -vserver vserver_name -client-config config_name -is-netgroup-byhost-enabled true -netgroup-byhost-dn netgroup-by-host_map_distinguished_name -netgroup-byhost-scope netgroup-by-host_search_scope

-is-netgroup-byhost-enabled {true|false} enables or disables netgroup-by-host search for LDAP directories. The default is false.
-netgroup-byhost-dn netgroup-by-host_map_distinguished_name specifies the distinguished name of the netgroup.byhost map in the LDAP directory. It overrides the base DN for netgroup-by-host searches. If you do not specify this parameter, Data ONTAP uses the base DN instead.
-netgroup-byhost-scope {base|onelevel|subtree} specifies the search scope for netgroup-by-host searches. If you do not specify this parameter, the default is subtree.
If the LDAP client configuration does not exist yet, you can enable netgroup-by-host searches by specifying these parameters when creating a new LDAP client configuration using the vserver services name-service ldap client create command.

4. Return to the admin privilege level: set -privilege admin

Example The following command modifies the existing LDAP client configuration named “ldap_corp” to enable netgroup-by-host searches using the netgroup.byhost map named “nisMapName="netgroup.byhost",dc=corp,dc=example,dc=com” and the default search scope subtree:

cluster1::*> vserver services name-service ldap client modify -vserver vs1 -client-config ldap_corp -is-netgroup-byhost-enabled true -netgroup-byhost-dn nisMapName="netgroup.byhost",dc=corp,dc=example,dc=com

After you finish The netgroup.byhost and netgroup maps in the directory must be kept in sync at all times to avoid client access issues.

Related tasks Creating an LDAP client configuration on page 65

Related information IETF RFC 5952: A Recommendation for IPv6 Address Text Representation

Enabling LDAP on SVMs You can use the vserver services name-service ldap create command to enable LDAP on Storage Virtual Machines (SVMs) and configure it to use an LDAP client you created. This enables SVMs to use LDAP for name mapping.

Before you begin

An LDAP client configuration must exist on the SVM.

Step

1. Enable LDAP on the SVMs:
   vserver services name-service ldap create -vserver vserver_name -client-config client_config_name -client-enabled true

-vserver vserver_name specifies the SVM name.
-client-config client_config_name specifies the client configuration name.
-client-enabled specifies whether the LDAP client is enabled. The default is true.

Example The following command enables LDAP on the “vs1” SVM and configures it to use the “ldap1” LDAP client configuration:

cluster1::> vserver services name-service ldap create -vserver vs1 -client-config ldap1 -client-enabled true

Related references Commands for managing LDAP configurations on page 99

Configuring SVMs to use LDAP You can configure Storage Virtual Machines (SVMs) to use LDAP servers in your environment for obtaining network information. You modify the name service configuration of SVMs to configure the network information and name mapping sources.

Step 1. Perform one of the following actions:

If you want to configure SVMs to use LDAP to look up...   Enter the command...

User information
  vserver services name-service ns-switch create -vserver vserver_name -database passwd -sources ldap,files

Group information
  vserver services name-service ns-switch create -vserver vserver_name -database group -sources ldap,files

Netgroup information
  vserver services name-service ns-switch create -vserver vserver_name -database netgroup -sources ldap,files

Name mapping information
  vserver services name-service ns-switch create -vserver vserver_name -database namemap -sources ldap,files

If you plan to use LDAP for netgroup searches:
• All hosts in netgroups must have both forward (A) and reverse (PTR) DNS records to ensure consistent forward and reverse DNS lookups. In addition, if an IP address of a client has multiple PTR records, all those host names must be members of the netgroup and have corresponding A records.

• All IPv6 addresses specified in netgroups must be shortened and compressed as specified in RFC 5952.

namemap specifies the sources to search for name mapping information and in what order. In a UNIX-only environment, this entry is not necessary. Name mapping is only required in a mixed environment using both UNIX and Windows. Even if you plan to use LDAP for both data access and SVM administration authentication, you should still include files and configure local users as a fallback in case LDAP authentication fails. See the man page for the command for more information.
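For example, the following command (the SVM name is a placeholder) configures the SVM vs1 to consult LDAP first and then local files when looking up user information:

cluster1::> vserver services name-service ns-switch create -vserver vs1 -database passwd -sources ldap,files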

Related concepts How Data ONTAP uses name services on page 26 How name mappings are used on page 76

Related information IETF RFC 5952: A Recommendation for IPv6 Address Text Representation

Creating a NIS domain configuration

If you specified NIS as a name service option during Storage Virtual Machine (SVM) setup, you must create a NIS domain configuration for the SVM. You can use the vserver services name-service nis-domain create command to create a NIS domain configuration.

About this task

You can create multiple NIS domains. However, you can use only the one that is set to active.
If your NIS database contains a netgroup.byhost map, Data ONTAP can use it for quicker searches. The netgroup.byhost and netgroup maps in the directory must be kept in sync at all times to avoid client access issues.

Step

1. Use the vserver services name-service nis-domain create command to create a NIS domain configuration.
   You can specify up to 10 NIS servers. All configured NIS servers should be available and reachable. If any of the NIS servers you configure become permanently unavailable later, you should remove them again from the NIS domain configuration (with the vserver services name-service nis-domain modify command) to avoid possible NIS functionality delays.
   If you plan to use NIS for directory searches, the maps in your NIS servers cannot have more than 1024 characters for each entry. Do not specify a NIS server that does not comply with this limit. Otherwise, client access dependent on NIS entries might fail.
   If you plan to use NIS for netgroup searches:
   • All hosts in netgroups must have both forward (A) and reverse (PTR) DNS records to ensure consistent forward and reverse DNS lookups. In addition, if an IP address of a client has multiple PTR records, all those host names must be members of the netgroup and have corresponding A records.
   • All IPv6 addresses specified in netgroups must be shortened and compressed as specified in RFC 5952.

Example The following command creates and makes active a NIS domain configuration for a NIS domain called nisdomain on the SVM named vs1 with a NIS server at IP address 192.0.2.180:

vs1::> vserver services name-service nis-domain create -vserver vs1 -domain nisdomain -active true -servers 192.0.2.180

Related references Commands for managing NIS domain configurations on page 98

Related information IETF RFC 5952: A Recommendation for IPv6 Address Text Representation

Configuring local UNIX users and groups You can use local UNIX users and groups for authentication and name mappings.

Creating a local UNIX user

You can use the vserver services name-service unix-user create command to create local UNIX users. A local UNIX user is a UNIX user that you create on the Storage Virtual Machine (SVM) as a UNIX name service option and that is used in the processing of name mappings.

About this task There is a default maximum limit of 32,768 local UNIX users in the cluster. The cluster administrator can modify this limit.

Step

1. Create a local UNIX user:
   vserver services name-service unix-user create -vserver vserver_name -user user_name -id integer -primary-gid integer -full-name full_name

-vserver vserver_name specifies the SVM name.
-user user_name specifies the user name. The length of the user name must be 64 characters or less.
-id integer specifies the user ID.
-primary-gid integer specifies the primary group ID. Be aware that this does not actually add the user to the group. After creating the user, you must manually add the user to any desired group.
-full-name full_name specifies the full name of the user.

Example The following command creates a local UNIX user named johnm (full name “John Miller”) on the SVM named vs1. The user has the ID 123 and the primary group ID 100.

node::> vserver services name-service unix-user create -vserver vs1 -user johnm -id 123 -primary-gid 100 -full-name "John Miller"

After you finish Add the user to any local UNIX group you want them to be a member of.

Related concepts Limits for local UNIX users, groups, and group members on page 97

Related tasks Managing limits for local UNIX users and groups on page 97

Related references Commands for managing local UNIX users on page 96

Adding a user to a local UNIX group You can use the vserver services name-service unix-group adduser command to add a user to a UNIX group that is local to the Storage Virtual Machine (SVM).

About this task There is a default maximum limit of 32,768 local UNIX user groups and group members combined in the cluster. The cluster administrator can modify this limit.

Step

1. Add a user to a local UNIX group:
   vserver services name-service unix-group adduser -vserver vserver_name -name group_name -username user_name

-vserver vserver_name specifies the SVM name.
-name group_name specifies the name of the UNIX group to add the user to.
-username user_name specifies the user name of the user to add to the group.

Example The following command adds a user named max to a local UNIX group named eng on the SVM named vs1.

vs1::> vserver services name-service unix-group adduser -vserver vs1 -name eng -username max

Related concepts Limits for local UNIX users, groups, and group members on page 97

Related tasks Managing limits for local UNIX users and groups on page 97

Loading local UNIX users from a URI

As an alternative to manually creating individual local UNIX users in Storage Virtual Machines (SVMs), you can simplify the task by loading a list of local UNIX users into SVMs from a uniform resource identifier (URI) by using the vserver services name-service unix-user load-from-uri command.

About this task There is a default maximum limit of 32,768 local UNIX users in the cluster. The cluster administrator can modify this limit.

Steps

1. Create a file containing the list of local UNIX users you want to load.
   The file must contain user information in the UNIX /etc/passwd format (a sample file is shown after the example below):
   user_name: password: user_ID: group_ID: full_name
   The command discards the value of the password field and of the fields after the full_name field (home_directory and shell).
   The maximum supported file size is 2.5 MB.
   There is a default maximum limit of 32,768 local UNIX users in the cluster. The cluster administrator can modify this limit.

2. Verify that the list does not contain any duplicate information. If the list contains duplicate entries, loading the list fails with an error message.

3. Copy the file to a server. The server must be reachable by the storage system over HTTP, HTTPS, FTP, or FTPS.

4. Determine what the URI for the file is. The URI is the address you provide to the storage system to indicate where the file is located.

Example ftp://ftp.example.com/passwd

5. Load the file containing the list of local UNIX users into SVMs from the URI: vserver services name-service unix-user load-from-uri -vserver vserver_name -uri {ftp|http|ftps|https}://uri -overwrite {true|false}

-vserver vserver_name specifies the SVM name.
-uri {ftp|http|ftps|https}://uri specifies the URI to load the file from.
-overwrite {true|false} specifies whether to overwrite entries. The default is false.

Example The following command loads a list of local UNIX users from the URI ftp://ftp.example.com/passwd into the SVM named vs1. Existing users on the SVM are not overwritten by information from the URI.

node::> vserver services name-service unix-user load-from-uri -vserver vs1 -uri ftp://ftp.example.com/passwd -overwrite false
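The input file itself follows the standard /etc/passwd layout. The following is a sketch of what two entries might look like; the user names, IDs, and paths are illustrative, and the home directory and shell fields are discarded during the load:

johnm:x:123:100:John Miller:/home/johnm:/bin/sh
sueb:x:124:100:Sue Brown:/home/sueb:/bin/sh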

Related concepts Limits for local UNIX users, groups, and group members on page 97

Related tasks Managing limits for local UNIX users and groups on page 97

Creating a local UNIX group You can use the vserver services name-service unix-group create command to create UNIX groups that are local to the Storage Virtual Machine (SVM). Local UNIX groups are used with local UNIX users.

About this task There is a default maximum limit of 32,768 local UNIX user groups and group members combined in the cluster. The cluster administrator can modify this limit.

Step

1. Create a local UNIX group:
   vserver services name-service unix-group create -vserver vserver_name -name group_name -id integer

-vserver vserver_name specifies the SVM name.
-name group_name specifies the group name. The length of the group name must be 64 characters or less.
-id integer specifies the group ID.

Example The following command creates a local group named eng on the SVM named vs1. The group has the ID 101.

vs1::> vserver services name-service unix-group create -vserver vs1 -name eng -id 101

Related concepts Limits for local UNIX users, groups, and group members on page 97

Related tasks Managing limits for local UNIX users and groups on page 97

Related references Commands for managing local UNIX groups on page 96

Loading local UNIX groups from a URI As an alternative to manually creating individual local UNIX groups, you can also load a list of local UNIX groups into Storage Virtual Machines (SVMs) from a uniform resource identifier (URI) by using the vserver services name-service unix-group load-from-uri command.

About this task There is a default maximum limit of 32,768 local UNIX user groups and group members combined in the cluster. The cluster administrator can modify this limit.

Steps

1. Create a file containing the list of local UNIX groups you want to load.
   The file must contain group information in the UNIX /etc/group format (a sample file is shown after the example below):
   group_name: password: group_ID: comma_separated_list_of_users
   The command discards the value of the password field.
   The maximum supported file size is 1 MB.
   The maximum length of each line in the group file is 32,768 characters.

2. Verify that the list does not contain any duplicate information. The list must not contain duplicate entries, or else loading the list fails. If there are entries already present in the SVM, you must either set the -overwrite parameter to true to overwrite all existing entries with the new file, or ensure that the new file does not contain any entries that duplicate existing entries.

3. Copy the file to a server. The server must be reachable by the storage system over HTTP, HTTPS, FTP, or FTPS.

4. Determine what the URI for the file is. The URI is the address you provide to the storage system to indicate where the file is located.

Example ftp://ftp.example.com/group

5. Load the file containing the list of local UNIX groups into SVMs from the URI: vserver services name-service unix-group load-from-uri -vserver vserver_name -uri {ftp|http|ftps|https}://uri -overwrite {true|false}

-vserver vserver_name specifies the SVM name.
-uri {ftp|http|ftps|https}://uri specifies the URI to load the file from.
-overwrite {true|false} specifies whether to overwrite entries. The default is false. If you specify this parameter as true, Data ONTAP replaces the entire existing local UNIX group database of the specified SVM with the entries from the file you are loading.

Example The following command loads a list of local UNIX groups from the URI ftp://ftp.example.com/group into the SVM named vs1. Existing groups on the SVM are not overwritten by information from the URI.

vs1::> vserver services name-service unix-group load-from-uri -vserver vs1 -uri ftp://ftp.example.com/group -overwrite false
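The input file follows the standard /etc/group layout. A sketch of what two entries might look like; the group names, IDs, and member lists are illustrative:

eng:x:101:max,johnm
qa:x:102:sueb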

Related concepts Limits for local UNIX users, groups, and group members on page 97

Related tasks Managing limits for local UNIX users and groups on page 97

How name mappings are used

Data ONTAP uses name mapping to map CIFS identities to UNIX identities, Kerberos identities to UNIX identities, and UNIX identities to CIFS identities. It needs this information to obtain user credentials and provide proper file access regardless of whether users are connecting from an NFS client or a CIFS client.
Name mapping is usually required due to the multiprotocol nature of Data ONTAP, which supports CIFS and NFS client access to the same data. Data stored on Storage Virtual Machines (SVMs) with FlexVol volumes uses either UNIX- or NTFS-style permissions. To authorize a client, the credentials must match the security style. Consider the following scenarios:
If a CIFS client wants to access data with UNIX-style permissions, Data ONTAP cannot directly authorize the client because it cannot use CIFS credentials with UNIX-style permissions. To properly authorize the client request, Data ONTAP must first map the CIFS credentials to the appropriate UNIX credentials so that it can then use the UNIX credentials to compare them to the UNIX-style permissions.
If an NFS client wants to access data with NTFS-style permissions, Data ONTAP cannot directly authorize the client because it cannot use UNIX credentials with NTFS-style permissions. To properly authorize the client request, Data ONTAP must first map the UNIX credentials to the appropriate NTFS credentials so that it can then use the NTFS credentials to compare them to the NTFS-style permissions.
There are two exceptions where you do not have to use name mapping:
• You configure a pure UNIX environment and do not plan to use CIFS access or NTFS security style on volumes. In this scenario, name mapping is not required because Data ONTAP can use the UNIX credentials of the NFS clients to directly compare them to the UNIX-style permissions.
• You configure the default user to be used instead. In this scenario, name mapping is not required because instead of mapping every individual client credential, all client credentials are mapped to the same default user.
Note that you can use name mapping only for users, not for groups. It is not possible to map CIFS users to a group ID (GID), or UNIX users to a group in Active Directory (AD). Similarly, it is not possible to map a GID to a group or a user in AD, or an AD group to a UNIX UID or GID.
However, you can map a group of individual users to a specific user. For example, you can map all AD users whose names start or end with the word SALES to a specific UNIX user and to the user’s UID. As a result, you can rename certain users in AD and use regular expressions to effectively emulate group actions. This type of mapping also works in reverse.

Related concepts How name mapping works on page 77

How name mapping works

Data ONTAP goes through a number of steps when attempting to map user names. They include checking the local name mapping database and LDAP, trying the user name, and using the default user if configured.
When Data ONTAP has to map credentials for a user, it first checks the local name mapping database and LDAP server for an existing mapping. Whether it checks one or both and in which order is determined by the name service configuration of the Storage Virtual Machine (SVM).
• For Windows to UNIX mapping
  If no mapping is found, Data ONTAP checks whether the lowercase Windows user name is a valid user name in the UNIX domain. If this does not work, it uses the default UNIX user provided that it is configured. If the default UNIX user is not configured and Data ONTAP cannot obtain a mapping this way either, mapping fails and an error is returned.
• For UNIX to Windows mapping
  If no mapping is found, Data ONTAP tries to find a Windows account that matches the UNIX name in the CIFS domain. If this does not work, it uses the default CIFS user, provided that it is configured. If the default CIFS user is not configured and Data ONTAP cannot obtain a mapping this way either, mapping fails and an error is returned.

Related concepts How name mappings are used on page 76

Multidomain searches for UNIX user to Windows user name mappings Data ONTAP supports multidomain searches when mapping UNIX users to Windows users. All discovered trusted domains are searched for matches to the replacement pattern until a matching result is returned. Alternatively, you can configure a list of preferred trusted domains, which is used instead of the discovered trusted domain list and is searched in order until a matching result is returned.

How domain trusts affect UNIX user to Windows user name mapping searches To understand how multidomain user name mapping works, you must understand how domain trusts work with Data ONTAP. Active Directory trust relationships with the CIFS server's home domain can be a bidirectional trust or can be one of two types of unidirectional trusts, either an inbound trust or an outbound trust. The home domain is the domain to which the CIFS server on the Storage Virtual Machine (SVM) belongs.

• Bidirectional trust
  With bidirectional trusts, both domains trust each other. If the CIFS server's home domain has a bidirectional trust with another domain, the home domain can authenticate and authorize a user belonging to the trusted domain and vice versa. UNIX user to Windows user name mapping searches can be performed only on domains with bidirectional trusts between the home domain and the other domain.
• Outbound trust
  With an outbound trust, the home domain trusts the other domain. In this case, the home domain can authenticate and authorize a user belonging to the outbound trusted domain. A domain with an outbound trust with the home domain is not searched when performing UNIX user to Windows user name mapping searches.
• Inbound trust

With an inbound trust, the other domain trusts the CIFS server's home domain. In this case, the home domain cannot authenticate or authorize a user belonging to the inbound trusted domain. A domain with an inbound trust with the home domain is not searched when performing UNIX user to Windows user name mapping searches.

How wildcards (*) are used to configure multidomain searches for name mapping

Multidomain name mapping searches are facilitated by the use of wildcards in the domain section of the Windows user name. The following table illustrates how to use wildcards in the domain part of a name mapping entry to enable multidomain searches:

Pattern   Replacement        Result
root      *\\administrator   The UNIX user “root” is mapped to the user named “administrator”. All trusted domains are searched in order until the first matching user named “administrator” is found.
*         *\\*               Valid UNIX users are mapped to the corresponding Windows users. All trusted domains are searched in order until the first matching user with that name is found.

Note: The pattern *\\* is only valid for name mapping from UNIX to Windows, not the other way around.

How multidomain name searches are performed

You can choose one of two methods for determining the list of trusted domains used for multidomain name searches:
• Use the automatically discovered bidirectional trust list compiled by Data ONTAP
  ◦ The advantage to this method is that there is no management overhead and that the list is made of trusted domains that Data ONTAP has determined are valid.
  ◦ The disadvantage is that you cannot choose the order that the trusted domains are searched.
• Use the preferred trusted domain list that you compile
  ◦ The advantage to this method is that you can configure the list of trusted domains in the order that you want them searched.
  ◦ The disadvantage is that there is more management overhead and that the list might become outdated, with some listed domains not being valid, bidirectionally trusted domains.
If a UNIX user is mapped to a Windows user with a wildcard used for the domain section of the user name, the Windows user is looked up in all the trusted domains as follows:

• If a preferred trusted-domain list is configured, the mapped Windows user is looked up in this search list only, in order. The search ends as soon as the Windows user is found. If the same Windows user name exists in two different trusted domains, then the user belonging to the domain listed first in the preferred trusted-domain list is returned. If the Windows user is not found in any domains in the preferred list, an error is returned. If you want the home domain to be included in the search, it must be included in the preferred trusted-domain list.
• If a preferred list of trusted domains is not configured, then the Windows user is looked up in all the bidirectional trusted domains of the home domain. The search ends as soon as the Windows user is found. If the same Windows user name exists in two different trusted domains, the user belonging to the domain listed first in the automatically discovered trusted-domain list is returned. You cannot control the order of the trusted domains in

the automatically discovered list. If the Windows user is not found in any of the discovered trusted domains, the user is then looked up in the home domain.
• If there are no bidirectionally trusted domains for the home domain, the user is looked up in the home domain.
If a UNIX user is mapped to a Windows user without a domain section in the user name, the Windows user is looked up in the home domain. For more information about managing the list of bidirectional trusted domains, see the Clustered Data ONTAP File Access Management Guide for CIFS.

Name mapping conversion rules A Data ONTAP system keeps a set of conversion rules for each Storage Virtual Machine (SVM). Each rule consists of two pieces: a pattern and a replacement. Conversions start at the beginning of the appropriate list and perform a substitution based on the first matching rule. The pattern is a UNIX-style regular expression. The replacement is a string containing escape sequences representing subexpressions from the pattern, as in the UNIX sed program.

It is possible to allow NFS access to volumes with NTFS security style for users in a different domain from the one that the storage system belongs to, provided that the proper name mapping rule exists. If a user matches a rule to map to a user in a different domain, the domain must be trusted. To ensure successful mapping to users in other domains for both SMB and NFS access, there must be a bidirectional trust relationship between the domains. If a user matches a rule but the user cannot authenticate in the other domain because it is untrusted, the mapping fails.

The SVM automatically discovers all bidirectional trusted domains, which are used for multidomain user mapping searches. Alternatively, you can configure a list of preferred trusted domains that are used for name mapping searches instead of the list of automatically discovered trusted domains.

Regular expressions are not case-sensitive when mapping from Windows to UNIX. However, they are case-sensitive for Kerberos-to-UNIX and UNIX-to-Windows mappings. As an example, the following rule converts the Windows user named “jones” in the domain named “ENG” into the UNIX user named “jones”.

Pattern: ENG\\jones
Replacement: jones

Note that the backslash is a special character in regular expressions and must be escaped with another backslash. The caret (^), underscore (_), and ampersand (&) characters can be used as prefixes for digits in replacement patterns. These characters specify uppercase, lowercase, and initial-case transformations, respectively. For instance:
• If the initial pattern is (.+) and the replacement pattern is \1, then the string jOe is mapped to jOe (no change).
• If the initial pattern is (.+) and the replacement pattern is \_1, then the string jOe is mapped to joe.
• If the initial pattern is (.+) and the replacement pattern is \^1, then the string jOe is mapped to JOE.
• If the initial pattern is (.+) and the replacement pattern is \&1, then the string jOe is mapped to Joe.

If the character following a backslash-underscore (\_), backslash-caret (\^), or backslash-ampersand (\&) sequence is not a digit, then the character following the backslash is used verbatim. The following example converts any Windows user in the domain named “ENG” into a UNIX user with the same name in NIS.

Pattern: ENG\\(.+)
Replacement: \1

The double backslash (\\) matches a single backslash. The parentheses denote a subexpression but do not match any characters themselves. The period matches any single character, and the plus sign matches one or more of the preceding expression. In this example, you are matching ENG\ followed by one or more of any character. In the replacement, \1 refers to whatever the first subexpression matched. Assuming the Windows user ENG\jones, the replacement evaluates to jones; that is, the portion of the name following ENG\.

Note: If you are using the CLI, you must delimit all regular expressions with double quotation marks ("). For instance, to enter the regular expression (.+) in the CLI, type "(.+)" at the command prompt. Quotation marks are not required in the Web UI.

For further information about regular expressions, see your UNIX system administration documentation, the online UNIX documentation for sed or regex, or Mastering Regular Expressions, published by O'Reilly and Associates.

Creating a name mapping You can use the vserver name-mapping create command to create a name mapping. You use name mappings to enable Windows users to access UNIX security style volumes and the reverse.

About this task For each Storage Virtual Machine (SVM), Data ONTAP supports up to 1,024 name mappings for each direction.

Step 1. To create a name mapping, enter the following command: vserver name-mapping create -vserver vserver_name -direction {krb-unix|win-unix|unix-win} -position integer -pattern text -replacement text

-vserver vserver_name specifies the SVM name.
-direction {krb-unix|win-unix|unix-win} specifies the mapping direction.
-position integer specifies the desired position in the priority list of a new mapping.
-pattern text specifies the pattern to be matched, up to 256 characters in length.
-replacement text specifies the replacement pattern, up to 256 characters in length.
When Windows-to-UNIX mappings are created, any CIFS clients that have open connections to the Data ONTAP system at the time the new mappings are created must log out and log back in to see the new mappings.

Examples The following command creates a name mapping on the SVM named vs1. The mapping is a mapping from UNIX to Windows at position 1 in the priority list. The mapping maps the UNIX user johnd to the Windows user ENG\John.

vs1::> vserver name-mapping create -vserver vs1 -direction unix-win -position 1 -pattern johnd -replacement "ENG\\John"

The following command creates another name mapping on the SVM named vs1. The mapping is a mapping from Windows to UNIX at position 1 in the priority list. The mapping maps every CIFS user in the domain ENG to users in the NIS domain associated with the SVM.

vs1::> vserver name-mapping create -vserver vs1 -direction win-unix -position 1 -pattern "ENG\\(.+)" -replacement "\1"

Related references Commands for managing name mappings on page 95

Configuring the default user You can configure a default user to use if all other mapping attempts fail for a user, or if you do not want to map individual users between UNIX and Windows. Alternatively, if you want authentication of non-mapped users to fail, you should not configure a default user.

About this task For CIFS authentication, if you do not want to map each Windows user to an individual UNIX user, you can instead specify a default UNIX user. For NFS authentication, if you do not want to map each UNIX user to an individual Windows user, you can instead specify a default Windows user.

Step 1. Perform one of the following actions:

If you want to...                      Enter the following command...

Configure the default UNIX user        vserver cifs options modify -default-unix-user user_name

Configure the default Windows user     vserver nfs modify -default-win-user user_name
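For example, assuming an SVM named vs1 and existing accounts named pcuser (UNIX) and ENG\guest (Windows), which are illustrative values only, the default users could be set as follows:

vs1::> vserver cifs options modify -vserver vs1 -default-unix-user pcuser
vs1::> vserver nfs modify -vserver vs1 -default-win-user ENG\guest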

Related concepts How name mappings are used on page 76 How name mapping works on page 77

Related tasks Handling NFS access to NTFS volumes or qtrees for unknown UNIX users on page 87

Support for NFS over IPv6 Starting with Data ONTAP 8.2, NFS clients can access files on your storage system over an IPv6 network.

Enabling IPv6 for NFS If you want NFS clients to be able to access data on the storage system over an IPv6 network, Data ONTAP requires several configuration changes.

Steps 1. Enable IPv6 on the cluster. This step must be performed by a cluster administrator. For more information about enabling IPv6 on the cluster, see the Clustered Data ONTAP Network Management Guide.

2. Configure data LIFs for IPv6. This step must be performed by a cluster administrator. For more information about configuring LIFs, see the Clustered Data ONTAP Network Management Guide.

3. Configure NFS export policies and rules for NFS client access. If you want to match clients by IPv6 address, be sure to configure the export rules accordingly.
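For example, a sketch of an export rule that matches clients by IPv6 prefix might look like the following; the SVM name vs1, the policy name default, the documentation prefix 2001:db8::/64, and the rule index are all assumptions for illustration:

vs1::> vserver export-policy rule create -vserver vs1 -policyname default -clientmatch 2001:db8::/64 -rorule sys -rwrule sys -ruleindex 1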

Enabling access for Windows NFS clients Data ONTAP supports file access from Windows NFSv3 clients. This means that clients running Windows operating systems with NFSv3 support can now access files on NFSv3 exports on the cluster. To successfully use this functionality, you must properly configure the Storage Virtual Machine (SVM) and be aware of certain requirements and limitations.

Before you begin NFSv3 must be enabled on the SVM.

About this task By default, Windows NFSv3 client support is disabled. Windows NFSv3 clients do not support the network status monitor (NSM) protocol. As a result, Windows NFSv3 client sessions might experience disruptions during storage failover and volume move operations.

Steps 1. Modify the SVM to allow NFS clients to see the list of exports: vserver nfs modify -vserver vserver_name -showmount enabled

2. Enable Windows NFSv3 client support: vserver nfs modify -vserver vserver_name -v3-ms-dos-client enabled

3. On all SVMs that support Windows NFSv3 clients, disable the -enable-ejukebox and -v3-connection-drop parameters: vserver nfs modify -vserver vserver_name -enable-ejukebox false -v3-connection-drop disabled

Windows NFSv3 clients can now mount exports on the storage system.

4. Ensure that each Windows NFSv3 client uses hard mounts by specifying the -o mtype=hard option. This is required to ensure reliable mounts.

Example mount -o mtype=hard \\10.53.33.10\vol\vol1 z:\

Related tasks Enabling NFS clients to view exports of SVMs on page 85

Where to find information about setting up file access to Infinite Volumes For more information about how to set up NFS file access to Infinite Volumes, see the Clustered Data ONTAP Infinite Volumes Management Guide.

Managing file access using NFS

After you have enabled NFS on the Storage Virtual Machine (SVM) and configured it, there are a number of tasks you might want to perform to manage file access using NFS.

Enabling or disabling NFSv3 You can enable or disable NFSv3 by modifying the -v3 option. This allows file access for clients using the NFSv3 protocol. By default, NFSv3 is enabled.

Step 1. Perform one of the following actions:

If you want to... Enter the command...

Enable NFSv3 vserver nfs modify -vserver vserver_name -v3 enabled

Disable NFSv3 vserver nfs modify -vserver vserver_name -v3 disabled

Enabling or disabling NFSv4.0 You can enable or disable NFSv4.0 by modifying the -v4.0 option. This allows file access for clients using the NFSv4.0 protocol. By default, NFSv4.0 is disabled.

About this task NFSv4.0 is not supported for Storage Virtual Machines (SVMs) with Infinite Volume.

Step 1. Perform one of the following actions:

If you want to... Enter the following command...

Enable NFSv4.0 vserver nfs modify -vserver vserver_name -v4.0 enabled

Disable NFSv4.0 vserver nfs modify -vserver vserver_name -v4.0 disabled

Enabling or disabling NFSv4.1 You can enable or disable NFSv4.1 by modifying the -v4.1 option. This allows file access for clients using the NFSv4.1 protocol. By default, NFSv4.1 is disabled.

Step 1. Perform one of the following actions:

If you want to... Enter the following command...

Enable NFSv4.1 vserver nfs modify -vserver vserver_name -v4.1 enabled

Disable NFSv4.1 vserver nfs modify -vserver vserver_name -v4.1 disabled

Enabling or disabling pNFS pNFS improves performance by allowing NFS clients to perform read/write operations on storage devices directly and in parallel, bypassing the NFS server as a potential bottleneck. To enable or disable pNFS (parallel NFS), you can modify the -v4.1-pnfs option. By default, pNFS is enabled.

Before you begin NFSv4.1 support is required to be able to use pNFS. If you want to enable pNFS, you must first disable NFS referrals. They cannot both be enabled at the same time. If you use pNFS with Kerberos on SVMs, you must enable Kerberos on every LIF on the SVM.

Step 1. Perform one of the following actions:

If you want to... Enter the command...

Enable pNFS vserver nfs modify -vserver vserver_name -v4.1-pnfs enabled

Disable pNFS vserver nfs modify -vserver vserver_name -v4.1-pnfs disabled

Related tasks Enabling or disabling NFSv4 referrals on page 121 Enabling or disabling NFSv4.1 on page 84

Enabling NFS clients to view exports of SVMs NFS clients can use the showmount -e command to see a list of exports available on an NFS server. This can help users identify the file system they want to mount on the NFS server. By default, Data ONTAP does not allow NFS clients to view the exports list of NFS servers, but you can enable this functionality individually for each Storage Virtual Machine (SVM).

Before you begin NFSv3 must be enabled on the SVM. The client must have access to the SVM over port 635.

About this task The output on the client displays the list of exports for volumes and qtrees for the SVM. However, it does not display the client access list and shows everyone as having access. To check client access to exports, you should use the vserver export-policy check-access command instead.

Data ONTAP caches the list of exports. If the export list changes, for example due to an unmounted volume on the cluster, the export path remains in the cache until it expires or is flushed by the administrator using the vserver export-policy cache flush command.

Step 1. Modify the SVM to allow NFS clients to see the list of exports: vserver nfs modify -vserver vserver_name -showmount enabled

Example The following command enables the showmount feature on the SVM named vs1:

cluster1::> vserver nfs modify -vserver vs1 -showmount enabled

The following command entered on an NFS client displays the list of exports on an NFS server with the IP address 10.63.21.9:

# showmount -e 10.63.21.9
Export list for 10.63.21.9:
/unix        (everyone)
/unix/unix1  (everyone)
/unix/unix2  (everyone)
/            (everyone)

Related tasks Checking client access to exports on page 49 Flushing export policy caches on page 106

Controlling NFS access over TCP and UDP You can enable or disable NFS access to Storage Virtual Machines (SVMs) over TCP and UDP by modifying the -tcp and -udp parameters, respectively. This enables you to control whether NFS clients can access data over TCP or UDP in your environment.

About this task These parameters only apply to NFS. They do not affect auxiliary protocols. For example, if NFS over TCP is disabled, mount operations over TCP still succeed. To completely block TCP or UDP traffic, you can use export policy rules.

Step 1. Perform one of the following actions:

If you want NFS access to be...    Enter the command...

Enabled over TCP vserver nfs modify -vserver vserver_name -tcp enabled

Disabled over TCP vserver nfs modify -vserver vserver_name -tcp disabled

Enabled over UDP vserver nfs modify -vserver vserver_name -udp enabled


Disabled over UDP vserver nfs modify -vserver vserver_name -udp disabled
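As an illustration, the following sketch disables NFS over UDP on an SVM named vs1 (an assumed name) and then displays the resulting protocol settings; the field names tcp and udp in the show command are assumptions based on the corresponding parameter names:

vs1::> vserver nfs modify -vserver vs1 -udp disabled
vs1::> vserver nfs show -vserver vs1 -fields tcp,udp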

Related concepts NFSv3 performance improvement by modifying the TCP maximum read and write size on page 124

Controlling NFS requests from nonreserved ports You can reject NFS mount requests from nonreserved ports by enabling the -mount-rootonly option. To reject all NFS requests from nonreserved ports, you can enable the -nfs-rootonly option.

About this task By default, the option -mount-rootonly is enabled. By default, the option -nfs-rootonly is disabled. These options do not apply to the NULL procedure.

Step 1. Perform one of the following actions:

If you want to...                                   Enter the command...

Allow NFS mount requests from nonreserved ports     vserver nfs modify -vserver vserver_name -mount-rootonly disabled

Reject NFS mount requests from nonreserved ports    vserver nfs modify -vserver vserver_name -mount-rootonly enabled

Allow all NFS requests from nonreserved ports       vserver nfs modify -vserver vserver_name -nfs-rootonly disabled

Reject all NFS requests from nonreserved ports      vserver nfs modify -vserver vserver_name -nfs-rootonly enabled

Handling NFS access to NTFS volumes or qtrees for unknown UNIX users If Data ONTAP cannot identify UNIX users attempting to connect to volumes or qtrees with NTFS security style, it cannot explicitly map those users to Windows users. You can configure Data ONTAP to either deny access to such users for stricter security or map them to a default Windows user to ensure a minimum level of access for all users.

Before you begin A default Windows user must be configured if you want to enable this option.

About this task If a UNIX user tries to access volumes or qtrees with NTFS security style, the UNIX user must first be mapped to a Windows user so that Data ONTAP can properly evaluate the NTFS permissions.

However, if Data ONTAP cannot look up the name of the UNIX user in the configured user information name service sources, it cannot explicitly map the UNIX user to a specific Windows user. You can decide how to handle such unknown UNIX users in the following ways:
• Deny access to unknown UNIX users. This enforces stricter security by requiring explicit mapping for all UNIX users to gain access to NTFS volumes or qtrees.
• Map unknown UNIX users to a default Windows user. This provides less security but more convenience by ensuring that all users get a minimum level of access to NTFS volumes or qtrees through a default Windows user.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform one of the following actions:

If you want the default Windows user for unknown UNIX users...    Enter the command...

Enabled     vserver nfs modify -vserver vserver_name -map-unknown-uid-to-default-windows-user enabled

Disabled    vserver nfs modify -vserver vserver_name -map-unknown-uid-to-default-windows-user disabled

3. Return to the admin privilege level: set -privilege admin
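The following sketch shows the full sequence for mapping unknown UNIX users to the default Windows user on an SVM named vs1 (the SVM name is an assumption); a default Windows user must already be configured:

vs1::> set -privilege advanced
vs1::*> vserver nfs modify -vserver vs1 -map-unknown-uid-to-default-windows-user enabled
vs1::*> set -privilege admin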

Related concepts How name mappings are used on page 76 How name mapping works on page 77

Related tasks Configuring the default user on page 81

Considerations for clients that mount NFS exports using a nonreserved port The -mount-rootonly option must be disabled on a storage system that needs to support clients that mount NFS exports using a nonreserved port even when the user is logged in as root. Such clients include Hummingbird clients and Solaris NFS/IPv6 clients. If the -mount-rootonly option is enabled, Data ONTAP does not allow NFS clients that use nonreserved ports, meaning ports with numbers higher than 1,023, to mount NFS exports.

Performing stricter access checking for netgroups by verifying domains By default, Data ONTAP performs an additional verification when evaluating client access for a netgroup. The additional check ensures that the client's domain matches the domain configuration of the Storage Virtual Machine (SVM). Otherwise, Data ONTAP denies client access.

About this task When Data ONTAP evaluates export policy rules for client access and an export policy rule contains a netgroup, Data ONTAP must determine whether a client's IP address belongs to the netgroup. For this purpose, Data ONTAP converts the client's IP address to a host name using DNS and obtains a fully qualified domain name (FQDN).

If the netgroup file only lists a short name for the host and the short name for the host exists in multiple domains, it is possible for a client from a different domain to obtain access without this check. To prevent this, Data ONTAP compares the domain that was returned from DNS for the host against the list of DNS domain names configured for the SVM. If it matches, access is allowed. If it does not match, access is denied.

This verification is enabled by default. You can manage it by modifying the -netgroup-dns-domain-search parameter, which is available at the advanced privilege level.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform the desired action:

If you want domain verification for netgroups to be...    Enter...

Enabled     vserver nfs modify -vserver vserver_name -netgroup-dns-domain-search enabled

Disabled    vserver nfs modify -vserver vserver_name -netgroup-dns-domain-search disabled

3. Set the privilege level to admin: set -privilege admin
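For example, the following sketch disables the additional domain verification on an SVM named vs1 (an assumed name), which you might do temporarily while troubleshooting netgroup access:

vs1::> set -privilege advanced
vs1::*> vserver nfs modify -vserver vs1 -netgroup-dns-domain-search disabled
vs1::*> set -privilege admin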

Securing file access by using Storage-Level Access Guard In addition to securing access by using native file-level and export and share security, you can configure Storage-Level Access Guard, a third layer of security applied by Data ONTAP at the volume level. Storage-Level Access Guard applies to access from all NAS protocols to the storage object to which it is applied.

Storage-Level Access Guard behavior • Storage-Level Access Guard applies to all the files or all the directories in a storage object.

Because all files or directories in a volume are subject to Storage-Level Access Guard settings, inheritance through propagation is not required.
• You can configure Storage-Level Access Guard to apply to files only, to directories only, or to both files and directories within a volume.
◦ File and directory security Applies to every directory and file within the storage object. This is the default setting.
◦ File security Applies to every file within the storage object. Applying this security does not affect access to, or auditing of, directories.
◦ Directory security Applies to every directory within the storage object. Applying this security does not affect access to, or auditing of, files.
• Storage-Level Access Guard is used to restrict permissions. It never grants extra access permissions.
• If you view the security settings on a file or directory from an NFS or SMB client, you do not see the Storage-Level Access Guard security. It is applied at the storage object level and stored in the metadata used to determine the effective permissions.
• Storage-level security cannot be revoked from a client, even by a system (Windows or UNIX) administrator. It is designed to be modified by storage administrators only.
• You can apply Storage-Level Access Guard to volumes with NTFS or mixed security style.
• You can apply Storage-Level Access Guard to volumes with UNIX security style as long as the Storage Virtual Machine (SVM) containing the volume has a CIFS server configured.
• When volumes are mounted under a volume junction path and Storage-Level Access Guard is present on that path, it is not propagated to volumes mounted under it.
• Storage-Level Access Guard is replicated with SnapMirror data replication and with Storage Virtual Machine (SVM) replication.
• There is special dispensation for virus scanners. Exceptional access is allowed to these servers to screen files and directories, even if Storage-Level Access Guard denies access to the object.
• FPolicy notifications are not sent if access is denied because of Storage-Level Access Guard.

Storage-Level Access Guard for NFS access Only NTFS access permissions are supported for Storage-Level Access Guard. For Data ONTAP to perform security checks on UNIX users for access to data on volumes where Storage-Level Access Guard is applied, the UNIX user must map to a Windows user on the SVM that owns the volume. Storage-Level Access Guard does not apply to UNIX-only SVMs that do not contain CIFS servers.

Order of access checks Access to a file or directory is determined by the combined effect of the export or share permissions, the Storage-Level Access Guard permissions set on volumes, and the native file permissions applied to files and directories. All levels of security are evaluated to determine the effective permissions for a file or directory. The security access checks are performed in the following order:

1. SMB share or NFS export-level permissions

2. Storage-Level Access Guard

3. NTFS file/folder access control lists (ACLs), NFSv4 ACLs, or UNIX mode bits

For more information about configuring Storage-Level Access Guard, see the Clustered Data ONTAP File Access Management Guide for CIFS.

Modifying ports used for NFSv3 services The NFS server on the storage system uses services such as mount daemon and Network Lock Manager to communicate with NFS clients over specific default network ports. In most NFS environments the default ports work correctly and do not require modification, but if you want to use different NFS network ports in your NFSv3 environment, you can do so.

Before you begin Changing NFS ports on the storage system requires that all NFS clients reconnect to the system, so you should communicate this information to your users in advance of making the change.

About this task You can set the ports used by the NFS mount daemon, Network Lock Manager, Network Status Monitor, and NFS quota daemon services for each Storage Virtual Machine (SVM). The port number change affects NFS clients accessing data over both TCP and UDP. Ports for NFSv4 and NFSv4.1 cannot be changed.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Disable access to NFS: vserver nfs modify -vserver vserver_name -access false

3. Set the NFS port for the specific NFS service: vserver nfs modify -vserver vserver_name nfs_port_parameter port_number

NFS port parameter Description Default port

-mountd-port NFS mount daemon 635

-nlm-port Network Lock Manager 4045

-nsm-port Network Status Monitor 4046

-rquotad-port NFS quota daemon 4049

Besides the default port, the allowed range of port numbers is 1024 through 65535. Each NFS service must use a unique port.

4. Enable access to NFS: vserver nfs modify -vserver vserver_name -access true

5. Use the network connections listening show command to verify the port number changes.

6. Return to the admin privilege level:

set -privilege admin

Example The following commands set the NFS Mount Daemon port to 1113 on the SVM named vs1:

vs1::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel. Do you want to continue? {y|n}: y

vs1::*> vserver nfs modify -vserver vs1 -access false

vs1::*> vserver nfs modify -vserver vs1 -mountd-port 1113

vs1::*> vserver nfs modify -vserver vs1 -access true

vs1::*> network connections listening show
Vserver Name    Interface Name:Local Port    Protocol/Service
--------------- ---------------------------- ----------------
Node: cluster1-01
Cluster         cluster1-01_clus_1:7700      TCP/ctlopcp
vs1             data1:4046                   TCP/sm
vs1             data1:4046                   UDP/sm
vs1             data1:4045                   TCP/nlm-v4
vs1             data1:4045                   UDP/nlm-v4
vs1             data1:1113                   TCP/mount
vs1             data1:1113                   UDP/mount
....

vs1::*> set -privilege admin

Commands for managing NFS servers There are specific Data ONTAP commands for managing NFS servers.

If you want to... Use this command...

Create an NFS server vserver nfs create

Display NFS servers vserver nfs show

Modify an NFS server vserver nfs modify

Delete an NFS server vserver nfs delete

See the man page for each command for more information.

Related tasks Creating an NFS server on page 32

Troubleshooting name service issues When clients experience access failures due to name service issues, you can use the vserver services name-service getxxbyyy command family to manually perform various name service lookups and examine the details and results of the lookup to help with troubleshooting.

About this task
• For each command, you can specify the following:
◦ Name of the node or Storage Virtual Machine (SVM) to perform the lookup on. This enables you to test name service lookups for a specific node or SVM to narrow the search for a potential name service configuration issue.
◦ Whether to show the source used for the lookup. This enables you to check whether the correct source was used.
• Data ONTAP selects the service for performing the lookup based on the configured name service switch order.
• These commands are available at the advanced privilege level.

Steps 1. Perform one of the following actions:

To retrieve the...                           Use the command...

IP address of a host name                    vserver services name-service getxxbyyy getaddrinfo
                                             vserver services name-service getxxbyyy gethostbyname (IPv4 addresses only)

Members of a group by group ID               vserver services name-service getxxbyyy getgrbygid

Members of a group by group name             vserver services name-service getxxbyyy getgrbyname

List of groups a user belongs to             vserver services name-service getxxbyyy getgrlist

Host name of an IP address                   vserver services name-service getxxbyyy getnameinfo
                                             vserver services name-service getxxbyyy gethostbyaddr (IPv4 addresses only)

User information by user name                vserver services name-service getxxbyyy getpwbyname
                                             You can test name resolution of RBAC users by specifying the -use-rbac parameter as true.

User information by user ID                  vserver services name-service getxxbyyy getpwbyuid
                                             You can test name resolution of RBAC users by specifying the -use-rbac parameter as true.

Netgroup membership of a client              vserver services name-service getxxbyyy netgrp

Netgroup membership of a client using        vserver services name-service getxxbyyy netgrpbyhost
netgroup-by-host search

Example The following example shows a DNS lookup test for the SVM vs1 by attempting to obtain the IP address for the host acast1.eng.example.com:

cluster1::*> vserver services name-service getxxbyyy getaddrinfo -vserver vs1 -hostname acast1.eng.example.com -address-family all -show-source true
Source used for lookup: DNS
Host name: acast1.eng.example.com
Canonical Name: acast1.eng.example.com
IPv4: 10.72.8.29

The following example shows a NIS lookup test for the SVM vs1 by attempting to retrieve user information for a user with the UID 501768:

cluster1::*> vserver services name-service getxxbyyy getpwbyuid -vserver vs1 -userID 501768 -show-source true
Source used for lookup: NIS
pw_name: jsmith
pw_passwd: $1$y8rA4XX7$/DDOXAvc2PC/IsNFozfIN0
pw_uid: 501768
pw_gid: 501768
pw_gecos:
pw_dir: /home/jsmith
pw_shell: /bin/bash

The following example shows an LDAP lookup test for the SVM vs1 by attempting to retrieve user information for a user with the name ldap1:

cluster1::*> vserver services name-service getxxbyyy getpwbyname -vserver vs1 -username ldap1 -use-rbac false -show-source true
Source used for lookup: LDAP
pw_name: ldap1
pw_passwd: {crypt}JSPM6yc/ilIX6
pw_uid: 10001
pw_gid: 3333
pw_gecos: ldap1 user
pw_dir: /u/ldap1
pw_shell: /bin/csh

The following example shows a netgroup lookup test for the SVM vs1 by attempting to find out whether the client dnshost0 is a member of the netgroup lnetgroup136:

cluster1::*> vserver services name-service getxxbyyy netgrp -vserver vs1 -netgroup lnetgroup136 -client dnshost0 -show-source true
Source used for lookup: LDAP
dnshost0 is a member of lnetgroup136

2. Analyze the results of the test you performed and take the necessary action.

Example

If the...                                                      Check the...

Host name or IP address lookup failed or yielded               DNS configuration
incorrect results

Lookup queried an incorrect source                             Name service switch configuration

User or group lookup failed or yielded incorrect results       Name service switch configuration
                                                               Source configuration (local files, NIS domain, LDAP client)
                                                               Network configuration (for example, LIFs and routes)

Related tasks Configuring the name service switch table on page 59

Related information NetApp Technical Report 4379: Name Services Best Practice Guide Clustered Data ONTAP

Commands for managing name service switch entries You can manage name service switch entries by creating, displaying, modifying, and deleting them.

If you want to... Use this command...

Create a name service switch entry vserver services name-service ns-switch create

Display name service switch entries vserver services name-service ns-switch show

Modify a name service switch entry vserver services name-service ns-switch modify

Delete a name service switch entry vserver services name-service ns-switch delete
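As an illustration only, the following sketch configures host lookups on an SVM named vs1 to consult local files first and then DNS; the SVM name and source order are assumptions for your environment:

vs1::> vserver services name-service ns-switch create -vserver vs1 -database hosts -sources files,dns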

See the man page for each command for more information.

Related tasks Configuring the name service switch table on page 59

Related information NetApp Technical Report 4379: Name Services Best Practice Guide Clustered Data ONTAP

Commands for managing name mappings There are specific Data ONTAP commands for managing name mappings.

If you want to... Use this command...

Create a name mapping vserver name-mapping create

If you want to... Use this command...

Insert a name mapping at a specific position vserver name-mapping insert

Display name mappings vserver name-mapping show

Exchange the position of two name mappings vserver name-mapping swap

Modify a name mapping vserver name-mapping modify

Delete a name mapping vserver name-mapping delete

See the man page for each command for more information.

Related tasks Creating a name mapping on page 80

Commands for managing local UNIX users There are specific Data ONTAP commands for managing local UNIX users.

If you want to... Use this command...

Create a local UNIX user vserver services name-service unix-user create

Load local UNIX users from a URI vserver services name-service unix-user load-from-uri

Display local UNIX users vserver services name-service unix-user show

Modify a local UNIX user vserver services name-service unix-user modify

Delete a local UNIX user vserver services name-service unix-user delete
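For example, a local UNIX user could be created with a sketch like the following; the SVM name vs1, the user name johnd, the UID, and the primary GID are illustrative assumptions:

vs1::> vserver services name-service unix-user create -vserver vs1 -user johnd -id 1001 -primary-gid 100 -full-name "John Doe"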

See the man page for each command for more information.

Related tasks Creating a local UNIX user on page 71

Commands for managing local UNIX groups There are specific Data ONTAP commands for managing local UNIX groups.

If you want to... Use this command...

Create a local UNIX group vserver services name-service unix-group create

Add a user to a local UNIX group vserver services name-service unix-group adduser

Load local UNIX groups from a URI vserver services name-service unix-group load-from-uri

If you want to... Use this command...

Display local UNIX groups vserver services name-service unix-group show

Modify a local UNIX group vserver services name-service unix-group modify

Delete a user from a local UNIX group vserver services name-service unix-group deluser

Delete a local UNIX group vserver services name-service unix-group delete
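For example, a local UNIX group could be created and a user added to it with a sketch like the following; the SVM name vs1, the group name, the GID, and the user name are illustrative assumptions:

vs1::> vserver services name-service unix-group create -vserver vs1 -name engineering -id 2001
vs1::> vserver services name-service unix-group adduser -vserver vs1 -name engineering -username johnd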

See the man page for each command for more information.

Related tasks Creating a local UNIX group on page 74

Limits for local UNIX users, groups, and group members Beginning with Data ONTAP 8.3, there are limits for the maximum number of local UNIX users and groups in the cluster, and commands to manage these limits. These limits can help avoid performance issues by preventing administrators from creating too many local UNIX users and groups in the cluster. There is a limit for the combined number of local UNIX groups and group members, and a separate limit for local UNIX users. The limits are cluster-wide. Each of these limits is set to a default value that you can modify, but not set higher than a preassigned hard limit.

Database                               Default limit    Hard limit
Local UNIX users                       32,768           65,536
Local UNIX groups and group members    32,768           65,536

Managing limits for local UNIX users and groups There are specific Data ONTAP commands for managing limits for local UNIX users and groups. Cluster administrators can use these commands to troubleshoot performance issues in the cluster believed to be related to excessive numbers of local UNIX users and groups.

About this task These commands are available to the cluster administrator at the advanced privilege level.

Step 1. Perform one of the following actions:

If you want to...                                     Use the command...

Display information about local UNIX user limits      vserver services unix-user max-limit show

Display information about local UNIX group limits     vserver services unix-group max-limit show

Modify local UNIX user limits                         vserver services unix-user max-limit modify

Modify local UNIX group limits                        vserver services unix-group max-limit modify

See the man page for each command for more information.

Commands for managing local netgroups You can manage local netgroups by loading them from a URI, verifying their status across nodes, displaying them, and deleting them.

If you want to... Use the command...

Load netgroups from a URI vserver services name-service netgroup load

Verify the status of netgroups across nodes vserver services name-service netgroup status
Available at the advanced privilege level and higher.

Display local netgroups vserver services name-service netgroup file show

Delete a local netgroup vserver services name-service netgroup file delete
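For example, a netgroup file could be loaded from an internal web server with a sketch like the following; the SVM name vs1 and the URI are assumptions and must point to a netgroup file that is reachable from the cluster:

vs1::> vserver services name-service netgroup load -vserver vs1 -source http://intranet.example.com/netgroup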

See the man page for each command for more information.

Related tasks Loading netgroups into SVMs on page 43 Checking whether a client IP address is a member of a netgroup on page 107 Performing stricter access checking for netgroups by verifying domains on page 89

Commands for managing NIS domain configurations There are specific Data ONTAP commands for managing NIS domain configurations.

If you want to... Use this command...

Create a NIS domain configuration vserver services name-service nis-domain create

Display NIS domain configurations vserver services name-service nis-domain show

Display binding status of a NIS domain configuration vserver services name-service nis-domain show-bound

Display NIS statistics vserver services name-service nis-domain show-statistics
Available at the advanced privilege level and higher.

Clear NIS statistics vserver services name-service nis-domain clear-statistics
Available at the advanced privilege level and higher.

If you want to... Use this command...

Modify a NIS domain configuration vserver services name-service nis-domain modify

Delete a NIS domain configuration vserver services name-service nis-domain delete
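As an illustration only, a NIS domain configuration might be created and activated with a sketch like the following; the SVM name, NIS domain name, and server address are assumptions, and you should confirm the exact parameter names against the vserver services name-service nis-domain create man page for your release:

vs1::> vserver services name-service nis-domain create -vserver vs1 -domain nis.example.com -active true -servers 192.0.2.20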

See the man page for each command for more information.

Related tasks Creating a NIS domain configuration on page 70

Commands for managing LDAP client configurations There are specific Data ONTAP commands for managing LDAP client configurations. Note: SVM administrators cannot modify or delete LDAP client configurations that were created by cluster administrators.

If you want to... Use this command...

Create an LDAP client configuration vserver services name-service ldap client create

Display LDAP client configurations vserver services name-service ldap client show

Modify an LDAP client configuration vserver services name-service ldap client modify

Change the LDAP client BIND password vserver services name-service ldap client modify-bind-password

Delete an LDAP client configuration vserver services name-service ldap client delete
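As an illustration only, an LDAP client configuration might be created with a sketch like the following; the SVM name, configuration name, server address, schema, and base DN are assumptions for your directory environment:

vs1::> vserver services name-service ldap client create -vserver vs1 -client-config corp_ldap -servers 192.0.2.30 -schema RFC-2307 -base-dn "dc=example,dc=com"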

See the man page for each command for more information.

Related concepts Configuration options for LDAP directory searches on page 64

Related tasks Creating an LDAP client configuration on page 65

Commands for managing LDAP configurations There are specific Data ONTAP commands for managing LDAP configurations.

If you want to... Use this command...

Create an LDAP configuration vserver services name-service ldap create

Display LDAP configurations vserver services name-service ldap show

If you want to... Use this command...

Modify an LDAP configuration vserver services name-service ldap modify

Delete an LDAP configuration vserver services name-service ldap delete

See the man page for each command for more information.

Related tasks Enabling LDAP on SVMs on page 68

Commands for managing LDAP client schema templates There are specific Data ONTAP commands for managing LDAP client schema templates. Note: SVM administrators cannot modify or delete LDAP client schemas that were created by cluster administrators.

If you want to... Use this command...

Copy an existing LDAP schema template vserver services name-service ldap client schema copy
Available at the advanced privilege level and higher.

Display LDAP schema templates vserver services name-service ldap client schema show

Modify an LDAP schema template vserver services name-service ldap client schema modify
Available at the advanced privilege level and higher.

Delete an LDAP schema template vserver services name-service ldap client schema delete
Available at the advanced privilege level and higher.

See the man page for each command for more information.

Related tasks Creating a new LDAP client schema on page 62

Commands for managing NFS Kerberos interface configurations There are specific Data ONTAP commands for managing NFS Kerberos interface configurations.

If you want to... Use this command...

Enable NFS Kerberos on a LIF vserver nfs kerberos interface enable

Display NFS Kerberos interface configurations vserver nfs kerberos interface show

If you want to... Use this command...

Modify an NFS Kerberos interface configuration vserver nfs kerberos interface modify

Disable NFS Kerberos on a LIF vserver nfs kerberos interface disable
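As an illustration only, Kerberos might be enabled on a data LIF with a sketch like the following; the SVM name, LIF name, service principal name, and realm are assumptions, and the command can prompt for credentials that are authorized to create the service principal in your realm:

vs1::> vserver nfs kerberos interface enable -vserver vs1 -lif data1 -spn nfs/[email protected]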

See the man page for each command for more information.

Related tasks Creating an NFS Kerberos configuration on page 56

Commands for managing NFS Kerberos realm configurations There are specific Data ONTAP commands for managing NFS Kerberos realm configurations.

If you want to... Use this command...

Create an NFS Kerberos realm configuration vserver nfs kerberos realm create

Display NFS Kerberos realm configurations vserver nfs kerberos realm show

Modify an NFS Kerberos realm configuration vserver nfs kerberos realm modify

Delete an NFS Kerberos realm configuration vserver nfs kerberos realm delete

See the man page for each command for more information.

Related tasks Creating an NFS Kerberos realm configuration on page 55

Commands for managing export policies There are specific Data ONTAP commands for managing export policies.

If you want to... Use this command...

Display information about export policies vserver export-policy show

Rename an export policy vserver export-policy rename

Copy an export policy vserver export-policy copy

Delete an export policy vserver export-policy delete

See the man page for each command for more information.

Related tasks Creating an export policy on page 40 Checking client access to exports on page 49

Commands for managing export rules There are specific Data ONTAP commands for managing export rules.

If you want to... Use this command...

Create an export rule vserver export-policy rule create

Display information about export rules vserver export-policy rule show

Modify an export rule vserver export-policy rule modify

Delete an export rule vserver export-policy rule delete

Note: If you have configured multiple identical export rules matching different clients, be sure to keep them in sync when managing export rules.

See the man page for each command for more information.

Related tasks Adding a rule to an export policy on page 40 Checking client access to exports on page 49

Fencing or unfencing exports in clustered Data ONTAP With Data ONTAP operating in 7-Mode, you could fence or unfence clients using the exportfs -b command. Although there is no direct equivalent command in clustered Data ONTAP, you can accomplish the same task by creating export policy rules.

About this task Fencing means restricting the access of one or more clients to an export to read-only by removing any read-write access and placing the clients at the top of the read-only access list. Unfencing means allowing guaranteed read-write access to one or more clients to an export by placing the client at the top of the read-write access list.

Steps 1. Identify the export policy for the volume that you want to fence or unfence the client from: volume show -vserver svm_name -volume volume_name

2. Create a new export policy rule by using the vserver export-policy rule create command. Include the appropriate parameters depending on whether you want to fence or unfence clients:

To... Use these parameters in the export rule...

Fence clients -rwrule none -rorule any -ruleindex 1

Unfence clients -rwrule any -rorule any -ruleindex 1

After you add the new rule to the export policy of the export, the clients are fenced or unfenced, equivalent to the 7-Mode functionality.
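For example, a client might be fenced (restricted to read-only access) with a sketch like the following; the SVM name vs1, the policy name data_policy, and the client IP address are illustrative assumptions:

vs1::> vserver export-policy rule create -vserver vs1 -policyname data_policy -clientmatch 192.0.2.87 -rwrule none -rorule any -ruleindex 1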

Configuring the NFS credential cache Data ONTAP uses a credential cache to store information needed for user authentication for NFS export access to provide faster access and improve performance. You can configure how long information is stored in the credential cache to customize it for your environment.

How the NFS credential cache works When an NFS user requests access to NFS exports on the storage system, Data ONTAP must retrieve the user credentials either from external name servers or from local files to authenticate the user. Data ONTAP then stores these credentials in an internal credential cache for later reference. Understanding how the NFS credential cache works enables you to handle potential performance and access issues.

Without the credential cache, Data ONTAP would have to query name services every time an NFS user requested access. On a busy storage system that is accessed by many users, this can quickly lead to serious performance problems, causing unwanted delays or even denials to NFS client access. With the credential cache, Data ONTAP retrieves the user credentials and then stores them for a predetermined amount of time for quick and easy access should the NFS client send another request. This method offers the following advantages:
• It eases the load on the storage system by handling fewer requests to external name servers (such as NIS or LDAP).
• It eases the load on external name servers by sending fewer requests to them.
• It speeds up user access by eliminating the wait time for obtaining credentials from external sources before the user can be authenticated.
Data ONTAP stores both positive and negative credentials in the credential cache. Positive credentials means that the user was authenticated and granted access. Negative credentials means that the user was not authenticated and was denied access.

By default, Data ONTAP stores positive credentials for 24 hours; that is, after initially authenticating a user, Data ONTAP uses the cached credentials for any access requests by that user for 24 hours. If the user requests access after 24 hours, the cycle starts over: Data ONTAP discards the cached credentials and obtains the credentials again from the appropriate name service source. If the credentials changed on the name server during the previous 24 hours, Data ONTAP caches the updated credentials for use for the next 24 hours.

By default, Data ONTAP stores negative credentials for two hours; that is, after initially denying access to a user, Data ONTAP continues to deny any access requests by that user for two hours. If the user requests access after two hours, the cycle starts over: Data ONTAP obtains the credentials again from the appropriate name service source. If the credentials changed on the name server during the previous two hours, Data ONTAP caches the updated credentials for use for the next two hours.

Reasons for modifying the NFS credential cache time-to-live There are several scenarios when modifying the NFS credential cache time-to-live (TTL) can help resolve issues. You should understand what these scenarios are as well as the consequences of making these modifications.

Reasons Consider changing the default TTL under the following circumstances:

Issue: The name servers in your environment are experiencing performance degradation due to a high load of requests from Data ONTAP.
Remedial action: Increase the TTL for cached positive and negative credentials to reduce the number of requests from Data ONTAP to name servers.

Issue: The name server administrator made changes to allow access to NFS users that were previously denied.
Remedial action: Decrease the TTL for cached negative credentials to reduce the time NFS users have to wait for Data ONTAP to request fresh credentials from external name servers so they can get access.

Issue: The name server administrator made changes to deny access to NFS users that were previously allowed.
Remedial action: Reduce the TTL for cached positive credentials to reduce the time before Data ONTAP requests fresh credentials from external name servers so the NFS users are now denied access.

Consequences You can modify the length of time individually for caching positive and negative credentials. However, you should be aware of both the advantages and disadvantages of doing so.

If you increase the positive credential cache time:
The advantage is that Data ONTAP sends requests for credentials to name servers less frequently, reducing the load on name servers.
The disadvantage is that it takes longer to deny access to NFS users that previously were allowed access but are not anymore.

If you decrease the positive credential cache time:
The advantage is that it takes less time to deny access to NFS users that previously were allowed access but are not anymore.
The disadvantage is that Data ONTAP sends requests for credentials to name servers more frequently, increasing the load on name servers.

If you increase the negative credential cache time:
The advantage is that Data ONTAP sends requests for credentials to name servers less frequently, reducing the load on name servers.
The disadvantage is that it takes longer to grant access to NFS users that previously were not allowed access but are now.

If you decrease the negative credential cache time:
The advantage is that it takes less time to grant access to NFS users that previously were not allowed access but are now.
The disadvantage is that Data ONTAP sends requests for credentials to name servers more frequently, increasing the load on name servers.

Configuring the time-to-live for cached NFS user credentials You can configure the length of time that Data ONTAP stores credentials for NFS users in its internal cache (time-to-live, or TTL) by modifying the NFS server of the Storage Virtual Machine (SVM). This enables you to alleviate certain issues related to high load on name servers or changes in credentials affecting NFS user access.

About this task These parameters are available at the advanced privilege level.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform the desired action:

If you want to modify the TTL for cached...    Use the command...

Positive credentials    vserver nfs modify -vserver vserver_name -cached-cred-positive-ttl time_to_live
The TTL is measured in milliseconds. The default is 24 hours (86,400,000 milliseconds). The allowed range for this value is 1 minute (60,000 milliseconds) through 7 days (604,800,000 milliseconds).

Negative credentials    vserver nfs modify -vserver vserver_name -cached-cred-negative-ttl time_to_live
The TTL is measured in milliseconds. The default is 2 hours (7,200,000 milliseconds). The allowed range for this value is 1 minute (60,000 milliseconds) through 7 days (604,800,000 milliseconds).

3. Return to the admin privilege level: set -privilege admin
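For example, to halve the default positive TTL of 24 hours to 12 hours, the value in milliseconds is 12 x 60 x 60 x 1000 = 43,200,000. The following sketch applies that value on an SVM named vs1 (an assumed name):

vs1::> set -privilege advanced
vs1::*> vserver nfs modify -vserver vs1 -cached-cred-positive-ttl 43200000
vs1::*> set -privilege admin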

Managing export policy caches Data ONTAP uses several export policy caches to store information related to export policies for faster access. There are certain tasks you can perform to manage export policy caches for troubleshooting purposes.

How Data ONTAP uses export policy caches To improve system performance, Data ONTAP uses local caches to store information such as host names and netgroups. This enables Data ONTAP to process export policy rules more quickly than retrieving the information from external sources. Understanding what the caches are and what they do can help you troubleshoot client access issues.

You configure export policies to control client access to NFS exports. Each export policy contains rules, and each rule contains parameters to match the rule to clients requesting access. Some of these parameters require Data ONTAP to contact an external source, such as DNS or NIS servers, to resolve objects such as domain names, host names, or netgroups. These communications with external sources take a small amount of time. To increase performance, Data ONTAP reduces the amount of time it takes to resolve export policy rule objects by storing information locally on each node in several caches.

Cache name    Type of information stored
Access        Mappings of clients to corresponding export policies
Name          Mappings of UNIX user names to corresponding UNIX user IDs
ID            Mappings of UNIX user IDs to corresponding UNIX user IDs and extended UNIX group IDs
Host          Mappings of host names to corresponding IP addresses
Netgroup      Mappings of netgroups to corresponding IP addresses of members
Showmount     List of exported directories from SVM namespace

If you change information on the external name servers in your environment after Data ONTAP retrieved and stored it locally, the caches might now contain outdated information. Although Data

ONTAP refreshes caches automatically after certain time periods, different caches have different expiration and refresh times and algorithms. Another possible reason for caches to contain outdated information is when Data ONTAP attempts to refresh cached information but encounters a failure when attempting to communicate with name servers. If this happens, Data ONTAP continues to use the information currently stored in the local caches to prevent client disruption. As a result, client access requests that are supposed to succeed might fail, and client access requests that are supposed to fail might succeed. You can view and manually flush some of the export policy caches when troubleshooting such client access issues.

Flushing export policy caches Flushing export policy caches manually (vserver export-policy cache flush) removes potentially outdated information and forces Data ONTAP to retrieve current information from the appropriate external resources. This can help resolve a variety of issues related to client access to NFS exports.

About this task Export policy cache information might be outdated due to the following reasons:
• A recent change to export policy rules
• A recent change to host name records in name servers
• A recent change to netgroup entries in name servers
• Recovering from a network outage that prevented netgroups from being fully loaded

Step 1. Perform one of the following actions:

If you want to flush...                            Enter the command...

All export policy caches (except for showmount)    vserver export-policy cache flush -vserver vserver_name

The export policy rules access cache               vserver export-policy cache flush -vserver vserver_name -cache access
You can include the optional -node parameter to specify the node on which you want to flush the access cache.

The host name cache                                vserver export-policy cache flush -vserver vserver_name -cache host

The netgroup cache                                 vserver export-policy cache flush -vserver vserver_name -cache netgroup
Processing of netgroups is resource intensive. You should only flush the netgroup cache if you are trying to resolve a client access issue that is caused by a stale netgroup.

The showmount cache                                vserver export-policy cache flush -vserver vserver_name -cache showmount
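For example, after correcting a stale netgroup entry on the name server, the netgroup cache could be flushed on an SVM named vs1 (an assumed name) so that the change takes effect immediately:

vs1::> vserver export-policy cache flush -vserver vs1 -cache netgroup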

Displaying the export policy netgroup queue and cache

Data ONTAP uses the netgroup queue when importing and resolving netgroups and it uses the netgroup cache to store the resulting information. When troubleshooting export policy netgroup related issues, you can use the vserver export-policy netgroup queue show and vserver export-policy netgroup cache show commands to display the status of the netgroup queue and the contents of the netgroup cache.

Step

1. Perform one of the following actions:

To display the export policy netgroup queue, enter:
vserver export-policy netgroup queue show

To display the export policy netgroup cache, enter:
vserver export-policy netgroup cache show -vserver vserver_name

See the man page for each command for more information.

Checking whether a client IP address is a member of a netgroup

When troubleshooting NFS client access issues related to netgroups, you can use the vserver export-policy netgroup check-membership command to help determine whether a client IP is a member of a certain netgroup.

About this task

Checking netgroup membership enables you to determine whether Data ONTAP is aware that a client is or is not a member of a netgroup. It also lets you know whether the Data ONTAP netgroup cache is in a transient state while refreshing netgroup information. This information can help you understand why a client might be unexpectedly granted or denied access.

Step

1. Check the netgroup membership of a client IP address:
vserver export-policy netgroup check-membership -vserver vserver_name -netgroup netgroup_name -client-ip client_ip

The command can return the following results:
• The client is a member of the netgroup.
  This was confirmed through a reverse lookup scan or a netgroup-by-host search.
• The client is a member of the netgroup.
  It was found in the Data ONTAP netgroup cache.
• The client is not a member of the netgroup.
• The membership of the client cannot yet be determined because Data ONTAP is currently refreshing the netgroup cache.
  Until this is done, membership cannot be explicitly ruled in or out. Use the vserver export-policy netgroup queue show command to monitor the loading of the netgroup and retry the check after it is finished.

Example

The following example checks whether a client with the IP address 172.17.16.72 is a member of the netgroup mercury on the SVM vs1:

cluster1::> vserver export-policy netgroup check-membership -vserver vs1 -netgroup mercury -client-ip 172.17.16.72

Related tasks Performing stricter access checking for netgroups by verifying domains on page 89

Related references Commands for managing local netgroups on page 98

How the access cache works

Data ONTAP uses an access cache to store the results of export policy rule evaluation for client access operations to a volume or qtree. This results in performance improvements because the information can be retrieved much faster from the access cache than going through the export policy rule evaluation process every time a client sends an I/O request.

Whenever an NFS client sends an I/O request to access data on a volume or qtree, Data ONTAP must evaluate each I/O request to determine whether to grant or deny the I/O request. This evaluation involves checking every export policy rule of the export policy associated with the volume or qtree. If the path to the volume or qtree involves crossing one or more junction points, this might require performing this check for multiple export policies along the path. Note that this evaluation occurs for every I/O request sent from an NFS client, such as read, write, list, copy, and other operations; not just for initial mount requests.

After Data ONTAP has identified the applicable export policy rules and decided whether to allow or deny the request, Data ONTAP then creates an entry in the access cache to store this information.

When an NFS client sends an I/O request, Data ONTAP notes the IP address of the client, the ID of the SVM, and the export policy associated with the target volume or qtree, and first checks the access cache for a matching entry. If a matching entry exists in the access cache, Data ONTAP uses the stored information to allow or deny the I/O request. If a matching entry does not exist, Data ONTAP then goes through the normal process of evaluating all applicable policy rules as explained above.

Retrieving the information from the access cache is much faster than going through the entire export policy rule evaluation process for every I/O request. Therefore, using the access cache greatly improves performance by reducing the overhead of client access checks.

How access cache parameters work

There are several parameters that control the refresh periods for entries in the access cache. Understanding how these parameters work enables you to modify them to tune the access cache and balance performance with how recent the stored information is.

The access cache stores entries consisting of one or more export rules that apply to clients attempting to access volumes or qtrees. These entries are stored for a certain amount of time before they are refreshed. The refresh time is determined by access cache parameters and depends on the type of access cache entry.

Access cache entry type: Positive entries
Description: Access cache entries that have not resulted in access denial to clients.
Refresh period in seconds: Minimum: 300; Maximum: 86,400; Default: 3,600

Access cache entry type: Negative entries
Description: Access cache entries that have resulted in access denial to clients.
Refresh period in seconds: Minimum: 60; Maximum: 86,400; Default: 3,600

Example

An NFS client attempts to access a volume on a cluster. Data ONTAP matches the client to an export policy rule and determines that the client gets access based on the export policy rule configuration. Data ONTAP stores the export policy rule in the access cache as a positive entry. By default, Data ONTAP keeps the positive entry in the access cache for one hour (3,600 seconds), and then automatically refreshes the entry to keep the information current.

To prevent the access cache from filling up unnecessarily, an additional parameter clears existing access cache entries that have not been used to decide client access for a certain period of time. This -harvest-timeout parameter has an allowed range of 60 through 2,592,000 seconds and a default setting of 86,400 seconds.

Optimizing access cache performance

You can configure several parameters to optimize the access cache and find the right balance between performance and how current the information stored in the access cache is.

About this task

When you configure the access cache refresh periods, keep the following in mind:
• Higher values mean entries stay longer in the access cache.
  The advantage is better performance because Data ONTAP spends less resources on refreshing access cache entries. The disadvantage is that if export policy rules change and access cache entries become stale as a result, it takes longer to update them. As a result, clients that should get access might get denied, and clients that should get denied might get access.
• Lower values mean Data ONTAP refreshes access cache entries more often.
  The advantage is that entries are more current and clients are more likely to be correctly granted or denied access. The disadvantage is a decrease in performance because Data ONTAP spends more resources refreshing access cache entries.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform the desired action:

To modify the refresh period for positive entries, enter:
vserver export-policy access-cache config modify-all-vservers -refresh-period-positive timeout_value

To modify the refresh period for negative entries, enter:
vserver export-policy access-cache config modify-all-vservers -refresh-period-negative timeout_value

To modify the timeout period for old entries, enter:
vserver export-policy access-cache config modify-all-vservers -harvest-timeout timeout_value

3. Verify the new parameter settings: vserver export-policy access-cache config show-all-vservers

4. Return to the admin privilege level: set -privilege admin
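Example

The following is a hypothetical example that lowers the refresh period for positive entries to 1,800 seconds for all SVMs and then verifies the change; the timeout value is only an illustration, and the commands are entered at the advanced privilege level:

cluster1::*> vserver export-policy access-cache config modify-all-vservers -refresh-period-positive 1800
cluster1::*> vserver export-policy access-cache config show-all-vservers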

Managing file locks

You can display information about the current locks for a Storage Virtual Machine (SVM) as a first step to determining why a client cannot access a volume or file. You can use this information if you need to break file locks.

For information about how file locks affect Infinite Volumes, see the Clustered Data ONTAP Infinite Volumes Management Guide.

Related concepts Managing NFSv4 file delegations on page 117

Related tasks Configuring NFSv4 file and record locking on page 119

About file locking between protocols

File locking is a method used by client applications to prevent a user from accessing a file previously opened by another user. How Data ONTAP locks files depends on the protocol of the client.

If the client is an NFS client, locks are advisory; if the client is an SMB client, locks are mandatory.

Because of differences between the NFS and SMB file locks, an NFS client might fail to access a file previously opened by an SMB application. The following occurs when an NFS client attempts to access a file locked by an SMB application:

• In mixed or NTFS volumes, file manipulation operations such as rm, rmdir, and mv can cause the NFS application to fail.
• NFS read and write operations are denied by SMB deny-read and deny-write open modes, respectively.
• NFS write operations fail when the written range of the file is locked with an exclusive SMB bytelock.

In UNIX security-style volumes, NFS unlink and rename operations ignore SMB lock state and allow access to the file. All other NFS operations on UNIX security-style volumes honor SMB lock state.

How Data ONTAP treats read-only bits

The read-only bit is a binary digit, which holds a value of 0 or 1, that is set on a file-by-file basis to reflect whether a file is writable (disabled) or read-only (enabled).

SMB clients that use MS-DOS and Windows can set a per-file read-only bit. NFS clients do not set a per-file read-only bit because NFS clients do not have any protocol operations that use a per-file read-only bit.

Data ONTAP can set a read-only bit on a file when an SMB client that uses MS-DOS or Windows creates that file. Data ONTAP can also set a read-only bit when a file is shared between NFS clients and SMB clients. Some software, when used by NFS clients and SMB clients, requires the read-only bit to be enabled.

For Data ONTAP to keep the appropriate read and write permissions on a file shared between NFS clients and SMB clients, it treats the read-only bit according to the following rules:
• NFS treats any file with the read-only bit enabled as if it has no write permission bits enabled.

• If an NFS client disables all write permission bits and at least one of those bits had previously been enabled, Data ONTAP enables the read-only bit for that file. • If an NFS client enables any write permission bit, Data ONTAP disables the read-only bit for that file. • If the read-only bit for a file is enabled and an NFS client attempts to discover permissions for the file, the permission bits for the file are not sent to the NFS client; instead, Data ONTAP sends the permission bits to the NFS client with the write permission bits masked. • If the read-only bit for a file is enabled and an SMB client disables the read-only bit, Data ONTAP enables the owner’s write permission bit for the file. • Files with the read-only bit enabled are writable only by root.

Note: Changes to file permissions take effect immediately on SMB clients, but might not take effect immediately on NFS clients if the NFS client enables attribute caching.

Displaying information about locks

You can display information about the current file locks, including what types of locks are held and what the lock state is, details about byte-range locks, sharelock modes, delegation locks, and opportunistic locks, and whether locks are opened with durable or persistent handles.

About this task

The client IP address cannot be displayed for locks established through NFSv4 or NFSv4.1.

By default, the command displays information about all locks. You can use command parameters to display information about locks for a specific Storage Virtual Machine (SVM) or to filter the command's output by other criteria.

If you do not specify any parameter, the command displays the following information:
• SVM name
• Volume name of the FlexVol volume or the name of the namespace constituent for the Infinite Volume
• Path of the locked object
• Logical interface name
• Protocol by which the lock was established
• Type of lock
• Client

The vserver locks show command displays information about four types of locks:

• Byte-range locks, which lock only a portion of a file.
• Share locks, which lock open files.
• Opportunistic locks, which control client-side caching over SMB.
• Delegations, which control client-side caching over NFSv4.x.

By specifying optional parameters, you can determine important information about each of these lock types. See the man page for the command for more information.

Step

1. Display information about locks by using the vserver locks show command.

Examples

The following example displays summary information for an NFSv4 lock on a file with the path /vol1/file1. The sharelock access mode is write-deny_none, and the lock was granted with write delegation:

cluster1::> vserver locks show

Vserver: vs0
Volume  Object Path            LIF       Protocol  Lock Type    Client
------- ---------------------- --------- --------- ------------ -------
vol1    /vol1/file1            lif1      nfsv4     share-level  -
        Sharelock Mode: write-deny_none
                                                   delegation   -
        Delegation Type: write

The following example displays detailed oplock and sharelock information about the SMB lock on a file with the path /data2/data2_2/intro.pptx. A durable handle is granted on the file with a share lock access mode of write-deny_none to a client with an IP address of 10.3.1.3. A lease oplock is granted with a batch oplock level:

cluster1::> vserver locks show -instance -path /data2/data2_2/intro.pptx

Vserver: vs1
Volume: data2_2
Logical Interface: lif2
Object Path: /data2/data2_2/intro.pptx
Lock UUID: 553cf484-7030-4998-88d3-1125adbba0b7
Lock Protocol: cifs
Lock Type: share-level
Node Holding Lock State: node3
Lock State: granted
Bytelock Starting Offset: -
Number of Bytes Locked: -
Bytelock is Mandatory: -
Bytelock is Exclusive: -
Bytelock is Superlock: -
Bytelock is Soft: -
Oplock Level: -
Shared Lock Access Mode: write-deny_none
Shared Lock is Soft: false
Delegation Type: -
Client Address: 10.3.1.3
SMB Open Type: durable
SMB Connect State: connected
SMB Expiration Time (Secs): -
SMB Open Group ID: 78a90c59d45ae211998100059a3c7a00a007f70da0f8ffffcd445b0300000000

Vserver: vs1
Volume: data2_2
Logical Interface: lif2
Object Path: /data2/data2_2/test.pptx
Lock UUID: 302fd7b1-f7bf-47ae-9981-f0dcb6a224f9
Lock Protocol: cifs
Lock Type: op-lock
Node Holding Lock State: node3
Lock State: granted
Bytelock Starting Offset: -
Number of Bytes Locked: -
Bytelock is Mandatory: -
Bytelock is Exclusive: -
Bytelock is Superlock: -
Bytelock is Soft: -
Oplock Level: batch
Shared Lock Access Mode: -
Shared Lock is Soft: -
Delegation Type: -
Client Address: 10.3.1.3
SMB Open Type: -
SMB Connect State: connected
SMB Expiration Time (Secs): -
SMB Open Group ID: 78a90c59d45ae211998100059a3c7a00a007f70da0f8ffffcd445b0300000000

Breaking locks

When file locks are preventing client access to files, you can display information about currently held locks, and then break specific locks. Examples of scenarios in which you might need to break locks include debugging applications.

About this task

The vserver locks break command is available only at the advanced privilege level and higher. The man page for the command contains detailed information.

Steps

1. To find the information you need to break a lock, use the vserver locks show command. The man page for the command contains detailed information.

2. Set the privilege level to advanced: set -privilege advanced

3. Perform one of the following actions:

To break a lock by specifying the SVM name, volume name, LIF name, and file path, enter:
vserver locks break -vserver vserver_name -volume volume_name -path path -lif lif

To break a lock by specifying the lock ID, enter:
vserver locks break -lockid UUID

-vserver vserver_name specifies the SVM name.
-volume volume_name specifies the volume name of the FlexVol volume or the name of the namespace constituent for the Infinite Volume.
-path path specifies the path.
-lif lif specifies the logical interface.
-lockid specifies the universally unique identifier for the lock.

4. Return to the admin privilege level: set -privilege admin
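Example

The following is a hypothetical example that breaks a lock on the file /vol1/file1 on the SVM vs1 through the LIF lif1; the SVM, volume, path, and LIF names are placeholders, and the command is entered at the advanced privilege level:

cluster1::*> vserver locks break -vserver vs1 -volume vol1 -path /vol1/file1 -lif lif1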

Modifying the NFSv4.1 server implementation ID

The NFSv4.1 protocol includes a server implementation ID that documents the server domain, name, and date. You can modify the server implementation ID default values. Changing the default values can be useful, for example, when gathering usage statistics or troubleshooting interoperability issues. For more information, see RFC 5661.

About this task The default values for the three options are as follows:

Option                              Option name                    Default value
NFSv4.1 Implementation ID Domain    -v4.1-implementation-domain    netapp.com
NFSv4.1 Implementation ID Name      -v4.1-implementation-name      Cluster version name
NFSv4.1 Implementation ID Date      -v4.1-implementation-date      Cluster version date

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform one of the following actions:

To modify the NFSv4.1 implementation ID domain, enter:
vserver nfs modify -v4.1-implementation-domain domain

To modify the NFSv4.1 implementation ID name, enter:
vserver nfs modify -v4.1-implementation-name name

To modify the NFSv4.1 implementation ID date, enter:
vserver nfs modify -v4.1-implementation-date date

3. Return to the admin privilege level: set -privilege admin
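Example

The following is a hypothetical example that sets the implementation ID domain to example.com; the SVM name vs1 and the domain value are placeholders, and the -vserver parameter is included here on the assumption that you modify the setting for a specific SVM, as with other vserver nfs modify commands in this guide:

cluster1::*> vserver nfs modify -vserver vs1 -v4.1-implementation-domain example.com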

Managing NFSv4 ACLs

You can enable, disable, set, modify, and view NFSv4 access control lists (ACLs).

Benefits of enabling NFSv4 ACLs

There are many benefits to enabling NFSv4 ACLs. The benefits of enabling NFSv4 ACLs include the following:
• Finer-grained control of user access for files and directories
• Better NFS security
• Improved interoperability with CIFS
• Removal of the NFS limitation of 16 groups per user

How NFSv4 ACLs work

A client using NFSv4 ACLs can set and view ACLs on files and directories on the system. When a new file or subdirectory is created in a directory that has an ACL, the new file or subdirectory inherits all ACL Entries (ACEs) in the ACL that have been tagged with the appropriate inheritance flags.

For access checking, CIFS users are mapped to UNIX users. The mapped UNIX user and that user's group membership are checked against the ACL.

If a file or directory has an ACL, that ACL is used to control access no matter what protocol (NFSv3, NFSv4, or CIFS) is used to access the file or directory and is used even if NFSv4 is no longer enabled on the system.

Files and directories inherit ACEs from NFSv4 ACLs on parent directories (possibly with appropriate modifications) as long as the ACEs have been tagged with the appropriate inheritance flags.

The maximum number of ACEs for each ACL is 1,024, as defined by the -v4-acl-max-aces parameter. The default for this parameter is 400.

When the -v4-acl-preserve parameter is enabled and an object has both UNIX mode bits and ACLs, the ACL requires three mandatory ACEs to grant the proper permissions to OWNER, GROUP, and EVERYONE. These mandatory ACEs are applied automatically by Data ONTAP when a user performs a chmod operation to set mode bits. If you set an ACL that is missing any of the mandatory ACEs, Data ONTAP limits the number of ACEs for that ACL to reserve space for any missing mandatory ACEs to ensure that they can be added later should you decide to set mode bits as well.

When a file or directory is created as the result of an NFSv4 request, the ACL on the resulting file or directory depends on whether the file creation request includes an ACL or only standard UNIX file access permissions, and whether the parent directory has an ACL:
• If the request includes an ACL, that ACL is used.
• If the request includes only standard UNIX file access permissions but the parent directory has an ACL, the ACEs in the parent directory's ACL are inherited by the new file or directory as long as the ACEs have been tagged with the appropriate inheritance flags.
  Note: A parent ACL is inherited even if -v4.0-acl is set to off.
• If the request includes only standard UNIX file access permissions and the parent directory does not have an ACL, the client file mode is used to set standard UNIX file access permissions.
• If the request includes only standard UNIX file access permissions and the parent directory has a non-inheritable ACL, the new object is created only with mode bits.

Enabling or disabling modification of NFSv4 ACLs

When Data ONTAP receives a chmod command for a file or directory with an ACL, by default the ACL is retained and modified to reflect the mode bit change. You can disable the -v4-acl-preserve parameter to change the behavior if you want the ACL to be dropped instead.

About this task

When using unified security style, this parameter also specifies whether NTFS file permissions are preserved or dropped when a client sends a chmod, chgroup, or chown command for a file or directory.

The default for this parameter is enabled.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform one of the following actions:

To enable retention and modification of existing NFSv4 ACLs (default), enter:
vserver nfs modify -vserver vserver_name -v4-acl-preserve enabled

To disable retention and drop NFSv4 ACLs when changing mode bits, enter:
vserver nfs modify -vserver vserver_name -v4-acl-preserve disabled

3. Return to the admin privilege level: set -privilege admin

How Data ONTAP uses NFSv4 ACLs to determine whether it can delete a file

To determine whether it can delete a file, Data ONTAP uses a combination of the file's DELETE bit, and the containing directory's DELETE_CHILD bit. For more information, see the NFSv4.1 RFC 5661.

Enabling or disabling NFSv4 ACLs

To enable or disable NFSv4 ACLs, you can modify the -v4.0-acl and -v4.1-acl options. These options are disabled by default.

About this task

The -v4.0-acl or -v4.1-acl option controls the setting and viewing of NFSv4 ACLs; it does not control enforcement of these ACLs for access checking.

NFSv4.0 ACLs are not supported for Storage Virtual Machines (SVMs) with Infinite Volume.

Step

1. Perform one of the following actions:

To enable NFSv4.0 ACLs, enter:
vserver nfs modify -vserver vserver_name -v4.0-acl enabled

To disable NFSv4.0 ACLs, enter:
vserver nfs modify -vserver vserver_name -v4.0-acl disabled

To enable NFSv4.1 ACLs, enter:
vserver nfs modify -vserver vserver_name -v4.1-acl enabled

To disable NFSv4.1 ACLs, enter:
vserver nfs modify -vserver vserver_name -v4.1-acl disabled
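Example

The following is a hypothetical example that enables NFSv4.0 ACLs on an SVM named vs1 (the SVM name is a placeholder):

cluster1::> vserver nfs modify -vserver vs1 -v4.0-acl enabled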

Modifying the maximum ACE limit for NFSv4 ACLs

You can modify the maximum number of allowed ACEs for each NFSv4 ACL by modifying the parameter -v4-acl-max-aces. By default, the limit is set to 400 ACEs for each ACL. Increasing this limit can help ensure successful migration of data with ACLs containing over 400 ACEs to storage systems running Data ONTAP.

About this task Increasing this limit might impact performance for clients accessing files with NFSv4 ACLs.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Modify the maximum ACE limit for NFSv4 ACLs by entering the following command:
vserver nfs modify -v4-acl-max-aces max_ace_limit
The valid range of max_ace_limit is 192 to 1024.

3. Return to the admin privilege level: set -privilege admin
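Example

The following is a hypothetical example that raises the limit to 500 ACEs per ACL, a value chosen only as an illustration within the documented range of 192 to 1024; the SVM name vs1 is a placeholder, and the command is entered at the advanced privilege level:

cluster1::*> vserver nfs modify -vserver vs1 -v4-acl-max-aces 500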

Managing NFSv4 file delegations

You can enable and disable NFSv4 file delegations and retrieve NFSv4 file delegation statistics.

Related concepts Managing file locks on page 110

How NFSv4 file delegations work

Data ONTAP supports read and write file delegations in accordance with RFC 3530. As specified in RFC 3530, when an NFSv4 client opens a file, Data ONTAP can delegate further handling of opening and writing requests to the opening client.

There are two types of file delegations:
• Read
  A read file delegation allows a client to locally handle requests to open a file for reading that do not deny read access to others. Byte range read locks are also handled locally.
• Write
  A write file delegation allows the client to handle all open requests. All byte range lock requests are handled locally.

Delegation works on files within any style of qtree, whether or not opportunistic locks (oplocks) have been enabled.

Delegation of file operations to a client can be recalled when the lease expires, or when the storage system receives the following requests from another client:
• Write to file, open file for writing, or open file for “deny read”
• Change file attributes
• Rename file
• Delete file

When a lease expires, the delegation state is revoked and all of the associated states are marked “soft”. This means that if the storage system receives a conflicting lock request for this same file from another client before the lease has been renewed by the client previously holding the delegation, the conflicting lock is granted. If there is no conflicting lock and the client holding the delegation renews the lease, the soft locks are changed to hard locks and are not removed in the case of a conflicting access. However, the delegation is not granted again upon a lease renewal.

When the server reboots, the delegation state is lost. Clients can reclaim the delegation state upon reconnection instead of going through the entire delegation request process again. When a client holding a read delegation reboots, all delegation state information is flushed from the storage system cache upon reconnection. The client must issue a delegation request to establish a new delegation.

Note: NFSv4 file delegations are not supported on Storage Virtual Machines (SVMs) with Infinite Volume.

Enabling or disabling NFSv4 read file delegations

To enable or disable NFSv4 read file delegations, you can modify the -v4.0-read-delegation or -v4.1-read-delegation option. By enabling read file delegations, you can eliminate much of the message overhead associated with the opening and closing of files.

About this task

By default, read file delegations are disabled.

The disadvantage of enabling read file delegations is that the server and its clients must recover delegations after the server reboots or restarts, a client reboots or restarts, or a network partition occurs.

NFSv4 file delegations are not supported on Storage Virtual Machines (SVMs) with Infinite Volume.

Step

1. Perform one of the following actions:

To enable NFSv4 read file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.0-read-delegation enabled

To enable NFSv4.1 read file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.1-read-delegation enabled

To disable NFSv4 read file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.0-read-delegation disabled

To disable NFSv4.1 read file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.1-read-delegation disabled

Result

The file delegation options take effect as soon as they are changed. There is no need to reboot or restart NFS.
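Example

The following is a hypothetical example that enables NFSv4.0 read file delegations on an SVM named vs1 (the SVM name is a placeholder):

cluster1::> vserver nfs modify -vserver vs1 -v4.0-read-delegation enabled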

Enabling or disabling NFSv4 write file delegations

To enable or disable write file delegations, you can modify the -v4.0-write-delegation or -v4.1-write-delegation option. By enabling write file delegations, you can eliminate much of the message overhead associated with file and record locking in addition to opening and closing of files.

About this task

By default, write file delegations are disabled.

The disadvantage of enabling write file delegations is that the server and its clients must perform additional tasks to recover delegations after the server reboots or restarts, a client reboots or restarts, or a network partition occurs.

NFSv4 file delegations are not supported on Storage Virtual Machines (SVMs) with Infinite Volume.

Step

1. Perform one of the following actions:

To enable NFSv4 write file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.0-write-delegation enabled

To enable NFSv4.1 write file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.1-write-delegation enabled

To disable NFSv4 write file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.0-write-delegation disabled

To disable NFSv4.1 write file delegations, enter:
vserver nfs modify -vserver vserver_name -v4.1-write-delegation disabled

Result

The file delegation options take effect as soon as they are changed. There is no need to reboot or restart NFS.

Configuring NFSv4 file and record locking

You can configure NFSv4 file and record locking by specifying the locking lease period and grace period.

Related concepts Managing file locks on page 110

About NFSv4 file and record locking

For NFSv4 clients, Data ONTAP supports the NFSv4 file-locking mechanism, maintaining the state of all file locks under a lease-based model. In accordance with RFC 3530, Data ONTAP “defines a single lease period for all state held by an NFS client. If the client does not renew its lease within the defined period, all states associated with the client's lease may be released by the server.” The client can renew its lease explicitly or implicitly by performing an operation, such as reading a file.

Furthermore, Data ONTAP defines a grace period, which is a period of special processing in which clients attempt to reclaim their locking state during a server recovery.

Term          Definition (see RFC 3530)
Lease         The time period in which Data ONTAP irrevocably grants a lock to a client.
Grace period  The time period in which clients attempt to reclaim their locking state from Data ONTAP during server recovery.
Lock          Refers to both record (byte-range) locks as well as file (share) locks unless specifically stated otherwise.

Specifying the NFSv4 locking lease period

To specify the NFSv4 locking lease period (that is, the time period in which Data ONTAP irrevocably grants a lock to a client), you can modify the -v4-lease-seconds option. Shorter lease periods speed up server recovery, while longer lease periods are beneficial for servers handling a very large number of clients.

About this task

By default, this option is set to 30. The minimum value for this option is 10. The maximum value for this option is the locking grace period, which you can set with the -v4-grace-seconds option.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Enter the following command: vserver nfs modify -vserver vserver_name -v4-lease-seconds number_of_seconds

3. Return to the admin privilege level: set -privilege admin

Specifying the NFSv4 locking grace period

To specify the NFSv4 locking grace period (that is, the time period in which clients attempt to reclaim their locking state from Data ONTAP during server recovery), you can modify the -v4-grace-seconds option.

About this task By default, this option is set to 45.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Enter the following command: vserver nfs modify -vserver vserver_name -v4-grace-seconds number_of_seconds

3. Return to the admin privilege level: set -privilege admin
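Example

The following is a hypothetical example that lengthens the locking grace period to 90 seconds on an SVM named vs1; the SVM name and the value are placeholders, and the command is entered at the advanced privilege level:

cluster1::*> vserver nfs modify -vserver vs1 -v4-grace-seconds 90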

How NFSv4 referrals work

When you enable NFSv4 referrals, Data ONTAP provides “intra-SVM” referrals to NFSv4 clients. Intra-SVM referral is when a cluster node receiving the NFSv4 request refers the NFSv4 client to another LIF on the Storage Virtual Machine (SVM).

The NFSv4 client should access the path that received the referral at the target LIF from that point onward. The original cluster node gives such a referral when it determines that there exists a LIF in the SVM that is resident on the cluster node on which the data volume resides, thereby allowing the clients faster access to the data and avoiding extra cluster communication.

Support for NFSv4 referrals is not uniformly available in all NFSv4 clients. In an environment where not all clients support this feature, you should not enable referrals. If the feature is enabled and a client that does not support it receives a referral from the server, the client cannot access the volume and experiences failures. See RFC 3530 for details about referrals.

Note: Referrals are not supported on SVMs with Infinite Volume.

Enabling or disabling NFSv4 referrals

You can enable NFSv4 referrals on Storage Virtual Machines (SVMs) with FlexVol volumes by enabling the options -v4-fsid-change and -v4.0-referrals or -v4.1-referrals. Enabling NFSv4 referrals can result in faster data access for NFSv4 clients that support this feature.

Before you begin If you want to enable NFS referrals, you must first disable parallel NFS. You cannot enable both at the same time.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform one of the following actions:

To enable NFSv4 referrals, enter:
vserver nfs modify -vserver vserver_name -v4-fsid-change enabled
vserver nfs modify -vserver vserver_name -v4.0-referrals enabled

To disable NFSv4 referrals, enter:
vserver nfs modify -vserver vserver_name -v4.0-referrals disabled

To enable NFSv4.1 referrals, enter:
vserver nfs modify -vserver vserver_name -v4-fsid-change enabled
vserver nfs modify -vserver vserver_name -v4.1-referrals enabled

To disable NFSv4.1 referrals, enter:
vserver nfs modify -vserver vserver_name -v4.1-referrals disabled

3. Return to the admin privilege level: set -privilege admin
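Example

The following is a hypothetical example that enables NFSv4.0 referrals on an SVM named vs1, assuming that pNFS is already disabled; the SVM name is a placeholder, and the commands are entered at the advanced privilege level:

cluster1::*> vserver nfs modify -vserver vs1 -v4-fsid-change enabled
cluster1::*> vserver nfs modify -vserver vs1 -v4.0-referrals enabled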

Related tasks Enabling or disabling pNFS on page 85

Displaying NFS statistics

You can display NFS statistics for Storage Virtual Machines (SVMs) on the storage system to monitor performance and diagnose issues.

Steps

1. Use the statistics catalog object show command to identify the NFS objects from which you can view data.

Example
statistics catalog object show -object nfs*

For more information about the statistics commands, see the Clustered Data ONTAP System Administration Guide for Cluster Administrators.

2. Use the statistics start and optional statistics stop commands to collect a data sample from one or more objects.

3. Use the statistics show command to view the sample data.

Example: Monitoring NFSv3 performance

The following example shows performance data for the NFSv3 protocol. The following command starts data collection for a new sample:

vs1::> statistics start -object nfsv3 -sample-id nfs_sample

The following command shows data from the sample by specifying counters that show the number of successful read and write requests versus the total number of read and write requests:

vs1::> statistics show -sample-id nfs_sample -counter read_total|write_total|read_success|write_success

Object: nfsv3
Instance: vs1
Start-time: 2/11/2013 15:38:29
End-time: 2/11/2013 15:38:41
Cluster: cluster1

Counter              Value
-------------------  ---------
read_success         40042
read_total           40042
write_success        1492052
write_total          1492052

Support for VMware vStorage over NFS

Data ONTAP supports certain VMware vStorage APIs for Array Integration (VAAI) features in an NFS environment.

Supported features

The following features are supported:
• Copy offload
  Enables an ESXi host to copy virtual machines or virtual machine disks (VMDKs) directly between the source and destination data store location without involving the host. This conserves ESXi host CPU cycles and network bandwidth. Copy offload preserves space efficiency if the source volume is sparse.
• Space reservation
  Guarantees storage space for a VMDK file by reserving space for it.

Limitations

VMware vStorage over NFS has the following limitations:
• vStorage is not supported on Storage Virtual Machines (SVMs) with Infinite Volume.
• Copy offload operations can fail in the following scenarios:
  ◦ While running wafliron on the source or destination volume because it temporarily takes the volume offline
  ◦ While moving either the source or destination volume
  ◦ While moving either the source or destination LIF
  ◦ While performing takeover or giveback operations
• Server-side copy can fail due to file handle format differences in the following scenario:
  You attempt to copy data from SVMs that have currently or had previously exported qtrees to SVMs that have never had exported qtrees.
  To work around this limitation, you can export at least one qtree on the destination SVM.

Related information NetApp KB Article 3013572: What VAAI configurations are supported on Data ONTAP storage controllers?

Enabling or disabling VMware vStorage over NFS

You can enable or disable support for VMware vStorage over NFS on Storage Virtual Machines (SVMs) with FlexVol volumes by using the vserver nfs modify command.

About this task By default, support for VMware vStorage over NFS is disabled.

Steps

1. Display the current vStorage support status for SVMs by entering the following command:
vserver nfs show -vserver vserver_name -instance

2. Perform one of the following actions:

To enable VMware vStorage support, enter:
vserver nfs modify -vserver vserver_name -vstorage enabled

To disable VMware vStorage support, enter:
vserver nfs modify -vserver vserver_name -vstorage disabled
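Example

The following is a hypothetical example that enables vStorage support on an SVM named vs1 (the SVM name is a placeholder):

cluster1::> vserver nfs modify -vserver vs1 -vstorage enabled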

After you finish

You must install the NFS Plug-in for VMware VAAI before you can use this functionality. For more information, see Installing the NetApp NFS Plug-in for VMware VAAI.

Related information NetApp Documentation: NetApp NFS Plug-in for VMware VAAI

Enabling or disabling rquota support

Data ONTAP supports the remote quota protocol version 1 (rquota v1). The rquota protocol enables NFS clients to obtain quota information for users and groups from a remote machine. You can enable rquota on Storage Virtual Machines (SVMs) with FlexVol volumes by using the vserver nfs modify command.

About this task By default, rquota is disabled.

Step

1. Perform one of the following actions:

To enable rquota support for SVMs, enter:
vserver nfs modify -vserver vserver_name -rquota enable

To disable rquota support for SVMs, enter:
vserver nfs modify -vserver vserver_name -rquota disable

For more information about quotas, see the Clustered Data ONTAP Logical Storage Management Guide.
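Example

The following is a hypothetical example that enables rquota support on an SVM named vs1 (the SVM name is a placeholder):

cluster1::> vserver nfs modify -vserver vs1 -rquota enable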

NFSv3 performance improvement by modifying the TCP maximum read and write size

You might be able to improve the performance of NFSv3 clients connecting to storage systems over a high-latency network by modifying the TCP maximum read and write size.

When clients access storage systems over a high-latency network, such as a wide area network (WAN) or metro area network (MAN) with a latency over 10 milliseconds, you might be able to improve the connection performance by modifying the TCP maximum read and write size. Clients accessing storage systems in a low-latency network, such as a local area network (LAN), can expect little to no benefit from modifying these parameters. If the throughput improvement does not outweigh the latency impact, you should not use these parameters.

To determine whether your storage environment would benefit from modifying these parameters, you should first conduct a comprehensive performance evaluation of a poorly performing NFS client. Review whether the low performance is due to excessive round-trip latency and small requests on the client. Under these conditions, the client and server cannot fully use the available bandwidth because they spend the majority of their duty cycles waiting for small requests and responses to be transmitted over the connection. By increasing the NFSv3 request size, the client and server can use the available bandwidth more effectively to move more data per unit time, therefore increasing the overall efficiency of the connection.

Keep in mind that the configuration between the storage system and the client might vary. If you configure the storage system to support 1 MB maximum read size but the client only supports 64 KB, then the mount read size is limited to 64 KB or less.

Before modifying these parameters, you must be aware that doing so results in additional memory consumption on the storage system for the period of time necessary to assemble and transmit a large response. The more high-latency connections to the storage system, the higher the additional memory consumption. Storage systems with high memory capacity might experience very little effect from this change. Storage systems with low memory capacity might experience noticeable performance degradation.

The successful use of these parameters relies on the ability to retrieve data from multiple nodes of a cluster. The inherent latency of the cluster network might increase the overall latency of the response. Overall latency tends to increase when using these parameters. As a result, latency-sensitive workloads might show negative impact.

Modifying the NFSv3 TCP maximum read and write size

You can modify the -v3-tcp-max-read-size and -v3-tcp-max-write-size options to change the NFSv3 TCP maximum read and write size. Modifying these options can help improve NFSv3 performance over TCP in some storage environments.

Before you begin All nodes in the cluster must be running Data ONTAP 8.1 or later.

About this task You can modify these options individually for each Storage Virtual Machine (SVM).

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform one of the following actions:

To modify the NFSv3 TCP maximum read size, enter:
vserver nfs modify -vserver vserver_name -v3-tcp-max-read-size integer_max_read_size

To modify the NFSv3 TCP maximum write size, enter:
vserver nfs modify -vserver vserver_name -v3-tcp-max-write-size integer_max_write_size

Option                   Range                    Default
-v3-tcp-max-read-size    8192 to 1048576 bytes    65536 bytes
-v3-tcp-max-write-size   8192 to 65536 bytes      65536 bytes

Note: The maximum read or write size you enter must be a multiple of 4 KB (4096 bytes). Requests that are not properly aligned negatively affect performance.

3. Return to the admin privilege level: set -privilege admin

4. Use the vserver nfs show command to verify the changes.

5. If any clients use static mounts, unmount and remount for the new parameter size to take effect.

Example

The following command sets the NFSv3 TCP maximum read size to 1048576 bytes on the SVM named vs1:

vs1::> vserver nfs modify -vserver vs1 -v3-tcp-max-read-size 1048576

Configuring the number of group IDs allowed for NFS users

By default, Data ONTAP supports up to 32 group IDs when handling NFS user credentials using Kerberos (RPCSEC_GSS) authentication. When using AUTH_SYS authentication, the default maximum number of group IDs is 16, as defined in RFC 5531. You can increase the maximum up to 1,024 if you have users who are members of more than the default number of groups.

About this task

If a user has more than the default number of group IDs in their credentials, the remaining group IDs are truncated and the user might receive errors when attempting to access files from the storage system. You should set the maximum number of groups, per SVM, to a number that represents the maximum groups in your environment.

The following table shows the two parameters of the vserver nfs modify command that determine the maximum number of group IDs in three sample configurations:

Parameters                    Settings    Resulting group IDs limit
-extended-groups-limit        32          RPCSEC_GSS: 32
-auth-sys-extended-groups     disabled    AUTH_SYS: 16
(These are the default settings.)

-extended-groups-limit        256         RPCSEC_GSS: 256
-auth-sys-extended-groups     disabled    AUTH_SYS: 16

-extended-groups-limit        512         RPCSEC_GSS: 512
-auth-sys-extended-groups     enabled     AUTH_SYS: 512

Note: Some older NFS clients might not be compatible with AUTH_SYS extended groups.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform the desired action:

To set the maximum number of allowed auxiliary groups only for RPCSEC_GSS and leave AUTH_SYS set to the default value of 16, enter:
vserver nfs modify -vserver vserver_name -extended-groups-limit {32-1024} -auth-sys-extended-groups disabled

To set the maximum number of allowed auxiliary groups for both RPCSEC_GSS and AUTH_SYS, enter:
vserver nfs modify -vserver vserver_name -extended-groups-limit {32-1024} -auth-sys-extended-groups enabled

3. Verify the -extended-groups-limit value and verify whether AUTH_SYS is using extended groups:
vserver nfs show -vserver vserver_name -fields auth-sys-extended-groups,extended-groups-limit

4. Return to the admin privilege level: set -privilege admin

Example

The following example enables extended groups for AUTH_SYS authentication and sets the maximum number of extended groups to 512 for both AUTH_SYS and RPCSEC_GSS authentication. These changes are made only for clients who access the SVM named vs1:

vs1::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them
only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

vs1::*> vserver nfs modify -vserver vs1 -auth-sys-extended-groups enabled -extended-groups-limit 512

vs1::*> vserver nfs show -vserver vs1 -fields auth-sys-extended-groups,extended-groups-limit
vserver auth-sys-extended-groups extended-groups-limit
------- ------------------------ ---------------------
vs1     enabled                  512

vs1::*> set -privilege admin

Controlling root user access to NTFS security-style data

You can configure Data ONTAP to allow NFS clients access to NTFS security-style data and NTFS clients to access NFS security-style data. When using NTFS security style on an NFS data store, you must decide how to treat access by the root user and configure the Storage Virtual Machine (SVM) accordingly.

About this task

When a root user accesses NTFS security-style data, you have two options:

• Map the root user to a Windows user like any other NFS user and manage access according to NTFS ACLs. • Ignore NTFS ACLs and provide full access to root.

Steps 1. Set the privilege level to advanced: set -privilege advanced

2. Perform the desired action:

To have the root user be mapped to a Windows user, enter:
vserver nfs modify -vserver vserver_name -ignore-nt-acl-for-root disabled

To have the root user bypass the NT ACL check, enter:
vserver nfs modify -vserver vserver_name -ignore-nt-acl-for-root enabled

By default, this parameter is disabled. If this parameter is enabled but there is no name mapping for the root user, Data ONTAP uses a default SMB administrator credential for auditing.

3. Return to the admin privilege level: set -privilege admin
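Example

The following is a hypothetical example that configures an SVM named vs1 so that the root user bypasses the NT ACL check; the SVM name is a placeholder, and the command is entered at the advanced privilege level:

cluster1::*> vserver nfs modify -vserver vs1 -ignore-nt-acl-for-root enabled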

Auditing NAS events on SVMs with FlexVol volumes

Auditing for NAS events is a security measure that enables you to track and log certain CIFS and NFS events on Storage Virtual Machines (SVMs) with FlexVol volumes. This helps you track potential security problems and provides evidence of any security breaches. You can also stage and audit Active Directory central access policies to see what the result of implementing them would be.

CIFS events

You can audit the following events:
• SMB file and folder access events
  You can audit SMB file and folder access events on objects stored on FlexVol volumes belonging to the auditing-enabled SVMs.
• CIFS logon and logoff events
  You can audit CIFS logon and logoff events for CIFS servers on SVMs with FlexVol volumes.
• Central access policy staging events
  You can audit the effective access of objects on CIFS servers using permissions applied through proposed central access policies. Auditing through the staging of central access policies enables you to see what the effects are of central access policies before they are deployed.
  Auditing of central access policy staging is set up using Active Directory GPOs; however, the SVM auditing configuration must be configured to audit central access policy staging events.
  Although you can enable central access policy staging in the auditing configuration without enabling Dynamic Access Control on the CIFS server, central access policy staging events are generated only if Dynamic Access Control is enabled. Dynamic Access Control is enabled through a CIFS server option. It is not enabled by default.

NFS events

You can audit file and directory NFSv4 access events on objects stored on SVMs with FlexVol volumes.

How auditing works

Before you plan and configure your auditing configuration, you should understand how auditing works.

Basic auditing concepts

To understand auditing in Data ONTAP, you should be aware of some basic auditing concepts.

Staging files
  The intermediate binary files on individual nodes where audit records are stored prior to consolidation and conversion. Staging files are contained in staging volumes.

Staging volume
  A dedicated volume created by Data ONTAP to store staging files. There is one staging volume per aggregate. Staging volumes are shared by all audit-enabled Storage Virtual Machines (SVMs) to store audit records of data access for data volumes in that particular aggregate. Each SVM's audit records are stored in a separate directory within the staging volume.

  Cluster administrators can view information about staging volumes, but most other volume operations are not permitted. Only clustered Data ONTAP can create staging volumes. Clustered Data ONTAP automatically assigns a name to staging volumes. All staging volume names begin with MDV_aud_ followed by the UUID of the aggregate containing that staging volume (for example: MDV_aud_1d0131843d4811e296fc123478563412).

System volumes
  A FlexVol volume that contains special metadata, such as metadata for file services audit logs. The admin SVM owns system volumes, which are visible across the cluster. Staging volumes are a type of system volume.

Consolidation task
  A task that gets created when auditing is enabled. This long-running task on each SVM takes the audit records from staging files across the member nodes of the SVM. This task merges the audit records in sorted chronological order, and then converts them to a user-readable event log format specified in the auditing configuration, either the EVTX or XML file format. The converted event logs are stored in the audit event log directory that is specified in the SVM auditing configuration.

How the Data ONTAP auditing process works

The Data ONTAP auditing process is different than the Microsoft auditing process. Before you configure auditing, you should understand how the Data ONTAP auditing process works.

Audit records are initially stored in binary staging files on individual nodes. If auditing is enabled on an SVM, every member node maintains staging files for that SVM. Periodically, they are consolidated and converted to user-readable event logs, which are stored in the audit event log directory for the SVM.

Process when auditing is enabled on an SVM

Auditing can only be enabled on SVMs with FlexVol volumes. When the storage administrator enables auditing on the SVM, the auditing subsystem checks whether staging volumes are present. A staging volume must exist for each aggregate that contains data volumes owned by the SVM. The auditing subsystem creates any needed staging volumes if they do not exist.

The auditing subsystem also completes other prerequisite tasks before auditing is enabled:
• The auditing subsystem verifies that the log directory path is available and does not contain symlinks.
  The log directory must already exist. The auditing subsystem does not assign a default log file location. If the log directory path specified in the auditing configuration is not a valid path, auditing configuration creation fails with the following error:
  The specified path "/" does not exist in the namespace belonging to Vserver ""
  Configuration creation fails if the directory exists but contains symlinks.
• Auditing schedules the consolidation task.
  After this task is scheduled, auditing is enabled. The SVM auditing configuration and the log files persist across a reboot or if the NFS or CIFS servers are stopped or restarted.

Event log consolidation

Log consolidation is a scheduled task that runs on a routine basis until auditing is disabled. When auditing is disabled, the consolidation task ensures that all the remaining logs are consolidated.

Guaranteed auditing

By default, auditing is guaranteed. Data ONTAP guarantees that all auditable file access events (as specified by configured audit policy ACLs) are recorded, even if a node is unavailable. A requested file operation cannot be completed until the audit record for that operation is saved to the staging volume on persistent storage. If audit records cannot be committed to the disk in the staging files, either because of insufficient space or because of other issues, client operations are denied.

Consolidation process when a node is unavailable

If a node containing volumes belonging to an SVM with auditing enabled is unavailable, the behavior of the auditing consolidation task depends on whether the node's SFO partner (or the HA partner in the case of a two-node cluster) is available.
• If the staging volume is available through the SFO partner, the staging volumes last reported from the node are scanned, and consolidation proceeds normally.
• If the SFO partner is not available, the task creates a partial log file.
  When a node is not reachable, the consolidation task consolidates the audit records from the other available nodes of that SVM. To identify that it is not complete, the task adds the suffix .partial to the consolidated file name.
• After the unavailable node is available, the audit records in that node are consolidated with the audit records from the other nodes at that point of time.
• All audit records are preserved.

Event log rotation

Audit event log files are rotated when they reach a configured threshold log size or on a configured schedule. When an event log file is rotated, the scheduled consolidation task first renames the active converted file to a time-stamped archive file, and then creates a new active converted event log file.

Process when auditing is disabled on the SVM

When auditing is disabled on the SVM, the consolidation task is triggered one final time. All outstanding, recorded audit records are logged in user-readable format. Existing event logs stored in the event log directory are not deleted when auditing is disabled on the SVM and are available for viewing.

After all existing staging files for that SVM are consolidated, the consolidation task is removed from the schedule. Disabling the auditing configuration for the SVM does not remove the auditing configuration. A storage administrator can reenable auditing at any time.

The auditing consolidation job, which gets created when auditing is enabled, monitors the consolidation task and re-creates it if the consolidation task exits because of an error. Previously, users could delete the auditing consolidation job by using job manager commands such as job delete. Users are no longer allowed to delete the auditing consolidation job.

Related concepts Basic auditing concepts on page 129 What the supported audit event log formats are on page 133 SMB events that can be audited on page 134

Related tasks Creating a file and directory auditing configuration on SVMs on page 143

Related references NFS file and directory access events that can be audited on page 138

Aggregate space considerations when enabling auditing

When an auditing configuration is created and auditing is enabled on at least one Storage Virtual Machine (SVM) in the cluster, the auditing subsystem creates staging volumes on all existing aggregates and on all new aggregates that are created. You need to be aware of certain aggregate space considerations when you enable auditing on the cluster.

Staging volume creation might fail due to non-availability of space in an aggregate. This might happen if you create an auditing configuration and existing aggregates do not have enough space to contain the staging volume. You should ensure that there is enough space on existing aggregates for the staging volumes before enabling auditing on an SVM.

Related concepts Troubleshooting auditing and staging volume space issues on page 158

Auditing requirements and considerations
Before you configure and enable auditing on your Storage Virtual Machine (SVM), you need to be aware of certain requirements and considerations.
• The maximum number of auditing-enabled SVMs supported in a cluster is 50.
• Auditing is not tied to CIFS or NFS licensing. You can configure and enable auditing even if CIFS and NFS licenses are not installed on the cluster.
• NFS auditing supports security ACEs (type U).
• For NFS auditing, there is no mapping between mode bits and auditing ACEs. When converting ACLs to mode bits, auditing ACEs are skipped. When converting mode bits to ACLs, auditing ACEs are not generated.
• The directory specified in the auditing configuration must exist. If it does not exist, the command to create the auditing configuration fails.
• The directory specified in the auditing configuration must meet the following requirements:
◦ The directory must not contain symbolic links. If the directory specified in the auditing configuration contains symbolic links, the command to create the auditing configuration fails.
◦ You must specify the directory by using an absolute path. You should not specify a relative path, for example, /vs1/../.

• Auditing is dependent on having available space in the staging volumes. You must be aware of and have a plan for ensuring that there is sufficient space for the staging volumes in aggregates that contain audited volumes.
• Auditing is dependent on having available space in the volume containing the directory where converted event logs are stored. You must be aware of and have a plan for ensuring that there is sufficient space in the volumes used to store event logs. You can specify the number of event logs to retain in the auditing directory by using the -rotate-limit parameter when creating an auditing configuration, which can help to ensure that there is enough available space for the event logs in the volume.

• Although you can enable central access policy staging in the auditing configuration without enabling Dynamic Access Control on the CIFS server, Dynamic Access Control must be enabled to generate central access policy staging events. Dynamic Access Control is not enabled by default.

Related concepts Planning the auditing configuration on page 139

What the supported audit event log formats are
Supported file formats for the converted audit event logs are EVTX and XML file formats. You can specify the type of file format when you create the auditing configuration. By default, Data ONTAP converts the binary logs to the EVTX file format.
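For example, if you want the converted logs in XML rather than the default EVTX format, you can specify the format when you create the auditing configuration. The SVM name and destination path below are illustrative:

cluster1::> vserver audit create -vserver vs1 -destination /audit_log -format xml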

Related tasks Creating a file and directory auditing configuration on SVMs on page 143

Viewing audit event logs You can use audit event logs to determine whether you have adequate file security and whether there have been improper file and folder access attempts. You can view and process audit event logs saved in the EVTX or XML file formats.

• EVTX file format
You can open the converted EVTX audit event logs as saved files using Microsoft Event Viewer. There are two options that you can use when viewing event logs using Event Viewer:
◦ General view
Information that is common to all events is displayed for the event record. In this version of Data ONTAP, the event-specific data for the event record is not displayed. You can use the detailed view to display event-specific data.
◦ Detailed view
A friendly view and an XML view are available. The friendly view and the XML view display both the information that is common to all events and the event-specific data for the event record.

• XML file format You can view and process XML audit event logs on third-party applications that support the XML file format. XML viewing tools can be used to view the audit logs provided you have the XML schema and information about definitions for the XML fields. For more information about obtaining the XML schema and documents related to XML definitions, contact technical support or your account team.

Related concepts How the Data ONTAP auditing process works on page 130

Related tasks
Manually rotating the audit event logs on page 154

How active audit logs are viewed using Event Viewer
If the audit consolidation process is running on the cluster, the consolidation process appends new records to the active audit log file for audit-enabled Storage Virtual Machines (SVMs). This active audit log can be accessed and opened over an SMB share in Microsoft Event Viewer. In addition to viewing existing audit records, Event Viewer has a refresh option that enables you to refresh the content in the console window. Whether the newly appended logs are viewable in Event Viewer depends on whether oplocks are enabled on the share used to access the active audit log.

Oplocks setting on the share: Enabled
Behavior: Event Viewer opens the log that contains events written to it up to that point in time. The refresh operation does not refresh the log with new events appended by the consolidation process.

Oplocks setting on the share: Disabled
Behavior: Event Viewer opens the log that contains events written to it up to that point in time. The refresh operation refreshes the log with new events appended by the consolidation process.

Note: This information is applicable only for EVTX event logs. XML event logs can be viewed through SMB in a browser or through NFS using any XML editor or viewer.

SMB events that can be audited
Data ONTAP can audit certain SMB events, including certain file and folder access events, certain logon and logoff events, and central access policy staging events. Knowing which access events can be audited is helpful when interpreting results from the event logs. The following SMB events can be audited:

Event ID (EVT/EVTX) | Event | Description | Category
540/4624 | An account was successfully logged on | LOGON/LOGOFF: Network (CIFS) logon. | Logon and Logoff
529/4625 | An account failed to log on | LOGON/LOGOFF: Unknown user name or bad password. | Logon and Logoff
530/4625 | An account failed to log on | LOGON/LOGOFF: Account logon time restriction. | Logon and Logoff
531/4625 | An account failed to log on | LOGON/LOGOFF: Account currently disabled. | Logon and Logoff
532/4625 | An account failed to log on | LOGON/LOGOFF: User account has expired. | Logon and Logoff
533/4625 | An account failed to log on | LOGON/LOGOFF: User cannot log on to this computer. | Logon and Logoff
534/4625 | An account failed to log on | LOGON/LOGOFF: User not granted logon type here. | Logon and Logoff
535/4625 | An account failed to log on | LOGON/LOGOFF: User's password has expired. | Logon and Logoff
537/4625 | An account failed to log on | LOGON/LOGOFF: Logon failed for reasons other than above. | Logon and Logoff
539/4625 | An account failed to log on | LOGON/LOGOFF: Account locked out. | Logon and Logoff
538/4634 | An account was logged off | LOGON/LOGOFF: Local or network user logoff. | Logon and Logoff
560/4656 | Open Object/Create Object | OBJECT ACCESS: Object (file or directory) open. | File Access
563/4659 | Open Object with the Intent to Delete | OBJECT ACCESS: A handle to an object (file or directory) was requested with the Intent to Delete. | File Access
564/4660 | Delete Object | OBJECT ACCESS: Delete Object (file or directory). Data ONTAP generates this event when a Windows client attempts to delete the object (file or directory). | File Access
567/4663 | Read Object/Write Object/Get Object Attributes/Set Object Attributes | OBJECT ACCESS: Object access attempt (read, write, get attribute, set attribute). Note: For this event, Data ONTAP audits only the first SMB read and first SMB write operation (success or failure) on an object. This prevents Data ONTAP from creating excessive log entries when a single client opens an object and performs many successive read or write operations to the same object. | File Access
NA/4664 | Hard link | OBJECT ACCESS: An attempt was made to create a hard link. | File Access
NA/4818 | Proposed central access policy does not grant the same access permissions as the current central access policy | OBJECT ACCESS: Central Access Policy Staging. | File Access
NA/NA, Data ONTAP Event ID 9999 | Rename Object | OBJECT ACCESS: Object renamed. This is a Data ONTAP event. It is not currently supported by Windows as a single event. | File Access
NA/NA, Data ONTAP Event ID 9998 | Unlink Object | OBJECT ACCESS: Object unlinked. This is a Data ONTAP event. It is not currently supported by Windows as a single event. | File Access

Additional information about Event 4656
The HandleID tag in the audit XML event contains the handle of the object (file or directory) accessed. The HandleID tag for the EVTX 4656 event contains different information depending on whether the open event is for creating a new object or for opening an existing object:

• If the open event is an open request to create a new object (file or directory), the HandleID tag in the audit XML event shows an empty HandleID (for example: 00000000000000;00;00000000;00000000 ). The HandleID is empty because the OPEN (for creating a new object) request gets audited before the actual object creation happens and before a handle exists. Subsequent audited events for the same object have the right object handle in the HandleID tag.

• If the open event is an open request to open an existing object, the audit event will have the assigned handle of that object in the HandleID tag (for example: 00000000000401;00;000000ea;00123ed4 ).

Related concepts Configuring audit policies on NTFS security-style files and directories on page 146

Determining what the complete path to the audited object is
The object path printed in the ObjectName tag for an audit record contains the name of the volume (in parentheses) and the relative path from the root of the containing volume. If you want to determine the complete path of the audited object, including the junction path, there are certain steps you must take.

Steps 1. Determine what the volume name and relative path to the audited object are by looking at the ObjectName tag in the audit event.

Example In this example, the volume name is “data1” and the relative path to the file is /dir1/file.txt: (data1);/dir1/file.txt

2. Using the volume name determined in the previous step, determine what the junction path is for the volume containing the audited object:

Example volume show -junction -volume data1

                                          Junction                 Junction
Vserver   Volume    Language      Active  Junction Path            Path Source
--------- --------- ------------- ------- ------------------------ -----------
vs1       data1     en_US.UTF-8   true    /data/data1              RW_volume

3. Determine the full path to the audited object by appending the relative path found in the ObjectName tag to the junction path for the volume:

Example /data/data1/dir1/file.txt

Considerations when auditing symlinks and hard links
There are certain considerations you must keep in mind when auditing symlinks and hard links. An audit record contains information about the object being audited including the path to the audited object, which is identified in the ObjectName tag. You should be aware of how paths for symlinks and hard links are recorded in the ObjectName tag.

Symlinks
A symlink is a file with a separate inode that contains a pointer to the location of a destination object, known as the target. When accessing an object through a symlink, clustered Data ONTAP automatically interprets the symlink and follows the actual canonical, protocol-agnostic path to the target object in the volume. In the following example output, there are two symlinks, both pointing to a file named target.txt. One of the symlinks is a relative symlink and one is an absolute symlink. If either of the symlinks is audited, the ObjectName tag in the audit event contains the path to the file target.txt:

[root@host1 audit]# ls -l
total 0
lrwxrwxrwx 1 user1 group1 37 Apr  2 10:09 softlink_fullpath.txt -> /data/audit/target.txt
lrwxrwxrwx 1 user1 group1 10 Apr  2 09:54 softlink.txt -> target.txt
-rwxrwxrwx 1 user1 group1 16 Apr  2 10:05 target.txt

Hard links A hard link is a directory entry that associates a name with an existing file on a file system. The hard link points to the inode location of the original file. Similar to how clustered Data ONTAP interprets symlinks, clustered Data ONTAP interprets the hard link and follows the actual canonical path to the target object in the volume. When access to a hard link object is audited, the audit event records this absolute canonical path in the ObjectName tag rather than the hard link path.

Considerations when auditing alternate NTFS data streams
There are certain considerations you must keep in mind when auditing files with NTFS alternate data streams. The location of an object being audited is recorded in an event record using two tags, the ObjectName tag (the path) and the HandleID tag (the handle). To properly identify which stream requests are being logged, you must be aware of what clustered Data ONTAP records in these fields for NTFS alternate data streams:
• EVTX ID: 4656 events (open and create audit events)
◦ The path of the alternate data stream is recorded in the ObjectName tag.

◦ The handle of the alternate data stream is recorded in the HandleID tag.

• EVTX ID: 4663 events (all other audit events, such as read, write, getattr, and so on) ◦ The path of the base file, not the alternate data stream, is recorded in the ObjectName tag.

◦ The handle of the alternate data stream is recorded in the HandleID tag.

Example The following example illustrates how to identify EVTX ID: 4663 events for alternate data streams using the HandleID tag. Even though the ObjectName tag (path) recorded in the read audit event is to the base file path, the HandleID tag can be used to identify the event as an audit record for the alternate data stream. Stream file names take the form base_file_name:stream_name. In this example, the dir1 directory contains a base file with an alternate data stream having the following paths:

/dir1/file1.txt
/dir1/file1.txt:stream1

Note: The output in the following event example is truncated as indicated; the output does not display all of the available output tags for the events.

For an EVTX ID 4656 (open audit event), the audit record output for the alternate data stream records the alternate data stream name in the ObjectName tag:

- - 4656 Open Object [...] - [...] Stream 00000000000401;00;000001e4;00176767 (data1);/dir1/file1.txt:stream1 [...] -

For an EVTX ID 4663 (read audit event), the audit record output for the same alternate data stream records the base file name in the ObjectName tag; however, the handle in the HandleID tag is the alternate data stream's handle and can be used to correlate this event with the alternate data stream:

- - 4663 Read Object [...] - [...] Stream 00000000000401;00;000001e4;00176767 (data1);/dir1/file1.txt [...] -

NFS file and directory access events that can be audited
Data ONTAP can audit certain NFS file and directory access events. Knowing what access events can be audited is helpful when interpreting results from the converted audit event logs. You can audit the following NFS file and directory access events:
• READ
• OPEN
• CLOSE
• READDIR
• WRITE

• SETATTR
• CREATE
• LINK
• OPENATTR
• REMOVE
• GETATTR
• VERIFY
• NVERIFY
• RENAME
To reliably audit NFS RENAME events, you should set audit ACEs on directories instead of files because file permissions are not checked for a RENAME operation if the directory permissions are sufficient.
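For example, on a Linux NFSv4 client you might append an audit ACE (type U) to a directory's ACL so that successful and failed operations in that directory, including renames, are audited. This is a sketch only; the mount path, principal, flags, and permission letters are illustrative, and the exact syntax depends on your client's nfs4_setfacl implementation:

# Append an AUDIT ACE for user1 that logs successful (S) and failed (F) access to the directory
nfs4_setfacl -a "U:SF:user1@example.com:rwadx" /mnt/vol1/dir1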

Related tasks Configuring auditing for UNIX security style files and directories on page 150

Planning the auditing configuration
Before you configure auditing on Storage Virtual Machines (SVMs) with FlexVol volumes, you must understand which configuration options are available and plan the values that you want to set for each option. This information can help you configure the auditing configuration that meets your business needs. There are certain configuration parameters that are common to all auditing configurations. Additionally, there are certain parameters that you can use to specify which of two methods are used when rotating the consolidated and converted audit logs. You can specify one of the two following methods when you configure auditing:
• Rotate logs based on log size
This is the default method used to rotate logs.
• Rotate logs based on a schedule

Parameters common to all auditing configurations
There are two required parameters that you must specify when you create the auditing configuration. There are also three optional parameters that you can specify:

SVM name
Option: -vserver vserver_name (Required)
Name of the SVM on which to create the auditing configuration. The SVM must already exist.

Log destination path
Option: -destination text (Required)
Specifies where the converted audit logs are stored. The path must already exist on the SVM. The path can be up to 864 characters in length and must have read-write permissions. If the path is not valid, the audit configuration command fails. If the SVM is an SVM disaster recovery source, the log destination path cannot be on the root volume. This is because root volume content is not replicated to the disaster recovery destination.

Categories of events to audit
Option: -events {file-ops|cifs-logon-logoff|cap-staging} (Optional)
Specifies the categories of events to audit. The following event categories can be audited:
• File access events (both SMB and NFSv4)
• CIFS logon and logoff events
• Central access policy staging events
Central access policy staging events are a new advanced auditing event available starting with Windows 2012 Active Directory domains. Central access policy staging events log information about changes to central access policies configured in Active Directory. The default is to audit file access and CIFS logon and logoff events.
Note: Before you can specify cap-staging as an event category, a CIFS server must exist on the SVM. Although you can enable central access policy staging in the auditing configuration without enabling Dynamic Access Control on the CIFS server, central access policy staging events are generated only if Dynamic Access Control is enabled. Dynamic Access Control is enabled through a CIFS server option. It is not enabled by default.

Log file output format
Option: -format {xml|evtx} (Optional)
Determines the output format of the audit logs. The output format can be either Data ONTAP-specific XML or Microsoft Windows EVTX log format. By default, the output format is EVTX.

Log files rotation limit
Option: -rotate-limit integer (Optional)
Determines how many audit log files to retain before rotating the oldest log file out. For example, if you enter a value of 5, the last five log files are retained. A value of 0 indicates that all the log files are retained. The default value is 0.

Parameters used for determining when to rotate audit event logs

Rotate logs based on log size
The default is to rotate audit logs based on size. The default log size is 100 MB. If you want to use the default log rotation method and the default log size, you do not need to configure any specific parameters for log rotation. If you do not want to use the default log size, you can configure the -rotate-size parameter to specify a custom log size:

Log file size limit
Option: -rotate-size {integer[KB|MB|GB|TB|PB]} (Optional)
Determines the audit log file size limit.

Rotate logs based on a schedule
If you choose to rotate the audit logs based on a schedule, you can schedule log rotation by using the time-based rotation parameters in any combination.
• If you configure time-based log rotation parameters, logs are rotated based on the configured schedule instead of log size.

• If you use time-based rotation, the -rotate-schedule-minute parameter is mandatory.

• All other time-based rotation parameters are optional.
• The rotation schedule is calculated by using all the time-related values. For example, if you specify only the -rotate-schedule-minute parameter, the audit log files are rotated based on the minutes specified on all days of the week, during all hours on all months of the year.

• If you specify only one or two time-based rotation parameters (for example, -rotate-schedule-month and -rotate-schedule-minute), the log files are rotated based on the minute values that you specified on all days of the week, during all hours, but only during the specified months. For example, you can specify that the audit log is to be rotated during the months January, March, and August on all Mondays, Wednesdays, and Saturdays at 10:30 a.m.

• If you specify values for both -rotate-schedule-dayofweek and -rotate-schedule-day, they are considered independently. For example, if you specify -rotate-schedule-dayofweek as Friday and -rotate-schedule-day as 13, then the audit logs would be rotated on every Friday and on the 13th day of the specified month, not just on every Friday the 13th.
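The following command illustrates that behavior. It is a sketch only; the SVM name is illustrative. Because the day-of-week and day-of-month values are evaluated independently, this configuration rotates the logs at minute 0 of every hour on every Friday and also on the 13th day of every month:

cluster1::> vserver audit modify -vserver vs1 -rotate-schedule-dayofweek Friday -rotate-schedule-day 13 -rotate-schedule-minute 0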

You can use the following list of available auditing parameters to determine what values to use for configuring a schedule for audit event log rotations:

Log rotation schedule: Month
Option: -rotate-schedule-month chron_month (Optional)
Determines the monthly schedule for rotating audit logs. Valid values are January through December, and all. For example, you can specify that the audit log is to be rotated during the months January, March, and August.

Log rotation schedule: Day of week
Option: -rotate-schedule-dayofweek chron_dayofweek (Optional)
Determines the daily (day of week) schedule for rotating audit logs. Valid values are Sunday through Saturday, and all. For example, you can specify that the audit log is to be rotated on Tuesdays and Fridays, or during all the days of a week.

Log rotation schedule: Day
Option: -rotate-schedule-day chron_dayofmonth (Optional)
Determines the day of the month schedule for rotating the audit log. Valid values range from 1 through 31. For example, you can specify that the audit log is to be rotated on the 10th and 20th days of a month, or all days of a month.

Log rotation schedule: Hour
Option: -rotate-schedule-hour chron_hour (Optional)
Determines the hourly schedule for rotating the audit log. Valid values range from 0 (midnight) to 23 (11:00 p.m.). Specifying all rotates the audit logs every hour. For example, you can specify that the audit log is to be rotated at 6 (6 a.m.) and 18 (6 p.m.).

Log rotation schedule: Minute
Option: -rotate-schedule-minute chron_minute (Required if configuring schedule-based log rotation; otherwise, optional)
Determines the minute schedule for rotating the audit log. Valid values range from 0 to 59. For example, you can specify that the audit log is to be rotated at the 30th minute.

Related concepts
Configuring file and folder audit policies on page 146
Auditing requirements and considerations on page 132
What the supported audit event log formats are on page 133

Related tasks Creating a file and directory auditing configuration on SVMs on page 143

Creating a file and directory auditing configuration on SVMs
Creating a file and directory auditing configuration on your Storage Virtual Machine (SVM) with FlexVol volumes includes understanding the available configuration options, planning the configuration, and then configuring and enabling the configuration. You can then display information about the auditing configuration to confirm that the resultant configuration is the desired configuration.

Steps
1. Creating the auditing configuration on page 143
Before you can begin auditing file and directory events, you must create an auditing configuration on the Storage Virtual Machine (SVM).
2. Enabling auditing on the SVM on page 145
After you finish setting up the auditing configuration, you must enable auditing on the Storage Virtual Machine (SVM).
3. Verifying the auditing configuration on page 145
After completing the auditing configuration, you should verify that auditing is configured properly and is enabled.

Related concepts
Planning the auditing configuration on page 139
How to configure NTFS audit policies using the Data ONTAP CLI on page 149
Managing auditing configurations on page 153

Related tasks
Configuring NTFS audit policies using the Windows Security tab on page 146
Configuring auditing for UNIX security style files and directories on page 150
Enabling and disabling auditing on SVMs on page 154
Deleting an auditing configuration on page 157
Manually rotating the audit event logs on page 154

Creating the auditing configuration Before you can begin auditing file and directory events, you must create an auditing configuration on the Storage Virtual Machine (SVM).

Before you begin
If you plan on creating an auditing configuration for central access policy staging, a CIFS server must exist on the SVM.
Note: Although you can enable central access policy staging in the auditing configuration without enabling Dynamic Access Control on the CIFS server, central access policy staging events are

generated only if Dynamic Access Control is enabled. Dynamic Access Control is enabled through a CIFS server option. It is not enabled by default.

About this task If the SVM is an SVM disaster recovery source, the destination path cannot be on the root volume.

Step 1. Using the information in the planning worksheet, create the auditing configuration to rotate audit logs based on log size or a schedule:

If you want to rotate audit logs by log size, enter:
vserver audit create -vserver vserver_name -destination path -events [{file-ops|cifs-logon-logoff|cap-staging}] [-format {xml|evtx}] [-rotate-limit integer] [-rotate-size {integer[KB|MB|GB|TB|PB]}]

If you want to rotate audit logs by a schedule, enter:
vserver audit create -vserver vserver_name -destination path -events [{file-ops|cifs-logon-logoff|cap-staging}] [-format {xml|evtx}] [-rotate-limit integer] [-rotate-schedule-month chron_month] [-rotate-schedule-dayofweek chron_dayofweek] [-rotate-schedule-day chron_dayofmonth] [-rotate-schedule-hour chron_hour] -rotate-schedule-minute chron_minute
Note: The -rotate-schedule-minute parameter is required if configuring time-based audit log rotation.

Examples The following example creates an auditing configuration that audits file operations and CIFS logon and logoff events (the default) using size-based rotation. The log format is EVTX (the default). The logs are stored in the /audit_log directory. The log file size limit is 200 MB. The logs are rotated when they reach 200 MB in size:

cluster1::> vserver audit create -vserver vs1 -destination /audit_log -rotate-size 200MB

The following example creates an auditing configuration that audits file operations and CIFS logon and logoff events (the default) using size-based rotation. The log format is EVTX (the default). The log file size limit is 100 MB (the default), and the log rotation limit is 5:

cluster1::> vserver audit create -vserver vs1 -destination /audit_log -rotate-limit 5

The following example creates an auditing configuration that audits file operations, CIFS logon and logoff events, and central access policy staging events using time-based rotation. The log format is EVTX (the default). The audit logs are rotated monthly, at 12:30 p.m. on all days of the week. The log rotation limit is 5:

cluster1::> vserver audit create -vserver vs1 -destination /audit_log -events file-ops,cifs-logon-logoff,cap-staging -rotate-schedule-month all -rotate-schedule-dayofweek all -rotate-schedule-hour 12 -rotate-schedule-minute 30 -rotate-limit 5

Enabling auditing on the SVM After you finish setting up the auditing configuration, you must enable auditing on the Storage Virtual Machine (SVM).

Before you begin The SVM audit configuration must already exist.

About this task
When an SVM disaster recovery ID discard configuration is first started (after the SnapMirror initialization is complete) and the SVM has an auditing configuration, clustered Data ONTAP automatically disables the auditing configuration. Auditing is disabled on the read-only SVM to prevent the staging volumes from filling up. You can enable auditing only after the SnapMirror relationship is broken and the SVM is read-write.

Step 1. Enable auditing on the SVM: vserver audit enable -vserver vserver_name

Example vserver audit enable -vserver vs1

Verifying the auditing configuration After completing the auditing configuration, you should verify that auditing is configured properly and is enabled.

Step 1. Verify the auditing configuration: vserver audit show -instance -vserver vserver_name

Example The following command displays in list form all auditing configuration information for Storage Virtual Machine (SVM) vs1: vserver audit show -instance -vserver vs1

                          Vserver: vs1
                   Auditing state: true
             Log Destination Path: /audit_log
    Categories of Events to Audit: file-ops
                       Log Format: evtx
              Log File Size Limit: 200MB
     Log Rotation Schedule: Month: -
Log Rotation Schedule: Day of Week: -
       Log Rotation Schedule: Day: -
      Log Rotation Schedule: Hour: -
    Log Rotation Schedule: Minute: -
               Rotation Schedules: -
         Log Files Rotation Limit: 0

Configuring file and folder audit policies
Implementing auditing on file and folder access events is a two-step process. First you must create and enable an auditing configuration on Storage Virtual Machines (SVMs) with FlexVol volumes. Second, you must configure audit policies on the files and folders that you want to monitor. You can configure audit policies to monitor both successful and failed access attempts. You can configure both SMB and NFS audit policies. SMB and NFS audit policies have different configuration requirements and audit capabilities. If the appropriate audit policies are configured, Data ONTAP monitors SMB and NFS access events as specified in the audit policies only if the SMB or NFS servers are running.

Related concepts
How the Data ONTAP auditing process works on page 130
SMB events that can be audited on page 134
Displaying information about audit policies applied to files and directories on page 150

Configuring audit policies on NTFS security-style files and directories Before you can audit file and directory operations, you must configure audit policies on the files and directories for which you want to collect audit information. This is in addition to setting up and enabling the audit configuration. You can configure NTFS audit policies by using the Windows Security tab or by using the Data ONTAP CLI.

Configuring NTFS audit policies using the Windows Security tab
You can configure audit policies on files and directories by using the Windows Security tab in the Windows Properties window. This is the same method used when configuring audit policies on data residing on a Windows client, which enables customers to use the same GUI interface that they are accustomed to using.

Before you begin Auditing must be configured on the Storage Virtual Machine (SVM) that contains the data to which you are applying SACLs.

About this task Configuring NTFS audit policies is done by adding entries to NTFS system access control lists (SACLs) that are associated with an NTFS security descriptor. The security descriptor is then applied to NTFS files and directories. These tasks are automatically handled by the Windows GUI. The security descriptor can contain discretionary access control lists (DACLs) for applying file and folder access permissions, system access control lists (SACLs) for file and folder auditing, or both SACLs and DACLs. You can set NTFS audit policies for auditing access on individual files and folders using the Windows Security tab in the Windows Properties window by completing the following steps on a Windows host:

Steps 1. From the Tools menu in Windows Explorer, select Map network drive.

2. Complete the Map Network Drive box:

a. Select a Drive letter. Auditing NAS events on SVMs with FlexVol volumes | 147

b. In the Folder box, type the CIFS server name that contains the share holding the data you would like to audit and the name of the share.

Example If your CIFS server name is “CIFS_SERVER” and your share is named “share1”, you should enter \\CIFS_SERVER\share1. Note: You can specify the IP address of the data interface for the CIFS server instead of the CIFS server name.

c. Click Finish. The drive you selected is mounted and ready with the Windows Explorer window displaying files and folders contained within the share.

3. Select the file or directory for which you want to enable auditing access.

4. Right-click on the file or directory, and select Properties.

5. Select the Security tab.

6. Click Advanced.

7. Select the Auditing tab.

8. Perform the desired actions:

If you want to set up auditing for a new user or group:
a. Click Add.
b. In the Enter the object name to select box, type the name of the user or group that you want to add.
c. Click OK.

If you want to remove auditing from a user or group:
a. In the Enter the object name to select box, select the user or group that you want to remove.
b. Click Remove.
c. Click OK.
d. Skip the rest of this procedure.

If you want to change auditing for a user or group:
a. In the Enter the object name to select box, select the user or group that you want to change.
b. Click Edit.
c. Click OK.

If you are setting up auditing on a user or group or changing auditing on an existing user or group, the Auditing Entry for box opens.

9. In the Apply to box, select how you want to apply this auditing entry. You can select one of the following:

• This folder, subfolders and files

• This folder and subfolders

• This folder only

• This folder and files

• Subfolders and files only

• Subfolders only

• Files only

If you are setting up auditing on a single file, the Apply to box is not active. The Apply to defaults to This object only. Note: Since auditing takes SVM resources, select only the minimal level that provides the auditing events that meet your security requirements.

10. In the Access box, select what you want audited and whether you want to audit successful events, failure events or both.

• To audit successful events, select the Success box.

• To audit failure events, select the Failure box. You can audit the following events:

• Full control

• Traverse folder / execute file

• List folder / read data

• Read attributes

• Read extended attributes

• Create files / write data

• Create folders / append data

• Write attributes

• Write extended attributes

• Delete subfolders and files

• Delete

• Read permissions

• Change permissions

• Take ownership

Note: Select only the actions that you need to monitor to meet your security requirements. For more information on these auditable events, see your Windows documentation.

11. If you do not want the auditing setting to propagate to subsequent files and folders of the original container, select Apply these auditing entries to objects and/or containers within this container only box.

12. Click Apply.

13. After you finish adding, removing, or editing auditing entries, click OK. The Auditing Entry for box closes.

14. In the Auditing box, select the inheritance settings for this folder. You can choose one of the following:

• Select the Include inheritable auditing entries from this object's parent box.

• Select the Replace all existing inheritable auditing entries on all descendants with inheritable auditing entries from this object box.
• Select both boxes.
• Select neither box.

If you are setting SACLs on a single file, the Replace all existing inheritable auditing entries on all descendants with inheritable auditing entries from this object box is not present in the Auditing dialog box. Note: Select only the minimal level that provides the auditing events that meet your security requirements.

15. Click OK. The Auditing box closes.

Related concepts SMB events that can be audited on page 134

Related tasks
Displaying information about NTFS audit policies on FlexVol volumes using the CLI on page 151
Displaying information about audit policies using the Windows Security tab on page 150

How to configure NTFS audit policies using the Data ONTAP CLI
You can configure audit policies on files and folders using the Data ONTAP CLI. This enables you to configure NTFS audit policies without needing to connect to the data using an SMB share on a Windows client. You can configure NTFS audit policies by using the vserver security file-directory command family. You can only configure NTFS SACLs using the CLI. Configuring NFSv4 SACLs is not supported with this Data ONTAP command family. See the man pages for more information about using these commands to configure and add NTFS SACLs to files and folders.
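The following sequence is a sketch of what such a CLI-based configuration might look like: it creates a security descriptor, adds a SACL entry that audits failed full-control access for a user, and applies the descriptor to a path through a policy. The security descriptor, policy, account, and path names are illustrative, and the exact parameters of the vserver security file-directory commands can vary by release, so confirm them in the man pages before use:

cluster1::> vserver security file-directory ntfs create -vserver vs1 -ntfs-sd audit_sd
cluster1::> vserver security file-directory ntfs sacl add -vserver vs1 -ntfs-sd audit_sd -account EXAMPLE\user1 -access-type failure -rights full-control -apply-to this-folder,sub-folders,files
cluster1::> vserver security file-directory policy create -vserver vs1 -policy-name audit_policy
cluster1::> vserver security file-directory policy task add -vserver vs1 -policy-name audit_policy -path /corp -ntfs-sd audit_sd
cluster1::> vserver security file-directory apply -vserver vs1 -policy-name audit_policy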

Related concepts SMB events that can be audited on page 134

Related tasks
Displaying information about NTFS audit policies on FlexVol volumes using the CLI on page 151

Configuring auditing for UNIX security style files and directories
You configure auditing for UNIX security style files and directories by adding audit ACEs to NFSv4.x ACLs. This allows you to monitor certain NFS file and directory access events for security purposes.

About this task For NFSv4.x, both discretionary and system ACEs are stored in the same ACL. They are not stored in separate DACLs and SACLs. Therefore, you must exercise caution when adding audit ACEs to an existing ACL to avoid overwriting and losing an existing ACL. The order in which you add the audit ACEs to an existing ACL does not matter.

Steps

1. Retrieve the existing ACL for the file or directory by using the nfs4_getfacl or equivalent command. For more information about manipulating ACLs, see the man pages of your NFS client.

2. Append the desired audit ACEs.

3. Apply the updated ACL to the file or directory by using the nfs4_setfacl or equivalent command.
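The following is a minimal sketch of this procedure from a Linux NFSv4 client. The mount path, file name, principal, ACE flags, and permission letters are illustrative, and the exact syntax depends on your client's nfs4_getfacl and nfs4_setfacl implementation:

# 1. View the existing ACL so that you know what is already there and do not overwrite it
nfs4_getfacl /mnt/vol1/file1.txt

# 2. Append an AUDIT (type U) ACE that logs successful (S) and failed (F) read and write access by user1
nfs4_setfacl -a "U:SF:user1@example.com:rw" /mnt/vol1/file1.txt

# 3. Confirm that the audit ACE was added
nfs4_getfacl /mnt/vol1/file1.txt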

Related concepts Managing NFSv4 ACLs on page 114

Related references NFS file and directory access events that can be audited on page 138

Displaying information about audit policies applied to files and directories Displaying information about audit policies applied to files and directories enables you to verify that you have the appropriate system access control lists (SACLs) set on specified files and folders.

Related concepts Configuring file and folder audit policies on page 146

Displaying information about audit policies using the Windows Security tab You can display information about audit policies that have been applied to files and directories by using the Security tab in the Windows Properties window. This is the same method used for data residing on a Windows server, which enables customers to use the same GUI interface that they are accustomed to using.

About this task To display information about SACLs that have been applied to NTFS files and folders, complete the following steps on a Windows host.

Steps 1. From the Tools menu in Windows Explorer, select Map network drive.

2. Complete the Map Network Drive dialog box:

a. Select a Drive letter.

b. In the Folder box, type the IP address or CIFS server name of the Storage Virtual Machine (SVM) containing the share that holds the data you would like to audit, followed by the name of the share.

Example If your CIFS server name is “CIFS_SERVER” and your share is named “share1”, you should enter \\CIFS_SERVER\share1. Note: You can specify the IP address of the data interface for the CIFS server instead of the CIFS server name.

c. Click Finish. The drive you selected is mounted and ready with the Windows Explorer window displaying files and folders contained within the share.

3. Select the file or directory for which you want to display auditing information.

4. Right-click on the file or directory, and select Properties.

5. Select the Security tab.

6. Click Advanced.

7. Select the Auditing tab.

8. Click Continue. The Auditing box opens. The Auditing entries box displays a summary of users and groups that have SACLs applied to them.

9. In the Auditing entries box select the user or group whose SACL entries you want displayed.

10. Click Edit. The Auditing entry for box opens.

11. In the Access box, view the current SACLs that are applied to the selected object.

12. Click Cancel to close the Auditing entry for box.

13. Click Cancel to close the Auditing box.

Displaying information about NTFS audit policies on FlexVol volumes using the CLI
You can display information about NTFS audit policies on FlexVol volumes, including what the security styles and effective security styles are, what permissions are applied, and information about

system access control lists. You can use the results to validate your security configuration or to troubleshoot auditing issues.

About this task
You must supply the name of the Storage Virtual Machine (SVM) and the path to the files or folders whose audit information you want to display. You can display the output in summary form or as a detailed list.
• NTFS security-style volumes and qtrees use only NTFS SACLs (system access control lists) for audit policies.
• Files and folders in a mixed security-style volume with NTFS effective security can have NTFS audit policies applied to them. Mixed security-style volumes and qtrees can contain some files and directories that use UNIX file permissions, either mode bits or NFSv4 ACLs, and some files and directories that use NTFS file permissions.
• The top level of a mixed security-style volume can have either UNIX or NTFS effective security and might or might not contain NTFS SACLs.
• Because Storage-Level Access Guard security can be configured on a mixed security-style volume or qtree even if the effective security style of the volume root or qtree is UNIX, output for a volume or qtree path where Storage-Level Access Guard is configured might display both regular file and folder NFSv4 SACLs and Storage-Level Access Guard NTFS SACLs.
• If the path entered in the command is to data with NTFS effective security, the output also displays information about Dynamic Access Control ACEs if Dynamic Access Control is configured for the given file or directory path.
• When displaying security information about files and folders with NTFS effective security, UNIX-related output fields contain display-only UNIX file permission information. NTFS security-style files and folders use only NTFS file permissions and Windows users and groups when determining file access rights.
• ACL output is displayed only for files and folders with NTFS or NFSv4 security. This field is empty for files and folders using UNIX security that have only mode bit permissions applied (no NFSv4 ACLs).
• The owner and group output fields in the ACL output apply only in the case of NTFS security descriptors.

Step 1. Display file and directory audit policy settings with the desired level of detail:

If you want to display information in summary form, enter:
vserver security file-directory show -vserver vserver_name -path path

If you want to display information with expanded detail, enter:
vserver security file-directory show -vserver vserver_name -path path -expand-mask true

Examples The following example displays the audit policy information about the path /corp in SVM vs1. The path has NTFS effective security. The NTFS security descriptor contains both a SUCCESS and a SUCCESS/FAIL SACL entry.

cluster::> vserver security file-directory show -vserver vs1 -path /corp
                 Vserver: vs1
               File Path: /corp
       File Inode Number: 357
          Security Style: ntfs
         Effective Style: ntfs
          DOS Attributes: 10
  DOS Attributes in Text: ----D---
 Expanded Dos Attributes: -
            Unix User Id: 0
           Unix Group Id: 0
          Unix Mode Bits: 777
  Unix Mode Bits in Text: rwxrwxrwx
                    ACLs: NTFS Security Descriptor
                          Control:0x8014
                          Owner:DOMAIN\Administrator
                          Group:BUILTIN\Administrators
                          SACL - ACEs
                            ALL-DOMAIN\Administrator-0x100081-OI|CI|SA|FA
                            SUCCESSFUL-DOMAIN\user1-0x100116-OI|CI|SA
                          DACL - ACEs
                            ALLOW-BUILTIN\Administrators-0x1f01ff-OI|CI
                            ALLOW-BUILTIN\Users-0x1f01ff-OI|CI
                            ALLOW-CREATOR OWNER-0x1f01ff-OI|CI
                            ALLOW-NT AUTHORITY\SYSTEM-0x1f01ff-OI|CI

The following example displays the audit policy information about the path /datavol1 in SVM vs1. The path contains both regular file and folder SACLs and Storage-Level Access Guard SACLs.

cluster::> vserver security file-directory show -vserver vs1 -path /datavol1

                 Vserver: vs1
               File Path: /datavol1
       File Inode Number: 77
          Security Style: ntfs
         Effective Style: ntfs
          DOS Attributes: 10
  DOS Attributes in Text: ----D---
 Expanded Dos Attributes: -
            Unix User Id: 0
           Unix Group Id: 0
          Unix Mode Bits: 777
  Unix Mode Bits in Text: rwxrwxrwx
                    ACLs: NTFS Security Descriptor
                          Control:0xaa14
                          Owner:BUILTIN\Administrators
                          Group:BUILTIN\Administrators
                          SACL - ACEs
                            AUDIT-EXAMPLE\marketing-0xf01ff-OI|CI|FA
                          DACL - ACEs
                            ALLOW-EXAMPLE\Domain Admins-0x1f01ff-OI|CI
                            ALLOW-EXAMPLE\marketing-0x1200a9-OI|CI

                          Storage-Level Access Guard security
                            SACL (Applies to Directories):
                              AUDIT-EXAMPLE\Domain Users-0x120089-FA
                              AUDIT-EXAMPLE\engineering-0x1f01ff-SA
                            DACL (Applies to Directories):
                              ALLOW-EXAMPLE\Domain Users-0x120089
                              ALLOW-EXAMPLE\engineering-0x1f01ff
                              ALLOW-NT AUTHORITY\SYSTEM-0x1f01ff
                            SACL (Applies to Files):
                              AUDIT-EXAMPLE\Domain Users-0x120089-FA
                              AUDIT-EXAMPLE\engineering-0x1f01ff-SA
                            DACL (Applies to Files):
                              ALLOW-EXAMPLE\Domain Users-0x120089
                              ALLOW-EXAMPLE\engineering-0x1f01ff
                              ALLOW-NT AUTHORITY\SYSTEM-0x1f01ff

Managing auditing configurations
You can manage Storage Virtual Machine (SVM) auditing configurations by manually rotating the audit logs, enabling or disabling auditing, displaying information about auditing configurations, modifying auditing configurations, and deleting auditing configurations. You also need to understand what happens when reverting to a release where auditing is not supported.

Related concepts Troubleshooting auditing and staging volume space issues on page 158

Manually rotating the audit event logs Before you can view the audit event logs, the logs must be converted to user-readable formats. If you want to view the event logs for a specific Storage Virtual Machine (SVM) before Data ONTAP automatically rotates the log, you can manually rotate the audit event logs on an SVM.

Step

1. Rotate the audit event logs by using the vserver audit rotate-log command.

Example vserver audit rotate-log -vserver vs1 The audit event log is saved in the SVM audit event log directory with the format specified by the auditing configuration (XML or EVTX), and can be viewed by using the appropriate application.

Related concepts Viewing audit event logs on page 133

Related tasks Creating a file and directory auditing configuration on SVMs on page 143

Enabling and disabling auditing on SVMs You can enable or disable auditing on Storage Virtual Machines (SVMs). You might want to temporarily stop file and directory auditing by disabling auditing. You can enable auditing at any time (if an auditing configuration exists).

Before you begin Before you can enable auditing on the SVM, the SVM's auditing configuration must already exist.

About this task Disabling auditing does not delete the auditing configuration.

Steps 1. Perform the appropriate command:

If you want auditing to be enabled, enter:
vserver audit enable -vserver vserver_name

If you want auditing to be disabled, enter:
vserver audit disable -vserver vserver_name

2. Verify that auditing is in the desired state: vserver audit show -vserver vserver_name

Examples The following example enables auditing for SVM vs1:

cluster1::> vserver audit enable -vserver vs1

cluster1::> vserver audit show -vserver vs1

                           Vserver: vs1
                    Auditing state: true
              Log Destination Path: /audit_log
     Categories of Events to Audit: file-ops, cifs-logon-logoff
                        Log Format: evtx
               Log File Size Limit: 100MB
      Log Rotation Schedule: Month: -
Log Rotation Schedule: Day of Week: -
        Log Rotation Schedule: Day: -
       Log Rotation Schedule: Hour: -
     Log Rotation Schedule: Minute: -
                Rotation Schedules: -
          Log Files Rotation Limit: 10

The following example disables auditing for SVM vs1:

cluster1::> vserver audit disable -vserver vs1

                           Vserver: vs1
                    Auditing state: false
              Log Destination Path: /audit_log
     Categories of Events to Audit: file-ops, cifs-logon-logoff
                        Log Format: evtx
               Log File Size Limit: 100MB
      Log Rotation Schedule: Month: -
Log Rotation Schedule: Day of Week: -
        Log Rotation Schedule: Day: -
       Log Rotation Schedule: Hour: -
     Log Rotation Schedule: Minute: -
                Rotation Schedules: -
          Log Files Rotation Limit: 10

Related tasks Deleting an auditing configuration on page 157

Displaying information about auditing configurations You can display information about auditing configurations. The information can help you determine whether the configuration is what you want in place for each SVM. The displayed information also enables you to verify whether an auditing configuration is enabled.

About this task You can display detailed information about auditing configurations on all SVMs or you can customize what information is displayed in the output by specifying optional parameters. If you do not specify any of the optional parameters, the following is displayed: • SVM name to which the auditing configuration applies • The audit state, which can be true or false If the audit state is true, auditing is enabled. If the audit state is false, auditing is disabled.

• The categories of events to audit
• The audit log format
• The target directory where the auditing subsystem stores consolidated and converted audit logs

Step

1. Display information about the auditing configuration by using the vserver audit show command. For more information about using the command, see the man pages.

Examples The following example displays a summary of the auditing configuration for all SVMs:

cluster1::> vserver audit show

Vserver   State   Event Types   Log Format   Target Directory
--------- ------- ------------- ------------ ----------------
vs1       false   file-ops      evtx         /audit_log

The following example displays, in list form, all auditing configuration information for all SVMs:

cluster1::> vserver audit show -instance

                           Vserver: vs1
                    Auditing state: true
              Log Destination Path: /audit_log
     Categories of Events to Audit: file-ops
                        Log Format: evtx
               Log File Size Limit: 100MB
      Log Rotation Schedule: Month: -
Log Rotation Schedule: Day of Week: -
        Log Rotation Schedule: Day: -
       Log Rotation Schedule: Hour: -
     Log Rotation Schedule: Minute: -
                Rotation Schedules: -
          Log Files Rotation Limit: 0

Related tasks Creating a file and directory auditing configuration on SVMs on page 143

Commands for modifying auditing configurations
If you want to change an auditing setting, you can modify the current configuration at any time, including the log destination path and log format, the categories of events to audit, how log files are automatically rotated, and the maximum number of log files to retain.

If you want to modify the log destination path, use vserver audit modify with the -destination parameter.

If you want to modify the category of events to audit, use vserver audit modify with the -events parameter.
Note: To audit central access policy staging events, the Dynamic Access Control (DAC) CIFS server option must be enabled on the Storage Virtual Machine (SVM).

If you want to modify the log format, use vserver audit modify with the -format parameter.

If you want to enable automatic saves based on an internal log file size, use vserver audit modify with the -rotate-size parameter.

If you want to enable automatic saves based on a time interval, use vserver audit modify with the -rotate-schedule-month, -rotate-schedule-dayofweek, -rotate-schedule-day, -rotate-schedule-hour, and -rotate-schedule-minute parameters.

If you want to specify the maximum number of saved log files, use vserver audit modify with the -rotate-limit parameter.
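For example, the following sketch changes an existing configuration to keep five rotated log files and to rotate when the active log reaches 300 MB; the SVM name and values are illustrative:

cluster1::> vserver audit modify -vserver vs1 -rotate-size 300MB -rotate-limit 5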

Deleting an auditing configuration
If you no longer want to audit file and directory events on the Storage Virtual Machine (SVM) and do not want to maintain an auditing configuration on the SVM, you can delete the auditing configuration.

Steps 1. Disable the auditing configuration: vserver audit disable -vserver vserver_name

Example vserver audit disable -vserver vs1

2. Delete the auditing configuration: vserver audit delete -vserver vserver_name

Example vserver audit delete -vserver vs1

Related tasks Enabling and disabling auditing on SVMs on page 154

What the process is when reverting If you plan to revert the cluster, you should be aware of the revert process Data ONTAP follows when there are auditing-enabled Storage Virtual Machines (SVMs) in the cluster. You must take certain actions before reverting.

Reverting to a version of clustered Data ONTAP that does not support the auditing of CIFS logon and logoff events and central access policy staging events
Support for auditing of CIFS logon and logoff events and for central access policy staging events starts with clustered Data ONTAP 8.3. If you are reverting to a version of clustered Data ONTAP that does not support these event types and you have auditing configurations that monitor these event types, you must change the auditing configuration for those audit-enabled SVMs before reverting. You must modify the configuration so that only file-op events are audited.
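For example, before reverting you might restrict an audit-enabled SVM to file-op events only; the SVM name is illustrative:

cluster1::> vserver audit modify -vserver vs1 -events file-ops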

Related tasks
Enabling and disabling auditing on SVMs on page 154
Deleting an auditing configuration on page 157

Troubleshooting auditing and staging volume space issues Issues can arise when there is insufficient space on either the staging volumes or on the volume containing the audit event logs. If there is insufficient space, new audit records cannot be created, which prevents clients from accessing data, and access requests fail. You should know how to troubleshoot and resolve these volume space issues.

Related concepts Aggregate space considerations when enabling auditing on page 132

How to troubleshoot space issues related to the event log volumes
If volumes containing event log files run out of space, auditing cannot convert log records into log files. This results in client access failures. You need to know how to troubleshoot space issues related to event log volumes.
• Storage Virtual Machine (SVM) and cluster administrators can determine whether there is insufficient volume space by displaying information about volume and aggregate usage and configuration.

• If there is insufficient space in the volumes containing event logs, SVM and cluster administrators can resolve the space issues by either removing some of the event log files or by increasing the size of the volume. Note: If the aggregate that contains the event log volume is full, then the size of the aggregate must be increased before you can increase the size of the volume. Only a cluster administrator can increase the size of an aggregate.

• The destination path for the event log files can be changed to a directory on another volume by modifying the auditing configuration. For more information about viewing information about volumes and increasing volume size, see the Clustered Data ONTAP Logical Storage Management Guide. For more information about viewing information about aggregates and managing aggregates, see the Clustered Data ONTAP Physical Storage Management Guide.
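For example, the following sketch checks the space in the volume that holds the event logs and then grows it. The volume name, new size, and field names are illustrative; confirm the exact syntax in the volume man pages for your release:

cluster1::> volume show -vserver vs1 -volume audit_vol -fields size,available,percent-used
cluster1::> volume modify -vserver vs1 -volume audit_vol -size 4GB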

How to troubleshoot space issues related to the staging volumes (cluster administrators only)
If any of the volumes containing staging files for your Storage Virtual Machine (SVM) runs out of space, auditing cannot write log records into staging files. This results in client access failures. To troubleshoot this issue, you need to determine whether any of the staging volumes used in the SVM are full by displaying information about volume usage.
If the volume containing the consolidated event log files has sufficient space but there are still client access failures due to insufficient space, then the staging volumes might be out of space. The SVM administrator must contact you to determine whether the staging volumes that contain staging files for the SVM have insufficient space. The auditing subsystem generates an EMS event if auditing events cannot be generated due to insufficient space in a staging volume. The following message is displayed: No space left on device.
Only you can view information about staging volumes; SVM administrators cannot. All staging volume names begin with MDV_aud_ followed by the UUID of the aggregate containing that staging volume. The following example shows four system volumes on the admin SVM, which were automatically created when a file services auditing configuration was created for a data SVM in the cluster:

cluster1::> volume show -vserver cluster1
Vserver   Volume                                     Aggregate   State    Type  Size  Available  Used%
--------- ------------------------------------------ ----------- -------- ----- ----- ---------- -----
cluster1  MDV_aud_1d0131843d4811e296fc123478563412   aggr0       online   RW    2GB   1.90GB     5%
cluster1  MDV_aud_8be27f813d7311e296fc123478563412   root_vs0    online   RW    2GB   1.90GB     5%
cluster1  MDV_aud_9dc4ad503d7311e296fc123478563412   aggr1       online   RW    2GB   1.90GB     5%
cluster1  MDV_aud_a4b887ac3d7311e296fc123478563412   aggr2       online   RW    2GB   1.90GB     5%
4 entries were displayed.

If there is insufficient space in the staging volumes, you can resolve the space issues by increasing the size of the volume.
Note: If the aggregate that contains the staging volume is full, then the size of the aggregate must be increased before you can increase the size of the volume. Only you can increase the size of an aggregate; SVM administrators cannot.
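As a sketch, assuming the first staging volume from the preceding example needs more space, a cluster administrator could grow it with a command such as the following (the new size shown is a hypothetical value):

cluster1::> volume size -vserver cluster1 -volume MDV_aud_1d0131843d4811e296fc123478563412 -new-size 4GB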

Using FPolicy for file monitoring and management on SVMs with FlexVol volumes

FPolicy is a file access notification framework that is used to monitor and manage file access events on Storage Virtual Machines (SVMs) with FlexVol volumes. The framework generates notifications that are sent to either external FPolicy servers or to Data ONTAP. FPolicy supports event notifications for files and directories that are accessed using NFS and SMB.
Note: FPolicy is not supported on SVMs with Infinite Volume.

How FPolicy works
Before you plan and create your FPolicy configuration, you should understand the basics of how FPolicy works.

What the two parts of the FPolicy solution are
There are two parts to an FPolicy solution. The Data ONTAP FPolicy framework manages activities on the cluster and sends notifications to external FPolicy servers. External FPolicy servers process notifications sent by Data ONTAP FPolicy.
The Data ONTAP framework creates and maintains the FPolicy configuration, monitors file events, and sends notifications to external FPolicy servers. Data ONTAP FPolicy provides the infrastructure that allows communication between external FPolicy servers and Storage Virtual Machine (SVM) nodes.
The FPolicy framework connects to external FPolicy servers and sends notifications for certain file system events to the FPolicy servers when these events occur as a result of client access. The external FPolicy servers process the notifications and send responses back to the node. What happens as a result of the notification processing depends on the application and whether the communication between the node and the external servers is asynchronous or synchronous.

Related concepts
Roles that cluster components play with FPolicy implementation on page 161
How FPolicy works with external FPolicy servers on page 162
How FPolicy services work across SVM namespaces on page 165
FPolicy configuration types on page 165
What the steps for setting up an FPolicy configuration are on page 169

What synchronous and asynchronous notifications are
FPolicy sends notifications to external FPolicy servers via the FPolicy interface. The notifications are sent either in synchronous or asynchronous mode. The notification mode determines what Data ONTAP does after sending notifications to FPolicy servers.
Asynchronous notifications
With asynchronous notifications, the node does not wait for a response from the FPolicy server, which enhances overall throughput of the system. This type of notification is suitable for applications where the FPolicy server does not require that any action be taken as a result of notification evaluation. For example, asynchronous notifications are used when the Storage Virtual Machine (SVM) administrator wants to monitor and audit file access activity.
Synchronous notifications
When configured to run in synchronous mode, the FPolicy server must acknowledge every notification before the client operation is allowed to continue. This type of notification is used when an action is required based on the results of notification evaluation. For example, synchronous notifications are used when the SVM administrator wants to either allow or deny requests based on criteria specified on the external FPolicy server.

Related concepts
How control channels are used for FPolicy communication on page 162
How privileged data access channels are used for synchronous communication on page 162

Synchronous and asynchronous applications
There are many possible uses for FPolicy applications, both asynchronous and synchronous.
Asynchronous applications are ones where the external FPolicy server does not alter access to files or directories or modify data on the Storage Virtual Machine (SVM). For example:
• File access and audit logging
• Storage resource management
Synchronous applications are ones where data access is altered or data is modified by the external FPolicy server. For example:
• Quota management
• File access blocking
• File archiving and hierarchical storage management
• Encryption and decryption services
• Compression and decompression services
You can use the SDK for FPolicy to identify and implement other applications as well.

Roles that cluster components play with FPolicy implementation
The cluster, the contained Storage Virtual Machines (SVMs), and data LIFs all play a role in an FPolicy implementation.
cluster
The cluster contains the FPolicy management framework and maintains and manages information about all FPolicy configurations in the cluster.
SVM
An FPolicy configuration is defined at the SVM level. The scope of the configuration is the SVM, and it only operates on SVM resources. One SVM configuration cannot monitor and send notifications for file access requests that are made for data residing on another SVM. FPolicy configurations can be defined on the admin SVM. After configurations are defined on the admin SVM, they can be seen and used in all SVMs.
data LIFs
Connections to the FPolicy servers are made through data LIFs belonging to the SVM with the FPolicy configuration. The data LIFs used for these connections can fail over in the same manner as data LIFs used for normal client access.

How FPolicy works with external FPolicy servers
After FPolicy is configured and enabled on the Storage Virtual Machine (SVM), FPolicy runs on every node on which the SVM participates. FPolicy is responsible for establishing and maintaining connections with external FPolicy servers (FPolicy servers), for notification processing, and for managing notification messages to and from FPolicy servers.
Additionally, as part of connection management, FPolicy has the following responsibilities:
• Ensures that file notification flows through the correct LIF to the FPolicy server.
• Ensures that when multiple FPolicy servers are associated with a policy, load balancing is done when sending notifications to the FPolicy servers.
• Attempts to reestablish the connection when a connection to an FPolicy server is broken.
• Sends the notifications to FPolicy servers over an authenticated session.
• Manages the passthrough-read data connection established by the FPolicy server for servicing client requests when passthrough-read is enabled.

How control channels are used for FPolicy communication
FPolicy initiates a control channel connection to an external FPolicy server from the data LIFs of each node participating on a Storage Virtual Machine (SVM). FPolicy uses control channels for transmitting file notifications; therefore, an FPolicy server might see multiple control channel connections based on SVM topology.

How privileged data access channels are used for synchronous communication
With synchronous use cases, the FPolicy server accesses data residing on the Storage Virtual Machine (SVM) through a privileged data access path. Access through the privileged path exposes the complete file system to the FPolicy server. It can access data files to collect information, to scan files, read files, or write into files.
Because the external FPolicy server can access the entire file system from the root of the SVM through the privileged data channel, the privileged data channel connection must be secure.

Related concepts What granting super user credentials for privileged data access means on page 163

How FPolicy connection credentials are used with privileged data access channels
The FPolicy server makes privileged data access connections to cluster nodes by using a specific Windows user credential that is saved with the FPolicy configuration. SMB is the only supported protocol for making a privileged data access channel connection.
If the FPolicy server requires privileged data access, the following conditions must be met:
• A CIFS license must be enabled on the cluster.
• The FPolicy server must run under the credentials configured in the FPolicy configuration.
When making a data channel connection, FPolicy uses the credential for the specified Windows user name. Data access is made over the admin share ONTAP_ADMIN$.

What granting super user credentials for privileged data access means
Data ONTAP uses the combination of the IP address and the user credential configured in the FPolicy configuration to grant super user credentials to the FPolicy server. Super user status grants the following privileges when the FPolicy server accesses data:
• Avoid permission checks
The user avoids checks on files and directory access.
• Special locking privileges
Data ONTAP allows read, write, or modify access to any file regardless of existing locks. If the FPolicy server takes byte range locks on the file, it results in immediate removal of existing locks on the file.
• Bypass any FPolicy checks
Access does not generate any FPolicy notifications.

How FPolicy manages policy processing
There might be multiple FPolicy policies assigned to your Storage Virtual Machine (SVM), each with a different priority. To create an appropriate FPolicy configuration on the SVM, it is important to understand how FPolicy manages policy processing.
Each file access request is initially evaluated to determine which policies are monitoring this event. If it is a monitored event, information about the monitored event along with interested policies is passed to FPolicy where it is evaluated. Each policy is evaluated in order of the assigned priority.
You should consider the following recommendations when configuring policies:
• When you want a policy to always be evaluated before other policies, configure that policy with a higher priority.
• If the success of the requested file access operation on a monitored event is a prerequisite for a file request that is evaluated against another policy, give the policy that controls the success or failure of the first file operation a higher priority.
For example, if one policy manages FPolicy file archiving and restore functionality and a second policy manages file access operations on the online file, the policy that manages file restoration must have a higher priority so that the file is restored before the operation managed by the second policy can be allowed.
• If you want all policies that might apply to a file access operation to be evaluated, give synchronous policies a lower priority.
You can reorder policy priorities for existing policies by modifying the policy sequence number. However, to have FPolicy evaluate policies based on the modified priority order, you must disable and reenable the policy with the modified sequence number, as shown in the example that follows.
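As a minimal sketch, assuming a hypothetical SVM named vs1 and an existing policy named policy1 whose priority you want to change, the new sequence number takes effect when the policy is reenabled:

cluster1::> vserver fpolicy disable -vserver vs1 -policy-name policy1
cluster1::> vserver fpolicy enable -vserver vs1 -policy-name policy1 -sequence-number 1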

Related concepts Planning the FPolicy policy configuration on page 183

What the node-to-external FPolicy server communication process is
To properly plan your FPolicy configuration, you should understand what the node-to-external FPolicy server communication process is.
Every node that participates on each Storage Virtual Machine (SVM) initiates a connection to an external FPolicy server (FPolicy server) using TCP/IP. Connections to the FPolicy servers are set up using node data LIFs; therefore, a participating node can set up a connection only if the node has an operational data LIF for the SVM.

Each FPolicy process on participating nodes attempts to establish a connection with the FPolicy server when the policy is enabled. It uses the IP address and port of the FPolicy external engine specified in the policy configuration. The connection establishes a control channel from each of the nodes participating on each SVM to the FPolicy server through the data LIF. In addition, if IPv4 and IPv6 data LIF addresses are present on the same participating node, FPolicy attempts to establish connections for both IPv4 and IPv6.
Therefore, in a scenario where the SVM extends over multiple nodes or if both IPv4 and IPv6 addresses are present, the FPolicy server must be ready for multiple control channel setup requests from the cluster after the FPolicy policy is enabled on the SVM.
For example, if a cluster has three nodes—Node1, Node2, and Node3—and SVM data LIFs are spread across only Node2 and Node3, control channels are initiated only from Node2 and Node3, irrespective of the distribution of data volumes. Say that Node2 has two data LIFs—LIF1 and LIF2—that belong to the SVM and that the initial connection is from LIF1. If LIF1 fails, FPolicy attempts to establish a control channel from LIF2.

[Figure: A three-node cluster (Node1, Node2, Node3) with clients and an FPolicy server. SVM data LIFs (LIF1, LIF2, LIF3) exist only on Node2 and Node3, so control channels are set up to the FPolicy server only from those nodes; Node1, which has no SVM data LIF, initiates none. The time lines for each node with a data LIF show: client file request, FPolicy notification, privileged data access (optional), FPolicy response, and client file response.]

How FPolicy manages external communication during LIF migration or failover
Data LIFs can be migrated to data ports in the same node or to data ports on a remote node.
When a data LIF fails over or is migrated, a new control channel connection is made to the FPolicy server. FPolicy can then retry SMB and NFS client requests that timed out, with the result that new notifications are sent to the external FPolicy servers. The node rejects FPolicy server responses to original, timed-out SMB and NFS requests.

How FPolicy manages external communication during node failover
If the cluster node that hosts the data ports used for FPolicy communication fails, Data ONTAP breaks the connection between the FPolicy server and the node.
The impact of cluster failover to the FPolicy server can be mitigated by configuring the LIF manager to migrate the data port used in FPolicy communication to another active node. After the migration is complete, a new connection is established using the new data port.
If the LIF manager is not configured to migrate the data port, the FPolicy server must wait for the failed node to come up. After the node is up, a new connection is initiated from that node with a new Session ID.

Note: The FPolicy server detects broken connections with the keep-alive protocol message. The timeout for purging the session ID is determined when configuring FPolicy. The default keep-alive timeout is two minutes.

How FPolicy services work across SVM namespaces
Data ONTAP provides a unified Storage Virtual Machine (SVM) namespace. Volumes across the cluster are joined together by junctions to provide a single, logical file system. The FPolicy server is aware of the namespace topology and provides FPolicy services across the namespace.
The namespace is specific to and contained within the SVM; therefore, you can see the namespace only from the SVM context. Namespaces have the following characteristics:
• A single namespace exists in each SVM, with the root of the namespace being the root volume, represented in the namespace as slash (/).
• All other volumes have junction points below the root (/).
• Volume junctions are transparent to clients.
• A single NFS export can provide access to the complete namespace; otherwise, export policies can export specific volumes.
• SMB shares can be created on the volume or on qtrees within the volume, or on any directory within the namespace.
• The namespace architecture is flexible. Examples of typical namespace architectures are as follows:
◦ A namespace with a single branch off of the root
◦ A namespace with multiple branches off of the root
◦ A namespace with multiple unbranched volumes off of the root

Related concepts
How namespaces and volume junctions affect file access on SVMs with FlexVol volumes on page 11
Creating and managing data volumes in NAS namespaces on page 15

FPolicy configuration types
There are two basic FPolicy configuration types. One configuration uses external FPolicy servers to process and act upon notifications. The other configuration does not use external FPolicy servers; instead, it uses the Data ONTAP internal, native FPolicy server for simple file blocking based on extensions.
External FPolicy server configuration
The notification is sent to the FPolicy server, which screens the request and applies rules to determine whether the node should allow the requested file operation. For synchronous policies, the FPolicy server then sends a response to the node to either allow or block the requested file operation.
Native FPolicy server configuration
The notification is screened internally. The request is allowed or denied based on file extension settings configured in the FPolicy scope.

Related concepts
Planning the FPolicy policy configuration on page 183
Creating the FPolicy configuration on page 191

When to create a native FPolicy configuration
Native FPolicy configurations use the Data ONTAP internal FPolicy engine to monitor and block file operations based on the file's extension. This solution does not require external FPolicy servers (FPolicy servers). Using a native file blocking configuration is appropriate when this simple solution is all that is needed.
Native file blocking enables you to monitor any file operations that match configured operation and filtering events and then deny access to files with particular extensions. This is the default configuration.
This configuration provides a means to block file access based only on the file's extension. For example, to block files that contain mp3 extensions, you configure a policy to provide notifications for certain operations with target file extensions of mp3. The policy is configured to deny mp3 file requests for operations that generate notifications. A command sketch for this example follows the list below.
The following applies to native FPolicy configurations:
• The same set of filters and protocols that are supported by FPolicy server-based file screening are also supported for native file blocking.
• Native file blocking and FPolicy server-based file screening applications can be configured at the same time.
To do so, you can configure two separate FPolicy policies for the Storage Virtual Machine (SVM), with one configured for native file blocking and one configured for FPolicy server-based file screening.
• The native file blocking feature only screens files based on the extensions and not on the content of the file.
• In the case of symbolic links, native file blocking uses the file extension of the root file.
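The commands below are a minimal sketch of such a native file blocking configuration, not a complete procedure; the SVM name (vs1), event and policy names, and volume list are hypothetical, and the protocol and file operations you monitor may differ in your environment:

cluster1::> vserver fpolicy policy event create -vserver vs1 -event-name block_mp3_event -protocol cifs -file-operations create,rename,open
cluster1::> vserver fpolicy policy create -vserver vs1 -policy-name block_mp3 -events block_mp3_event -engine native -is-mandatory true
cluster1::> vserver fpolicy policy scope create -vserver vs1 -policy-name block_mp3 -volumes-to-include "*" -file-extensions-to-include mp3
cluster1::> vserver fpolicy enable -vserver vs1 -policy-name block_mp3 -sequence-number 1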

When to create a configuration that uses external FPolicy servers
FPolicy configurations that use external FPolicy servers to process and manage notifications provide robust solutions for use cases where more than simple file blocking based on file extension is needed.
You should create a configuration that uses external FPolicy servers when you want to do such things as monitor and record file access events, provide quota services, perform file blocking based on criteria other than simple file extensions, provide data migration services using hierarchical storage management applications, or provide a fine-grained set of policies that monitor only a subset of data in the Storage Virtual Machine (SVM).

How FPolicy passthrough-read enhances usability for hierarchical storage management
FPolicy passthrough-read enhances usability for hierarchical storage management (HSM). Passthrough-read enables the FPolicy server (functioning as the HSM server) to provide read access to offline files without having to recall the file from the secondary storage system to the primary storage system.
When an FPolicy server is configured to provide HSM to files residing on a CIFS server, policy-based file migration occurs where the files are stored offline on secondary storage and only a stub file remains on primary storage. Even though a stub file appears as a normal file to clients, it is actually a sparse file that is the same size as the original file. The sparse file has the CIFS offline bit set and points to the actual file that has been migrated to secondary storage.
Typically when a read request for an offline file is received, the requested content must be recalled back to primary storage and then accessed through primary storage. The need to recall data back to primary storage has several undesirable effects. Among the undesirable effects is the increased latency to client requests caused by the need to recall the content before responding to the request and the increased space consumption needed for recalled files on the primary storage.
FPolicy passthrough-read allows the HSM server (the FPolicy server) to provide read access to migrated, offline files without having to recall the file from the secondary storage system to the primary storage system. Instead of recalling the files back to primary storage, read requests can be serviced directly from secondary storage.
Passthrough-read enhances usability by providing the following benefits:
• Read requests can be serviced even if the primary storage does not have sufficient space to recall requested data back to primary storage.
• Better capacity and performance management when a surge of data recall might occur, such as if a script or a backup solution needs to access many offline files.
• Read requests for offline files in Snapshot copies can be serviced.
Because Snapshot copies are read-only, the FPolicy server cannot restore the original file if the stub file is located in a Snapshot copy. Using passthrough-read eliminates this problem.
• Policies can be set up that control when read requests are serviced through access to the file on secondary storage and when the offline file should be recalled to primary storage.
For example, a policy can be created on the HSM server that specifies the number of times the offline file can be accessed in a specified period of time before the file is migrated back to primary storage. This type of policy avoids recalling files that are rarely accessed.

How read requests are managed when FPolicy passthrough-read is enabled
You should understand how read requests are managed when FPolicy passthrough-read is enabled so that you can optimally configure connectivity between the Storage Virtual Machine (SVM) and the FPolicy servers.
When FPolicy passthrough-read is enabled and the SVM receives a request for an offline file, FPolicy sends a notification to the FPolicy server (HSM server) through the standard connection channel. After receiving the notification, the FPolicy server reads the data from the file path sent in the notification and sends the requested data to the SVM through the passthrough-read privileged data connection that is established between the SVM and the FPolicy server.
After the data is sent, the FPolicy server then responds to the read request as an ALLOW or DENY. Based on whether the read request is allowed or denied, Data ONTAP either sends the requested information or sends an error message to the client.

Requirements, considerations, and best practices for configuring FPolicy
Before you create and configure FPolicy configurations on your Storage Virtual Machines (SVMs) with FlexVol volumes, you need to be aware of certain requirements, considerations, and best practices for configuring FPolicy.

Related concepts
Planning the FPolicy policy configuration on page 183
Creating the FPolicy configuration on page 191

Ways to configure FPolicy
FPolicy features are configured either through the command line interface (CLI) or through APIs. This guide uses the CLI to create, manage, and monitor an FPolicy configuration on the cluster.

Requirements for setting up FPolicy
Before you configure and enable FPolicy on your Storage Virtual Machine (SVM), you need to be aware of certain requirements.
• All nodes in the cluster must be running a version of Data ONTAP that supports FPolicy.
• If you are not using the Data ONTAP native FPolicy engine, you must have external FPolicy servers (FPolicy servers) installed.
• The FPolicy servers must be installed on a server accessible from the data LIFs of the SVM where FPolicy policies are enabled.

• The IP address of the FPolicy server must be configured as a primary or secondary server in the FPolicy policy external engine configuration.
• If the FPolicy servers access data over a privileged data channel, the following additional requirements must be met:
◦ CIFS must be licensed on the cluster.
Privileged data access is accomplished using SMB connections.
◦ A user credential must be configured for accessing files over the privileged data channel.
◦ The FPolicy server must run under the credentials configured in the FPolicy configuration.
◦ All data LIFs used to communicate with the FPolicy servers must be configured to have cifs as one of the allowed protocols.
This includes the LIFs used for passthrough-read connections.

Related concepts
Planning the FPolicy external engine configuration on page 171
How privileged data access channels are used for synchronous communication on page 162
How FPolicy connection credentials are used with privileged data access channels on page 162
What granting super user credentials for privileged data access means on page 163

Best practices and recommendations when setting up FPolicy
When setting up FPolicy on Storage Virtual Machines (SVMs), you need to be familiar with configuration best practices and recommendations to ensure that your FPolicy configuration provides robust monitoring performance and results that meet your requirements.
• External FPolicy servers (FPolicy servers) should be placed in close proximity to the cluster, with high-bandwidth connectivity, to minimize latency.
• The FPolicy external engine should be configured with more than one FPolicy server to provide resiliency and high availability of FPolicy server notification processing, especially if policies are configured for synchronous screening.
• It is recommended that you disable the FPolicy policy before making any configuration changes.

For example, if you want to add or modify an IP address in the FPolicy external engine configured for the enabled policy, you should first disable the policy.
• The cluster node-to-FPolicy server ratio should be optimized to ensure that FPolicy servers are not overloaded, which can introduce latencies when the SVM responds to client requests.
The optimal ratio depends on the application for which the FPolicy server is being used.

Related concepts Planning the FPolicy external engine configuration on page 171

Related tasks Enabling or disabling FPolicy policies on page 197

Passthrough-read upgrade and revert considerations
There are certain upgrade and revert considerations that you must know about before upgrading to a Data ONTAP release that supports passthrough-read or before reverting to a release that does not support passthrough-read.

Upgrading
After all nodes are upgraded to a version of Data ONTAP that supports FPolicy passthrough-read, the cluster is capable of using the passthrough-read functionality; however, passthrough-read is disabled by default on existing FPolicy configurations. To use passthrough-read on existing FPolicy configurations, you must disable the FPolicy policy and modify the configuration, and then reenable the configuration.
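As a rough sketch only, assuming a hypothetical SVM vs1 and policy hsm_pol, and assuming passthrough-read is controlled by a policy-level -is-passthrough-read-enabled parameter, the disable, modify, reenable sequence would look something like this:

cluster1::> vserver fpolicy disable -vserver vs1 -policy-name hsm_pol
cluster1::> vserver fpolicy policy modify -vserver vs1 -policy-name hsm_pol -is-passthrough-read-enabled true
cluster1::> vserver fpolicy enable -vserver vs1 -policy-name hsm_pol -sequence-number 1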

Reverting
Before reverting to a version of Data ONTAP that does not support FPolicy passthrough-read, the following conditions must be met:
• All the policies using passthrough-read must be disabled, and then the affected configurations must be modified so that they do not use passthrough-read.
• FPolicy functionality must be disabled on the cluster by disabling every FPolicy policy on the cluster.

What the steps for setting up an FPolicy configuration are
Before FPolicy can monitor file access, an FPolicy configuration must be created and enabled on the Storage Virtual Machine (SVM) for which FPolicy services are required. The steps for setting up and enabling an FPolicy configuration on the SVM are as follows:

1. Create an FPolicy external engine.
The FPolicy external engine identifies the external FPolicy servers (FPolicy servers) that are associated with a specific FPolicy configuration. If the internal “native” FPolicy engine is used to create a native file-blocking configuration, you do not need to create an FPolicy external engine.

2. Create an FPolicy event.
An FPolicy event describes what the FPolicy policy should monitor. Events consist of the protocols and file operations to monitor, and can contain a list of filters. Events use filters to narrow the list of monitored events for which the FPolicy external engine must send notifications. Events also specify whether the policy monitors volume operations.

3. Create an FPolicy policy.

The FPolicy policy is responsible for associating, with the appropriate scope, the set of events that need to be monitored and for which of the monitored events notifications must be sent to the designated FPolicy server (or to the native engine if no FPolicy servers are configured).
The policy also defines whether the FPolicy server is allowed privileged access to the data for which it receives notifications. An FPolicy server needs privileged access if the server needs to access the data. Typical use cases where privileged access is needed include file blocking, quota management, and hierarchical storage management. The policy is where you specify whether the configuration for this policy uses an FPolicy server or the internal “native” FPolicy server.
A policy specifies whether screening is mandatory. If screening is mandatory and all FPolicy servers are down or no response is received from the FPolicy servers within a defined timeout period, then file access is denied.
A policy's boundaries are the SVM. A policy cannot apply to more than one SVM. However, a specific SVM can have multiple FPolicy policies, each with the same or different combination of scope, event, and external server configurations.

4. Configure the policy scope.
The FPolicy scope determines which volumes, shares, or export policies the policy acts on or excludes from monitoring. A scope also determines which file extensions should be included or excluded from FPolicy monitoring.
Note: Exclude lists take precedence over include lists.

5. Enable the FPolicy policy.
When the policy is enabled, the control channels and, optionally, the privileged data channels are connected. The FPolicy process on the nodes on which the SVM participates begins monitoring file and folder access and, for events that match configured criteria, sends notifications to the FPolicy servers (or to the native engine if no FPolicy servers are configured). A consolidated command sketch follows these steps.

Note: If the policy uses native file blocking, an external engine is not configured or associated with the policy.
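The following is a minimal command sketch of the five steps for an external-server configuration; the SVM name (vs1), object names, IP address, port, protocol, and file operations are hypothetical placeholder values that a real configuration would replace:

cluster1::> vserver fpolicy policy external-engine create -vserver vs1 -engine-name engine1 -primary-servers 10.1.1.1 -port 2002 -extern-engine-type synchronous -ssl-option no-auth
cluster1::> vserver fpolicy policy event create -vserver vs1 -event-name event1 -protocol nfsv3 -file-operations create,write,delete
cluster1::> vserver fpolicy policy create -vserver vs1 -policy-name policy1 -events event1 -engine engine1 -is-mandatory true
cluster1::> vserver fpolicy policy scope create -vserver vs1 -policy-name policy1 -volumes-to-include "*"
cluster1::> vserver fpolicy enable -vserver vs1 -policy-name policy1 -sequence-number 1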

Related concepts
Planning the FPolicy configuration on page 170
Creating the FPolicy configuration on page 191

Planning the FPolicy configuration
Before you create an FPolicy configuration, you must understand what is involved in each step of the configuration. You need to decide what settings you need to use when performing the configuration and record them in the planning worksheets.
You need to plan for the following configuration tasks:

• Creating the FPolicy external engine
• Creating the FPolicy policy event
• Creating the FPolicy policy
• Creating the FPolicy policy scope
FPolicy is supported on Storage Virtual Machines (SVMs) with FlexVol volumes. FPolicy is not supported on SVMs with Infinite Volume.

Related concepts
What the steps for setting up an FPolicy configuration are on page 169
Creating the FPolicy configuration on page 191

Planning the FPolicy external engine configuration
Before you configure the FPolicy external engine (external engine), you must understand what it means to create an external engine and which configuration parameters are available. This information helps you to determine which values to set for each parameter.

Information that is defined when creating the FPolicy external engine
The external engine configuration defines the information that FPolicy needs to make and manage connections to the external FPolicy servers (FPolicy servers), including the following information:
• Storage Virtual Machine (SVM) name
• Engine name
• The IP addresses of the primary and secondary FPolicy servers and the TCP port number to use when making the connection to the FPolicy servers
• Whether the engine type is asynchronous or synchronous
• How to authenticate the connection between the node and the FPolicy server
If you choose to configure mutual SSL authentication, then you must also configure parameters that provide SSL certificate information.
• How to manage the connection using various advanced privilege settings
This includes parameters that define such things as timeout values, retry values, keep-alive values, maximum request values, send and receive buffer size values, and session timeout values.

The vserver fpolicy policy external-engine create command is used to create an FPolicy external engine.
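For example, the following is a hedged sketch of a basic external engine definition; the SVM name (vs1), engine name, server IP address, and port are hypothetical values that you would replace with your own:

cluster1::> vserver fpolicy policy external-engine create -vserver vs1 -engine-name engine1 -primary-servers 10.1.1.1 -port 2002 -extern-engine-type synchronous -ssl-option no-auth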

What the basic external engine parameters are
You can use the following table of basic FPolicy configuration parameters to help you plan your configuration:

SVM (-vserver vserver_name)
Specifies the SVM name that you want to associate with this external engine. Each FPolicy configuration is defined within a single SVM. The external engine, policy event, policy scope, and policy that combine together to create an FPolicy policy configuration must all be associated with the same SVM.

Engine name (-engine-name engine_name)
Specifies the name to assign to the external engine configuration. You must specify the external engine name later when you create the FPolicy policy. This associates the external engine with the policy. The name can be up to 256 characters long.
Note: The name should be up to 200 characters long if configuring the external engine name in a MetroCluster or SVM disaster recovery configuration.
The name can contain any combination of the following ASCII-range characters:
• a through z
• A through Z
• 0 through 9
• “_”, “-”, and “.”

Primary FPolicy servers (-primary-servers IP_address,...)
Specifies the primary FPolicy servers to which the node sends notifications for a given FPolicy policy. The value is specified as a comma-delimited list of IP addresses.
If more than one primary server IP address is specified, every node on which the SVM participates creates a control connection to every specified primary FPolicy server at the time the policy is enabled. If you configure multiple primary FPolicy servers, notifications are sent to the FPolicy servers in a round-robin fashion.
If the external engine is used in a MetroCluster or SVM disaster recovery configuration, you should specify the IP addresses of the FPolicy servers at the source site as primary servers. The IP addresses of the FPolicy servers at the destination site should be specified as secondary servers.

Port number (-port integer)
Specifies the port number of the FPolicy service.

Secondary FPolicy servers (-secondary-servers IP_address,...)
Specifies the secondary FPolicy servers to which to send file access events for a given FPolicy policy. The value is specified as a comma-delimited list of IP addresses.
Secondary servers are used only when none of the primary servers are reachable. Connections to secondary servers are established when the policy is enabled, but notifications are sent to secondary servers only if none of the primary servers are reachable. If you configure multiple secondary servers, notifications are sent to the FPolicy servers in a round-robin fashion.

External engine type (-extern-engine-type external_engine_type)
Specifies whether the external engine operates in synchronous or asynchronous mode. By default, FPolicy operates in synchronous mode. The value for this parameter can be one of the following:
• synchronous
• asynchronous
When set to synchronous, file request processing sends a notification to the FPolicy server, but then does not continue until after receiving a response from the FPolicy server. At that point, request flow either continues or processing results in denial, depending on whether the response from the FPolicy server permits the requested action. When set to asynchronous, file request processing sends a notification to the FPolicy server, and then continues.

SSL option for communication with FPolicy server (-ssl-option {no-auth|server-auth|mutual-auth})
Specifies the SSL option for communication with the FPolicy server. This is a required parameter. You can choose one of the options based on the following information:
• When set to no-auth, no authentication takes place. The communication link is established over TCP.
• When set to server-auth, the SVM authenticates the FPolicy server using SSL server authentication.
• When set to mutual-auth, mutual authentication takes place between the SVM and the FPolicy server; the SVM authenticates the FPolicy server, and the FPolicy server authenticates the SVM.
If you choose to configure mutual SSL authentication, then you must also configure the -certificate-common-name, -certificate-serial, and -certificate-ca parameters.

Certificate FQDN or custom common name (-certificate-common-name text)
Specifies the certificate name used if SSL authentication between the SVM and the FPolicy server is configured. You can specify the certificate name as an FQDN or as a custom common name. If you specify mutual-auth for the -ssl-option parameter, you must specify a value for the -certificate-common-name parameter.

Certificate serial number (-certificate-serial text)
Specifies the serial number of the certificate used for authentication if SSL authentication between the SVM and the FPolicy server is configured. If you specify mutual-auth for the -ssl-option parameter, you must specify a value for the -certificate-serial parameter.

Certificate authority (-certificate-ca text)
Specifies the CA name of the certificate used for authentication if SSL authentication between the SVM and the FPolicy server is configured. If you specify mutual-auth for the -ssl-option parameter, you must specify a value for the -certificate-ca parameter.

What the advanced external engine options are
You can use the following table of advanced FPolicy configuration parameters as you plan whether to customize your configuration with advanced parameters. You use these parameters to modify communication behavior between the cluster nodes and the FPolicy servers:

Timeout for canceling a request (-reqs-cancel-timeout integer[h|m|s])
Specifies the time interval in hours (h), minutes (m), or seconds (s) that the node waits for a response from the FPolicy server.
If the timeout interval passes, the node sends a cancel request to the FPolicy server. The node then sends the notification to an alternate FPolicy server. This timeout helps in handling an FPolicy server that is not responding, which can improve SMB/NFS client response. Also, canceling requests after a timeout period can help in releasing system resources because the notification request is moved from a down/bad FPolicy server to an alternate FPolicy server.
The range for this value is 0 through 100. If the value is set to 0, the option is disabled and cancel request messages are not sent to the FPolicy server. The default is 20s.

Timeout for aborting a request (-reqs-abort-timeout integer[h|m|s])
Specifies the timeout in hours (h), minutes (m), or seconds (s) for aborting a request.
The range for this value is 0 through 200.

Interval for sending status requests (-status-req-interval integer[h|m|s])
Specifies the interval in hours (h), minutes (m), or seconds (s) after which a status request is sent to the FPolicy server.
The range for this value is 0 through 50. If the value is set to 0, the option is disabled and status request messages are not sent to the FPolicy server. The default is 10s.

Maximum outstanding requests on the FPolicy server (-max-server-reqs integer)
Specifies the maximum number of outstanding requests that can be queued on the FPolicy server.
The range for this value is 1 through 10000. The default is 50.

Timeout for disconnecting a nonresponsive FPolicy server (-server-progress-timeout integer[h|m|s])
Specifies the time interval in hours (h), minutes (m), or seconds (s) after which the connection to the FPolicy server is terminated.
The connection is terminated after the timeout period only if the FPolicy server's queue contains the maximum allowed requests and no response is received within this timeout period. The maximum allowed number of requests is either 50 (the default) or the number specified by the -max-server-reqs parameter.
The range for this value is 1 through 100. The default is 60s.

Interval for sending keep-alive messages to the FPolicy server (-keep-alive-interval integer[h|m|s])
Specifies the time interval in hours (h), minutes (m), or seconds (s) at which keep-alive messages are sent to the FPolicy server. Keep-alive messages detect half-open connections.
The range for this value is 10 through 600. If the value is set to 0, the option is disabled and keep-alive messages are prevented from being sent to the FPolicy servers. The default is 120s.

Maximum reconnect attempts (-max-connection-retries integer)
Specifies the maximum number of times the SVM attempts to reconnect to the FPolicy server after the connection has been broken.
The range for this value is 0 through 20. The default is 5.

Receive buffer size (-recv-buffer-size integer)
Specifies the receive buffer size of the connected socket for the FPolicy server. The default value is set to 256 kilobytes (Kb). When the value is set to 0, the size of the receive buffer is set to a value defined by the system.
For example, if the default receive buffer size of the socket is 65536 bytes, by setting the tunable value to 0, the socket buffer size is set to 65536 bytes. You can use any non-default value to set the size (in bytes) of the receive buffer.

Send buffer size (-send-buffer-size integer)
Specifies the send buffer size of the connected socket for the FPolicy server. The default value is set to 256 kilobytes (Kb). When the value is set to 0, the size of the send buffer is set to a value defined by the system.
For example, if the default send buffer size of the socket is set to 65536 bytes, by setting the tunable value to 0, the socket buffer size is set to 65536 bytes. You can use any non-default value to set the size (in bytes) of the send buffer.

Timeout for purging a session ID during reconnection (-session-timeout [integerh][integerm][integers])
Specifies the interval in hours (h), minutes (m), or seconds (s) after which a new session ID is sent to the FPolicy server during reconnection attempts.
If the connection between the storage controller and the FPolicy server is terminated and reconnection is made within the -session-timeout interval, the old session ID is sent to the FPolicy server so that it can send responses for old notifications.
The default value is set to 10 seconds.

Additional information about configuring FPolicy external engines to use SSL authenticated connections
You need to know some additional information if you want to configure the FPolicy external engine to use SSL when connecting to FPolicy servers.

SSL server authentication
If you choose to configure the FPolicy external engine for SSL server authentication, before creating the external engine, you must install the public certificate of the certificate authority (CA) that signed the FPolicy server certificate.

Mutual authentication
If you configure FPolicy external engines to use SSL mutual authentication when connecting Storage Virtual Machine (SVM) data LIFs to external FPolicy servers, before creating the external engine, you must install the public certificate of the CA that signed the FPolicy server certificate along with the public certificate and key file for authentication of the SVM. You must not delete this certificate while any FPolicy policies are using the installed certificate.

If the certificate is deleted while FPolicy is using it for mutual authentication when connecting to an external FPolicy server, you cannot reenable a disabled FPolicy policy that uses that certificate. The FPolicy policy cannot be reenabled in this situation even if a new certificate with the same settings is created and installed on the SVM. If the certificate has been deleted, you need to install a new certificate, create new FPolicy external engines that use the new certificate, and associate the new external engines with the FPolicy policy that you want to reenable by modifying the FPolicy policy.

How to install certificates for SSL
The public certificate of the CA that is used to sign the FPolicy server certificate is installed by using the security certificate install command with the -type parameter set to client_ca. The private key and public certificate required for authentication of the SVM are installed by using the security certificate install command with the -type parameter set to server.
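As a sketch, assuming a hypothetical SVM named vs1, the two installations would be started with commands such as the following; each command then prompts you to paste the certificate (and, for the server type, the private key):

cluster1::> security certificate install -vserver vs1 -type client_ca
cluster1::> security certificate install -vserver vs1 -type server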

Certificates do not replicate in SVM disaster recovery relationships with a non-ID-preserve configuration
Security certificates used for SSL authentication when making connections to FPolicy servers do not replicate to SVM disaster recovery destinations with non-ID-preserve configurations. Although the FPolicy external-engine configuration on the SVM is replicated, security certificates are not replicated. You must manually install the security certificates on the destination.
When you set up the SVM disaster recovery relationship, the value you select for the -identity-preserve option of the snapmirror create command determines the configuration details that are replicated in the destination SVM.
If you set the -identity-preserve option to true (ID-preserve), all of the FPolicy configuration details are replicated, including the security certificate information. You must install the security certificates on the destination only if you set the option to false (non-ID-preserve).

Restrictions for cluster-scoped FPolicy external engines with MetroCluster and SVM disaster recovery configurations
You can create a cluster-scoped FPolicy external engine by assigning the cluster Storage Virtual Machine (SVM) to the external engine. However, when creating a cluster-scoped external engine in a MetroCluster or SVM disaster recovery configuration, there are certain restrictions when choosing the authentication method that the SVM uses for external communication with the FPolicy server.
There are three authentication options that you can choose when creating external FPolicy servers: no authentication, SSL server authentication, and SSL mutual authentication. Although there are no restrictions when choosing the authentication option if the external FPolicy server is assigned to a data SVM, there are restrictions when creating a cluster-scoped FPolicy external engine:

Configuration: MetroCluster or SVM disaster recovery and a cluster-scoped FPolicy external engine with no authentication (SSL is not configured)
Permitted? Yes

Configuration: MetroCluster or SVM disaster recovery and a cluster-scoped FPolicy external engine with SSL server or SSL mutual authentication
Permitted? No

• If a cluster-scoped FPolicy external engine with SSL authentication exists and you want to create a MetroCluster or SVM disaster recovery configuration, you must modify this external engine to use no authentication or remove the external engine before you can create the MetroCluster or SVM disaster recovery configuration.

• If the MetroCluster or SVM disaster recovery configuration already exists, clustered Data ONTAP prevents you from creating a cluster-scoped FPolicy external engine with SSL authentication.

Completing the FPolicy external engine configuration worksheet
You can use this worksheet to record the values that you need during the FPolicy external engine configuration process. If a parameter value is required, you need to determine what value to use for those parameters before you configure the external engine.

Information for a basic external engine configuration
You should record whether you want to include each parameter setting in the external engine configuration and then record the value for the parameters that you want to include.

Type of information                                          Required   Include   Your values
Storage Virtual Machine (SVM) name                           Yes        Yes
Engine name                                                  Yes        Yes
Primary FPolicy servers                                      Yes        Yes
Port number                                                  Yes        Yes
Secondary FPolicy servers                                    No
External engine type                                         No
SSL option for communication with external FPolicy server   Yes        Yes
Certificate FQDN or custom common name                       No
Certificate serial number                                    No
Certificate authority                                        No

Information for advanced external engine parameters
To configure an external engine with advanced parameters, you must enter the configuration command while in advanced privilege mode.

Type of information                                             Required   Include   Your values
Timeout for canceling a request                                 No
Timeout for aborting a request                                  No
Interval for sending status requests                            No
Maximum outstanding requests on the FPolicy server              No
Timeout for disconnecting a nonresponsive FPolicy server        No
Interval for sending keep-alive messages to the FPolicy server  No
Maximum reconnect attempts                                      No
Receive buffer size                                             No
Send buffer size                                                No
Timeout for purging a session ID during reconnection            No

Planning the FPolicy event configuration
Before you configure FPolicy events, you must understand what it means to create an FPolicy event. You must determine which protocols you want the event to monitor, which events to monitor, and which event filters to use. This information helps you plan the values that you want to set.

What it means to create an FPolicy event
Creating the FPolicy event means defining information that the FPolicy process needs to determine what file access operations to monitor and for which of the monitored events notifications should be sent to the external FPolicy server. The FPolicy event configuration defines the following configuration information:
• Storage Virtual Machine (SVM) name
• Event name
• Which protocols to monitor
FPolicy can monitor SMB, NFSv3, and NFSv4 file access operations.
• Which file operations to monitor
Not all file operations are valid for each protocol.
• Which file filters to configure
Only certain combinations of file operations and filters are valid. Each protocol has its own set of supported combinations.
• Whether to monitor volume mount and unmount operations

Note: There is a dependency among three of the parameters (-protocol, -file-operations, and -filters). The following are the valid combinations for the three parameters:

• You can specify the -protocol and -file-operations parameters.

• You can specify all three of the parameters.
• You can specify none of the parameters.

What the FPolicy event configuration contains
You can use the following list of available FPolicy event configuration parameters to help you plan your configuration (an example command follows the list):

SVM (-vserver vserver_name)
Specifies the SVM name that you want to associate with this FPolicy event. Each FPolicy configuration is defined within a single SVM. The external engine, policy event, policy scope, and policy that combine together to create an FPolicy policy configuration must all be associated with the same SVM.

Event name (-event-name event_name)
Specifies the name to assign to the FPolicy event. When you create the FPolicy policy, you associate the FPolicy event with the policy using the event name. The name can be up to 256 characters long.
Note: The name should be up to 200 characters long if configuring the event in a MetroCluster or SVM disaster recovery configuration.
The name can contain any combination of the following ASCII-range characters:
• a through z

• A through Z

• 0 through 9

•“_”, “-”, and “.”

Protocol (-protocol protocol)
Specifies which protocol to configure for the FPolicy event. The value for -protocol can be one of the following:

• cifs

• nfsv3

• nfsv4

Note: If you specify -protocol, then you must specify a valid value in the -file-operations parameter. As the protocol version changes, the valid values might change.

File operations (-file-operations file_operations,...)
Specifies the list of file operations for the FPolicy event. The event checks the operations specified in this list from all client requests using the protocol specified in the -protocol parameter. You can list one or more file operations by using a comma-delimited list. The list for -file-operations can include one or more of the following values:
• close for file close operations

• create for file create operations

• create-dir for directory create operations

• delete for file delete operations

• delete_dir for directory delete operations

• getattr for get attribute operations

• link for link operations

• lookup for lookup operations

• open for file open operations

• read for file read operations

• write for file write operations

• rename for file rename operations

• rename_dir for directory rename operations

• setattr for set attribute operations

• symlink for symbolic link operations

Note: If you specify -file-operations, then you must specify a valid protocol in the -protocol parameter.

Filters (-filters filter,...)
Specifies the list of filters for a given file operation for the specified protocol. The values in the -filters parameter are used to filter client requests. The list can include one or more of the following:
• monitor-ads to filter the client request for alternate data stream

• close-with-modification to filter the client request for close with modification
• close-without-modification to filter the client request for close without modification
• first-read to filter the client request for first read

• first-write to filter the client request for first write

• offline-bit to filter the client request for offline bit set
Setting this filter results in the FPolicy server receiving notification only when offline files are accessed.
• open-with-delete-intent to filter the client request for open with delete intent
Setting this filter results in the FPolicy server receiving notification only when an attempt is made to open a file with the intent to delete it. This is used by file systems when the FILE_DELETE_ON_CLOSE flag is specified.
• open-with-write-intent to filter the client request for open with write intent
Setting this filter results in the FPolicy server receiving notification only when an attempt is made to open a file with the intent to write something in it.
• write-with-size-change to filter the client request for write with size change

Note: If you specify the -filters parameter, then you must also specify valid values for the -file-operations and -protocol parameters.

Is volume operation required (-volume-operation {true|false})
Specifies whether monitoring is required for volume mount and unmount operations. The default is false.
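For example, a minimal sketch of an event that combines these parameters might look like the following command; the SVM, event name, and filter values shown here are illustrative assumptions, and any filters you choose must come from the supported combinations listed in the tables that follow:

vserver fpolicy policy event create -vserver-name vs1.example.com -event-name cifs_rw_event -protocol cifs -file-operations read,write -filters first-read,first-write -volume-operation true

This sketch asks FPolicy to monitor CIFS read and write operations, to send notifications only on the first read and first write of a file, and to also monitor volume mount and unmount operations.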

List of supported file operation and filter combinations that FPolicy can monitor for SMB

When you configure your FPolicy event, you need to be aware that only certain combinations of file operations and filters are supported for monitoring SMB file access operations.

The list of supported file operation and filter combinations for FPolicy monitoring of SMB file access events is provided in the following table:

Supported file operations    Supported filters
close                        monitor-ads, offline-bit, close-with-modification, close-without-modification
create                       monitor-ads, offline-bit
create_dir                   Currently no filter is supported for this file operation.
delete                       monitor-ads, offline-bit
delete_dir                   Currently no filter is supported for this file operation.
getattr                      offline-bit
open                         monitor-ads, offline-bit, open-with-delete-intent, open-with-write-intent
read                         monitor-ads, offline-bit, first-read
write                        monitor-ads, offline-bit, first-write, write-with-size-change
rename                       monitor-ads, offline-bit
rename_dir                   Currently no filter is supported for this file operation.
setattr                      monitor-ads, offline-bit

List of supported file operation and filter combinations that FPolicy can monitor for NFSv3

When you configure your FPolicy event, you need to be aware that only certain combinations of file operations and filters are supported for monitoring NFSv3 file access operations.

The list of supported file operation and filter combinations for FPolicy monitoring of NFSv3 file access events is provided in the following table:

Supported file operations    Supported filters
create                       offline-bit
create_dir                   Currently no filter is supported for this file operation.
delete                       offline-bit
delete_dir                   Currently no filter is supported for this file operation.
link                         offline-bit
lookup                       offline-bit
read                         offline-bit
write                        offline-bit, write-with-size-change
rename                       offline-bit
rename_dir                   Currently no filter is supported for this file operation.
setattr                      offline-bit
symlink                      offline-bit

List of supported file operation and filter combinations that FPolicy can monitor for NFSv4

When you configure your FPolicy event, you need to be aware that only certain combinations of file operations and filters are supported for monitoring NFSv4 file access operations.

The list of supported file operation and filter combinations for FPolicy monitoring of NFSv4 file access events is provided in the following table:

Supported file operations    Supported filters
close                        offline-bit
create                       offline-bit
create_dir                   Currently no filter is supported for this file operation.
delete                       offline-bit
delete_dir                   Currently no filter is supported for this file operation.
getattr                      offline-bit
link                         offline-bit
lookup                       offline-bit
open                         offline-bit
read                         offline-bit
write                        offline-bit, write-with-size-change
rename                       offline-bit
rename_dir                   Currently no filter is supported for this file operation.
setattr                      offline-bit
symlink                      offline-bit

Completing the FPolicy event configuration worksheet

You can use this worksheet to record the values that you need during the FPolicy event configuration process. If a parameter value is required, you need to determine what value to use for those parameters before you configure the FPolicy event.

You should record whether you want to include each parameter setting in the FPolicy event configuration and then record the value for the parameters that you want to include.

Type of information                   Required    Include    Your values
Storage Virtual Machine (SVM) name    Yes         Yes
Event name                            Yes         Yes
Protocol                              No
File operations                       No
Filters                               No
Is volume operation required          No

Planning the FPolicy policy configuration

Before you configure the FPolicy policy, you must understand which parameters are required when creating the policy as well as why you might want to configure certain optional parameters. This information helps you to determine which values to set for each parameter.

When creating an FPolicy policy you associate the policy with the following:

• The Storage Virtual Machine (SVM)

• One or more FPolicy events

• An FPolicy external engine

You can also configure several optional policy settings.

What the FPolicy policy configuration contains

You can use the following list of available FPolicy policy required and optional parameters to help you plan your configuration:

SVM name (-vserver vserver_name)
Required; no default. Specifies the name of the SVM on which you want to create an FPolicy policy.

Policy name (-policy-name policy_name)
Required; no default. Specifies the name of the FPolicy policy. The name can be up to 256 characters long.

Note: The name should be up to 200 characters long if you are configuring the policy in a MetroCluster or SVM disaster recovery configuration.

The name can contain any combination of the following ASCII-range characters:

• a through z

• A through Z

• 0 through 9

• “_”, “-”, and “.”

Event names (-events event_name,...)
Required; no default. Specifies a comma-delimited list of events to associate with the FPolicy policy.

• You can associate more than one event to a policy.

• An event is specific to a protocol.

• You can use a single policy to monitor file access events for more than one protocol by creating an event for each protocol that you want the policy to monitor, and then associating the events to the policy.

• The events must already exist.

External engine name (-engine engine_name)
Required unless the policy uses the internal Data ONTAP native engine; the default is native. Specifies the name of the external engine to associate with the FPolicy policy.

• An external engine contains information required by the node to send notifications to an FPolicy server.

• You can configure FPolicy to use the Data ONTAP native external engine for simple file blocking or to use an external engine that is configured to use external FPolicy servers (FPolicy servers) for more sophisticated file blocking and file management.

• If you want to use the native external engine, you can either not specify a value for this parameter or you can specify native as the value.

• If you want to use FPolicy servers, the configuration for the external engine must already exist.

Is mandatory screening required (-is-mandatory {true|false})
Optional; the default is true. Specifies whether mandatory file access screening is required.

• The mandatory screening setting determines what action is taken on a file access event in a case when all primary and secondary servers are down or no response is received from the FPolicy servers within a given timeout period.

• When set to true, file access events are denied.

• When set to false, file access events are allowed.

Allow privileged access (-allow-privileged-access {yes|no})
Optional unless passthrough-read is enabled; the default is no. Specifies whether you want the FPolicy server to have privileged access to the monitored files and folders by using a privileged data connection.

If configured, FPolicy servers can access files from the root of the SVM containing the monitored data by using the privileged data connection.

For privileged data access, CIFS must be licensed on the cluster and all the data LIFs used to connect to the FPolicy servers must be configured to have cifs as one of the allowed protocols.

If you want to configure the policy to allow privileged access, you must also specify the user name for the account that you want the FPolicy server to use for privileged access.

Privileged user name (-privileged-user-name user_name)
Optional unless privileged access is enabled; no default. Specifies the user name of the account the FPolicy servers use for privileged data access.

• The value for this parameter should use the “domain\user name” format.

• If -allow-privileged-access is set to no, any value set for this parameter is ignored.

Allow passthrough-read (-is-passthrough-read-enabled {true|false})
Optional; the default is false. Specifies whether the FPolicy servers can provide passthrough-read services for files that have been archived to secondary storage (offline files) by the FPolicy servers:

• Passthrough-read is a way to read data for offline files without restoring the data to the primary storage. Passthrough-read reduces response latencies because there is no need to recall files back to primary storage before responding to the read request. Additionally, passthrough-read optimizes storage efficiency by eliminating the need to consume primary storage space with files that are recalled solely to satisfy read requests.

• When enabled, the FPolicy servers provide the data for the file over a separate privileged data channel opened specifically for passthrough-reads.

• If you want to configure passthrough-read, the policy must also be configured to allow privileged access.
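As a hedged illustration of associating multiple events with one policy, the following sketch assumes that two events, cifs_event (protocol cifs) and nfsv3_event (protocol nfsv3), and an external engine named engine1 already exist on the SVM; all of these names are hypothetical:

vserver fpolicy policy create -vserver-name vs1.example.com -policy-name multiproto_policy -events cifs_event,nfsv3_event -engine engine1 -is-mandatory false

Because -is-mandatory is set to false in this sketch, file access is allowed if the FPolicy servers do not respond within the timeout period.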

Related concepts
How FPolicy manages policy processing on page 163
Requirements, considerations, and best practices for configuring FPolicy on page 167

Requirement for FPolicy scope configurations if the FPolicy policy uses the native engine

If you configure the FPolicy policy to use the native engine, there is a specific requirement for how you define the FPolicy scope configured for the policy.

The FPolicy scope defines the boundaries on which the FPolicy policy applies, for example whether the FPolicy policy applies to specified volumes or shares. There are a number of parameters that further restrict the scope to which the FPolicy policy applies. One of these parameters, -is-file-extension-check-on-directories-enabled, specifies whether to check file extensions on directories. The default value is false, which means that file extensions on directories are not checked.

When an FPolicy policy that uses the native engine is enabled on a share or volume and the -is-file-extension-check-on-directories-enabled parameter is set to false for the scope of the policy, directory access is denied. With this configuration, because the file extensions are not checked for directories, any directory operation is denied if it falls under the scope of the policy.

To ensure that directory access succeeds when using the native engine, you must set the -is-file-extension-check-on-directories-enabled parameter to true when creating the scope. With this parameter set to true, extension checks happen for directory operations, and the decision whether to allow or deny access is made based on the extensions included in or excluded from the FPolicy scope configuration.
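For example, a minimal sketch of a scope for a policy that uses the native engine might look like the following; the policy name, volume, and extension list are illustrative assumptions:

vserver fpolicy policy scope create -vserver-name vs1.example.com -policy-name native1 -volumes-to-include datavol1 -file-extensions-to-include mp3,mov -is-file-extension-check-on-directories-enabled true

Setting -is-file-extension-check-on-directories-enabled to true in the scope ensures that directory operations are not denied when the policy uses the native engine.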

Completing the FPolicy policy worksheet

You can use this worksheet to record the values that you need during the FPolicy policy configuration process.

You should record whether you want to include each parameter setting in the FPolicy policy configuration and then record the value for the parameters that you want to include.

Type of information                   Include    Your values
Storage Virtual Machine (SVM) name    Yes
Policy name                           Yes
Event names                           Yes
External engine name
Is mandatory screening required
Allow privileged access
Privileged user name
Is passthrough-read enabled

Planning the FPolicy scope configuration

Before you configure the FPolicy scope, you must understand what it means to create a scope. You must understand what the scope configuration contains. You also need to understand what the scope rules of precedence are. This information can help you plan the values that you want to set.

What it means to create an FPolicy scope

Creating the FPolicy scope means defining the boundaries on which the FPolicy policy applies. The Storage Virtual Machine (SVM) is the basic boundary. When you create a scope for an FPolicy policy, you must define the FPolicy policy to which it will apply, and you must designate to which SVM you want to apply the scope.

There are a number of parameters that further restrict the scope within the specified SVM. You can restrict the scope by specifying what to include in the scope or by specifying what to exclude from the scope. After you apply a scope to an enabled policy, policy event checks get applied to the scope defined by this command.

Notifications are generated for file access events where matches are found in the “include” options. Notifications are not generated for file access events where matches are found in the “exclude” options.

The FPolicy scope configuration defines the following configuration information:

• SVM name

• Policy name

• The shares to include or exclude from what gets monitored

• The export policies to include or exclude from what gets monitored

• The volumes to include or exclude from what gets monitored

• The file extensions to include or exclude from what gets monitored

• Whether to do file extension checks on directory objects

Note: There are special considerations for the scope for a cluster FPolicy policy. The cluster FPolicy policy is a policy that the cluster administrator creates for the admin SVM. If the cluster administrator also creates the scope for that cluster FPolicy policy, the SVM administrator cannot create a scope for that same policy. However, if the cluster administrator does not create a scope for the cluster FPolicy policy, then any SVM administrator can create the scope for that cluster policy. If the SVM administrator creates a scope for that cluster FPolicy policy, the cluster administrator cannot subsequently create a cluster scope for that same cluster policy. This is because the cluster administrator cannot override the scope for the same cluster policy.

What the scope rules of precedence are

The following rules of precedence apply to scope configurations:

• When a share is included in the -shares-to-include parameter and the parent volume of the share is included in the -volumes-to-exclude parameter, -volumes-to-exclude has precedence over -shares-to-include.

• When an export policy is included in the -export-policies-to-include parameter and the parent volume of the export policy is included in the -volumes-to-exclude parameter, -volumes-to-exclude has precedence over -export-policies-to-include.

• An administrator can specify both -file-extensions-to-include and -file-extensions-to-exclude lists. The -file-extensions-to-exclude parameter is checked before the -file-extensions-to-include parameter is checked.

What the FPolicy scope configuration contains

You can use the following list of available FPolicy scope configuration parameters to help you plan your configuration:

Note: When configuring what shares, export policies, volumes, and file extensions to include or exclude from the scope, the include and exclude parameters can contain regular expressions and can include metacharacters such as “?” and “*”.

SVM (-vserver vserver_name)
Specifies the SVM name on which you want to create an FPolicy scope. Each FPolicy configuration is defined within a single SVM. The external engine, policy event, policy scope, and policy that combine together to create an FPolicy policy configuration must all be associated with the same SVM.

Policy name (-policy-name policy_name)
Specifies the name of the FPolicy policy to which you want to attach the scope. The FPolicy policy must already exist.

Shares to include (-shares-to-include share_name,...)
Specifies a comma-delimited list of shares to monitor for the FPolicy policy to which the scope is applied.

Shares to exclude (-shares-to-exclude share_name,...)
Specifies a comma-delimited list of shares to exclude from monitoring for the FPolicy policy to which the scope is applied.

Volumes to include (-volumes-to-include volume_name,...)
Specifies a comma-delimited list of volumes to monitor for the FPolicy policy to which the scope is applied.

Volumes to exclude (-volumes-to-exclude volume_name,...)
Specifies a comma-delimited list of volumes to exclude from monitoring for the FPolicy policy to which the scope is applied.

Export policies to include (-export-policies-to-include export_policy_name,...)
Specifies a comma-delimited list of export policies to monitor for the FPolicy policy to which the scope is applied.

Export policies to exclude (-export-policies-to-exclude export_policy_name,...)
Specifies a comma-delimited list of export policies to exclude from monitoring for the FPolicy policy to which the scope is applied.

File extensions to include (-file-extensions-to-include file_extensions,...)
Specifies a comma-delimited list of file extensions to monitor for the FPolicy policy to which the scope is applied.

File extensions to exclude (-file-extensions-to-exclude file_extensions,...)
Specifies a comma-delimited list of file extensions to exclude from monitoring for the FPolicy policy to which the scope is applied.

Is file extension check on directory enabled (-is-file-extension-check-on-directories-enabled {true|false})
Specifies whether the file name extension checks apply to directory objects as well. If this parameter is set to true, the directory objects are subjected to the same extension checks as regular files. If this parameter is set to false, the directory names are not matched for extensions, and notifications are sent for directories even if their name extensions do not match.

If the FPolicy policy to which the scope is assigned is configured to use the native engine, this parameter must be set to true.
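To illustrate the include and exclude parameters together with the “?” and “*” metacharacters, a hypothetical scope might be sketched as follows; the volume names and extension lists are assumptions, not recommendations:

vserver fpolicy policy scope create -vserver-name vs1.example.com -policy-name policy1 -volumes-to-include vol* -volumes-to-exclude vol_backup -file-extensions-to-include mp?,avi -file-extensions-to-exclude txt

Because -file-extensions-to-exclude is checked before -file-extensions-to-include, requests for files with the txt extension do not generate notifications.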

Completing the FPolicy scope worksheet

You can use this worksheet to record the values that you need during the FPolicy scope configuration process. If a parameter value is required, you need to determine what value to use for those parameters before you configure the FPolicy scope.

You should record whether you want to include each parameter setting in the FPolicy scope configuration and then record the value for the parameters that you want to include.

Type of information                             Required    Include    Your values
Storage Virtual Machine (SVM) name              Yes         Yes
Policy name                                     Yes         Yes
Shares to include                               No
Shares to exclude                               No
Volumes to include                              No
Volumes to exclude                              No

Export policies to include                      No
Export policies to exclude                      No
File extensions to include                      No
File extensions to exclude                      No
Is file extension check on directory enabled    No

Creating the FPolicy configuration

There are several steps you must perform to create an FPolicy configuration. First, you must plan your configuration. Then, you create an FPolicy external engine, an FPolicy event, and an FPolicy policy. You then create an FPolicy scope and attach it to the FPolicy policy, and then enable the FPolicy policy.

FPolicy is supported on Storage Virtual Machines (SVMs) with FlexVol volumes. FPolicy is not supported on SVMs with Infinite Volume.

Steps

1. Creating the FPolicy external engine on page 192
The first step to creating an FPolicy configuration is to create an external engine. The external engine defines how FPolicy makes and manages connections to external FPolicy servers. If your configuration uses the internal Data ONTAP engine (the native external engine) for simple file blocking, you do not need to configure a separate FPolicy external engine and do not need to perform this step.

2. Creating the FPolicy event on page 193
As part of creating an FPolicy policy configuration, you need to create an FPolicy event. You associate the event with the FPolicy policy when it is created. An event defines which protocol to monitor and which file access events to monitor and filter.

3. Creating the FPolicy policy on page 193
When you create the FPolicy policy, you associate an external engine and one or more events to the policy. The policy also specifies whether mandatory screening is required, whether the FPolicy servers have privileged access to data on the Storage Virtual Machine (SVM), and whether passthrough-read for offline files is enabled.

4. Creating the FPolicy scope on page 195
After creating the FPolicy policy, you need to create an FPolicy scope. When creating the scope, you associate the scope with an FPolicy policy. A scope defines the boundaries on which the FPolicy policy applies. Scopes can include or exclude files based on shares, export policies, volumes, and file extensions.

5. Enabling the FPolicy policy on page 195
After you are through configuring an FPolicy policy configuration, you enable the FPolicy policy. Enabling the policy sets its priority and starts file access monitoring for the policy.

Related concepts
What the steps for setting up an FPolicy configuration are on page 169
Planning the FPolicy configuration on page 170
Requirements, considerations, and best practices for configuring FPolicy on page 167
Displaying information about FPolicy configurations on page 197

Creating the FPolicy external engine

The first step to creating an FPolicy configuration is to create an external engine. The external engine defines how FPolicy makes and manages connections to external FPolicy servers. If your configuration uses the internal Data ONTAP engine (the native external engine) for simple file blocking, you do not need to configure a separate FPolicy external engine and do not need to perform this step.

Before you begin
The external engine worksheet should be completed.

About this task
If the external engine is used in a MetroCluster configuration, you should specify the IP addresses of the FPolicy servers at the source site as primary servers. The IP addresses of the FPolicy servers at the destination site should be specified as secondary servers.
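For example, a minimal sketch of such an engine definition might look like the following; the engine name and all IP addresses are hypothetical, and the command is assumed to accept a -secondary-servers list in addition to -primary-servers:

vserver fpolicy policy external-engine create -vserver-name vs1.example.com -engine-name mcc_engine -primary-servers 10.1.1.2,10.1.1.3 -secondary-servers 10.2.1.2,10.2.1.3 -port 6789 -ssl-option no-auth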

Steps

1. Create the FPolicy external engine by using the vserver fpolicy policy external-engine create command.

Example
The following command creates an external engine on Storage Virtual Machine (SVM) vs1.example.com. No authentication is required for external communications with the FPolicy server.

vserver fpolicy policy external-engine create -vserver-name vs1.example.com -engine-name engine1 -primary-servers 10.1.1.2,10.1.1.3 -port 6789 -ssl-option no-auth

2. Verify the FPolicy external engine configuration by using the vserver fpolicy policy external-engine show command.

Example
The following command displays information about all external engines configured on SVM vs1.example.com:

vserver fpolicy policy external-engine show -vserver vs1.example.com

                          Primary        Secondary          External
Vserver          Engine   Servers        Servers     Port   Engine Type
---------------  -------  -------------  ----------  -----  -----------
vs1.example.com  engine1  10.1.1.2,      -           6789   synchronous
                          10.1.1.3

The following command displays detailed information about the external engine named “engine1” on SVM vs1.example.com:

vserver fpolicy policy external-engine show -vserver vs1.example.com -engine-name engine1

Vserver: vs1.example.com
Engine: engine1
Primary FPolicy Servers: 10.1.1.2, 10.1.1.3
Port Number of FPolicy Service: 6789
Secondary FPolicy Servers: -
External Engine Type: synchronous
SSL Option for External Communication: no-auth
FQDN or Custom Common Name: -
Serial Number of Certificate: -
Certificate Authority: -

Creating the FPolicy event

As part of creating an FPolicy policy configuration, you need to create an FPolicy event. You associate the event with the FPolicy policy when it is created. An event defines which protocol to monitor and which file access events to monitor and filter.

Before you begin
The FPolicy event worksheet should be completed.

Steps

1. Create the FPolicy event by using the vserver fpolicy policy event create command.

Example
vserver fpolicy policy event create -vserver-name vs1.example.com -event-name event1 -protocol cifs -file-operations open,close,read,write

2. Verify the FPolicy event configuration by using the vserver fpolicy policy event show command.

Example
vserver fpolicy policy event show -vserver vs1.example.com

                 Event                File                      Is Volume
Vserver          Name     Protocols   Operations      Filters   Operation
---------------  -------  ----------  --------------  --------  ---------
vs1.example.com  event1   cifs        open, close,    -         false
                                      read, write

Creating the FPolicy policy

When you create the FPolicy policy, you associate an external engine and one or more events to the policy. The policy also specifies whether mandatory screening is required, whether the FPolicy servers have privileged access to data on the Storage Virtual Machine (SVM), and whether passthrough-read for offline files is enabled.

Before you begin
• The FPolicy policy worksheet should be completed.

• If you plan on configuring the policy to use FPolicy servers, the external engine must exist.

• At least one FPolicy event that you plan on associating with the FPolicy policy must exist.

• If you want to configure privileged data access, a CIFS server must exist on the SVM.

Steps

1. Create the FPolicy policy:
vserver fpolicy policy create -vserver-name vserver_name -policy-name policy_name -engine engine_name -events event_name,... [-is-mandatory {true|false}] [-allow-privileged-access {yes|no}] [-privileged-user-name domain\user_name] [-is-passthrough-read-enabled {true|false}]

• You can add one or more events to the FPolicy policy.

• By default, mandatory screening is enabled.

• If you want to allow privileged access by setting the -allow-privileged-access parameter to yes, you must also configure a privileged user name for privileged access.

• If you want to configure passthrough-read by setting the -is-passthrough-read-enabled parameter to true, you must also configure privileged data access.

Example
The following command creates a policy named “policy1” that has the event named “event1” and the external engine named “engine1” associated with it. This policy uses default values in the policy configuration:

vserver fpolicy policy create -vserver vs1.example.com -policy-name policy1 -events event1 -engine engine1

The following command creates a policy named “policy2” that has the event named “event2” and the external engine named “engine2” associated with it. This policy is configured to use privileged access using the specified user name. Passthrough-read is enabled:

vserver fpolicy policy create -vserver vs1.example.com -policy-name policy2 -events event2 -engine engine2 -allow-privileged-access yes -privileged-user-name example\archive_acct -is-passthrough-read-enabled true

The following command creates a policy named “native1” that has the event named “event3” associated with it. This policy uses the native engine and uses default values in the policy configuration:

vserver fpolicy policy create -vserver vs1.example.com -policy-name native1 -events event3 -engine native

2. Verify the FPolicy policy configuration by using the vserver fpolicy policy show command.

Example
The following command displays information about the three configured FPolicy policies, including the following information:

• The SVM associated with the policy

• The external engine associated with the policy

• The events associated with the policy

• Whether mandatory screening is required

• Whether privileged access is required

vserver fpolicy policy show

                 Policy                            Is          Privileged
Vserver          Name      Events    Engine        Mandatory   Access
---------------  --------  --------  ------------  ----------  ----------
vs1.example.com  policy1   event1    engine1       true        no
vs1.example.com  policy2   event2    engine2       true        yes
vs1.example.com  native1   event3    native        true        no

Creating the FPolicy scope

After creating the FPolicy policy, you need to create an FPolicy scope. When creating the scope, you associate the scope with an FPolicy policy. A scope defines the boundaries on which the FPolicy policy applies. Scopes can include or exclude files based on shares, export policies, volumes, and file extensions.

Before you begin
The FPolicy scope worksheet must be completed.

The FPolicy policy must exist with an associated external engine (if the policy is configured to use external FPolicy servers) and must have at least one associated FPolicy event.

Steps

1. Create the FPolicy scope by using the vserver fpolicy policy scope create command.

Example
vserver fpolicy policy scope create -vserver-name vs1.example.com -policy-name policy1 -volumes-to-include datavol1,datavol2

2. Verify the FPolicy scope configuration by using the vserver fpolicy policy scope show command.

Example
vserver fpolicy policy scope show -vserver vs1.example.com -instance

Vserver: vs1.example.com
Policy: policy1
Shares to Include: -
Shares to Exclude: -
Volumes to Include: datavol1, datavol2
Volumes to Exclude: -
Export Policies to Include: -
Export Policies to Exclude: -
File Extensions to Include: -
File Extensions to Exclude: -

Enabling the FPolicy policy

After you are through configuring an FPolicy policy configuration, you enable the FPolicy policy. Enabling the policy sets its priority and starts file access monitoring for the policy.

Before you begin
The FPolicy policy must exist with an associated external engine (if the policy is configured to use external FPolicy servers) and must have at least one associated FPolicy event.

The FPolicy policy scope must exist and must be assigned to the FPolicy policy.

About this task
The priority is used when multiple policies are enabled on the Storage Virtual Machine (SVM) and more than one policy has subscribed to the same file access event. Policies that use the native engine configuration have a higher priority than policies for any other engine, regardless of the sequence number assigned to them when enabling the policy.

Note: A policy cannot be enabled on the admin SVM.

Steps

1. Enable the FPolicy policy by using the vserver fpolicy enable command.

Example
vserver fpolicy enable -vserver-name vs1.example.com -policy-name policy1 -sequence-number 1

2. Verify that the FPolicy policy is enabled by using the vserver fpolicy show command.

Example
vserver fpolicy show -vserver vs1.example.com

                              Sequence
Vserver          Policy Name  Number    Status   Engine
---------------  -----------  --------  -------  --------
vs1.example.com  policy1      1         on       engine1

Modifying FPolicy configurations

You can modify FPolicy configurations by modifying the elements that make up the configuration. You can modify external engines, FPolicy events, FPolicy scopes, and FPolicy policies. You can also enable or disable FPolicy policies. When you disable the FPolicy policy, file monitoring is discontinued for that policy.

It is recommended to disable the FPolicy policy before modifying the configuration.

Related concepts
Creating the FPolicy configuration on page 191
Managing FPolicy server connections on page 200

Commands for modifying FPolicy configurations

You can modify FPolicy external engines, events, scopes, and policies.

If you want to modify... Use this command...

External engines vserver fpolicy policy external-engine modify

Events vserver fpolicy policy event modify

Scopes vserver fpolicy policy scope modify

Policies vserver fpolicy policy modify

See the man pages for the commands for more information.
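As a hedged sketch of a typical modification workflow (disable the policy, modify an element, and then reenable the policy), assuming that the modify commands accept the same parameters as the corresponding create commands and that the policy and event names shown already exist:

vserver fpolicy disable -vserver-name vs1.example.com -policy-name policy1
vserver fpolicy policy event modify -vserver-name vs1.example.com -event-name event1 -file-operations open,close,read,write,create
vserver fpolicy enable -vserver-name vs1.example.com -policy-name policy1 -sequence-number 1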

Related references
Commands for displaying information about FPolicy configurations on page 198

Enabling or disabling FPolicy policies

You can enable FPolicy policies after the configuration is complete. Enabling the policy sets its priority and starts file access monitoring for the policy. You can disable FPolicy policies if you want to stop file access monitoring for the policy.

Before you begin
Before enabling FPolicy policies, the FPolicy configuration must be completed.

About this task
• The priority is used when multiple policies are enabled on the Storage Virtual Machine (SVM) and more than one policy has subscribed to the same file access event.

• Policies that use the native engine configuration have a higher priority than policies for any other engine, regardless of the sequence number assigned to them when enabling the policy.

• If you want to change the priority of an FPolicy policy, you must disable the policy and then reenable it using the new sequence number (an example sequence is shown after the following table).

Step 1. Perform the appropriate action:

If you want to... Enter the following command...

Enable an FPolicy policy vserver fpolicy enable -vserver-name vserver_name -policy-name policy_name -sequence-number integer

Disable an FPolicy policy vserver fpolicy disable -vserver-name vserver_name -policy-name policy_name
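For example, to change the priority of an enabled policy, a sketch of the disable-and-reenable sequence might look like the following; the policy name and sequence number are hypothetical:

vserver fpolicy disable -vserver-name vs1.example.com -policy-name policy1
vserver fpolicy enable -vserver-name vs1.example.com -policy-name policy1 -sequence-number 5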

Related tasks
Displaying information about FPolicy policy status on page 198
Displaying information about enabled FPolicy policies on page 199

Displaying information about FPolicy configurations

You might want to display information about FPolicy configurations to determine whether the configuration for each Storage Virtual Machine (SVM) is correct or to verify that an FPolicy policy configuration is enabled. You can display information about FPolicy external engines, FPolicy events, FPolicy scopes, and FPolicy policies.

Related concepts
Creating the FPolicy configuration on page 191
Modifying FPolicy configurations on page 196

How the show commands work

It is helpful when displaying information about the FPolicy configuration to understand how the show commands work.

A show command without additional parameters displays information in a summary form. Additionally, every show command has the same two mutually exclusive optional parameters, -instance and -fields.

When you use the -instance parameter with a show command, the command output displays detailed information in a list format. In some cases, the detailed output can be lengthy and include more information than you need. You can use the -fields fieldname[,fieldname...] parameter to customize the output so that it displays information only for the fields you specify. You can identify which fields you can specify by entering ? after the -fields parameter.

Note: The output of a show command with the -fields parameter might display other relevant and necessary fields related to the requested fields.

Every show command has one or more optional parameters that filter the output and enable you to narrow the scope of information displayed in command output. You can identify which optional parameters are available for a command by entering ? after the show command.

The show command supports UNIX-style patterns and wildcards to enable you to match multiple values in command-parameter arguments. For example, you can use the wildcard operator (*), the NOT operator (!), the OR operator (|), the range operator (integer...integer), the less-than operator (<), the greater-than operator (>), the less-than-or-equal-to operator (<=), and the greater-than-or-equal-to operator (>=) when specifying values.

For more information about using UNIX-style patterns and wildcards, see the “Using the Data ONTAP command-line interface” section of the Clustered Data ONTAP System Administration Guide for SVM Administrators.
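For instance, a hedged sketch that combines a wildcard pattern with the -fields parameter might look like the following; the field names used here are illustrative assumptions that you can confirm by entering -fields ? at the command line:

vserver fpolicy policy show -vserver vs*.example.com -fields engine,events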

Commands for displaying information about FPolicy configurations

You use the fpolicy show commands to display information about the FPolicy configuration, including information about FPolicy external engines, events, scopes, and policies.

If you want to display information about FPolicy...    Use this command...

External engines vserver fpolicy policy external-engine show

Events vserver fpolicy policy event show

Scopes vserver fpolicy policy scope show

Policies vserver fpolicy policy show

See the man pages for the commands for more information.

Displaying information about FPolicy policy status

You can display information about the status for FPolicy policies to determine whether a policy is enabled, what external engine it is configured to use, what the sequence number is for the policy, and to which Storage Virtual Machine (SVM) the FPolicy policy is associated.

About this task
If you do not specify any parameters, the command displays the following information:

• SVM name

• Policy name

• Policy sequence number

• Policy status

In addition to displaying information about policy status for FPolicy policies configured on the cluster or a specific SVM, you can use command parameters to filter the command's output by other criteria.

You can specify the -instance parameter to display detailed information about listed policies. Alternatively, you can use the -fields parameter to display only the indicated fields in the command output, or -fields ? to determine what fields you can use.

Step 1. Display filtered information about FPolicy policy status by using the appropriate command:

If you want to display status information about policies...    Enter the command...

On the cluster vserver fpolicy show

That have the specified status vserver fpolicy show -status {on|off}

On a specified SVM vserver fpolicy show -vserver vserver_name

With the specified policy name    vserver fpolicy show -policy-name policy_name

That use the specified external engine    vserver fpolicy show -engine engine_name

The following example displays the information about FPolicy policies on the cluster:

cluster1::> vserver fpolicy show
                                  Sequence
Vserver          Policy Name      Number    Status   Engine
---------------  ---------------  --------  -------  --------
FPolicy          cserver_policy   -         off      eng1
vs1.example.com  v1p1             -         off      eng2
vs1.example.com  v1p2             -         off      native
vs1.example.com  v1p3             -         off      native
vs1.example.com  cserver_policy   -         off      eng1
vs2.example.com  v1p1             3         on       native
vs2.example.com  v1p2             1         on       eng3
vs2.example.com  cserver_policy   2         on       eng1

Displaying information about enabled FPolicy policies

You can display information about enabled FPolicy policies to determine which FPolicy external engine a policy is configured to use, what the priority is for the policy, and with which Storage Virtual Machine (SVM) the FPolicy policy is associated.

About this task
If you do not specify any parameters, the command displays the following information:

• SVM name

• Policy name

• Policy priority

You can use command parameters to filter the command's output by specified criteria.

Step 1. Display information about enabled FPolicy policies by using the appropriate command:

If you want to display information about enabled policies...    Enter the command...

On the cluster vserver fpolicy show-enabled

On a specified SVM vserver fpolicy show-enabled -vserver vserver_name

With the specified policy name    vserver fpolicy show-enabled -policy-name policy_name

With the specified sequence number    vserver fpolicy show-enabled -priority integer

The following example displays the information about enabled FPolicy policies on the cluster:

cluster1::> vserver fpolicy show-enabled
Vserver          Policy Name    Priority
---------------  -------------  ----------
vs1.example.com  pol_native     native
vs1.example.com  pol_native2    native
vs1.example.com  pol1           2
vs1.example.com  pol2           4

Managing FPolicy server connections

You can manage your FPolicy server connections by connecting to external FPolicy servers, disconnecting from external FPolicy servers, or displaying information about connections and connection status.

Related concepts
What the two parts of the FPolicy solution are on page 160
What synchronous and asynchronous notifications are on page 160
How FPolicy works with external FPolicy servers on page 162
What the node-to-external FPolicy server communication process is on page 163

Connecting to external FPolicy servers

To enable file processing, you might need to manually connect to an external FPolicy server if the connection has previously been terminated. A connection is terminated after the server timeout is reached or due to some error. Alternatively, the administrator might manually terminate a connection.

About this task
If a fatal error occurs, the connection to the FPolicy server can be terminated. After resolving the issue that caused the fatal error, you must manually reconnect to the FPolicy server.

Steps

1. Connect to the external FPolicy server by using the vserver fpolicy engine-connect command. For more information about the command, see the man pages. An example connect-and-verify sequence is shown after these steps.

2. Verify that the external FPolicy server is connected by using the vserver fpolicy show-engine command. For more information about the command, see the man pages.
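A sketch of a typical connect-and-verify sequence might look like the following; the node, policy, and server values are hypothetical, and the exact set of parameters accepted by the engine-connect command is described in the man pages:

vserver fpolicy engine-connect -node node1 -vserver vs1.example.com -policy-name policy1 -server 10.1.1.2
vserver fpolicy show-engine -vserver vs1.example.com -server 10.1.1.2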

Disconnecting from external FPolicy servers

You might need to manually disconnect from an external FPolicy server. This might be desirable if the FPolicy server has issues with notification request processing or if you need to perform maintenance on the FPolicy server.

Steps

1. Disconnect from the external FPolicy server by using the vserver fpolicy engine-disconnect command. For more information about the command, see the man pages. An example disconnect-and-verify sequence is shown after these steps.

2. Verify that the external FPolicy server is disconnected by using the vserver fpolicy show-engine command. For more information about the command, see the man pages.
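A sketch of a typical disconnect-and-verify sequence might look like the following; the node, policy, and server values are hypothetical, and the exact set of parameters accepted by the engine-disconnect command is described in the man pages:

vserver fpolicy engine-disconnect -node node1 -vserver vs1.example.com -policy-name policy1 -server 10.1.1.2
vserver fpolicy show-engine -vserver vs1.example.com -server 10.1.1.2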

Displaying information about connections to external FPolicy servers

You can display status information about connections to external FPolicy servers (FPolicy servers) for the cluster or for a specified Storage Virtual Machine (SVM). This information can help you determine which FPolicy servers are connected.

About this task
If you do not specify any parameters, the command displays the following information:

• SVM name

• Node name

• FPolicy policy name

• FPolicy server IP address

• FPolicy server status

• FPolicy server type

In addition to displaying information about FPolicy connections on the cluster or a specific SVM, you can use command parameters to filter the command's output by other criteria.

You can specify the -instance parameter to display detailed information about listed policies. Alternatively, you can use the -fields parameter to display only the indicated fields in the command output. You can enter ? after the -fields parameter to find out which fields you can use.

Step 1. Display filtered information about connection status between the node and the FPolicy server by using the appropriate command:

If you want to display connection status information about FPolicy servers...    Enter...

That you specify    vserver fpolicy show-engine -server IP_address


For a specified SVM vserver fpolicy show-engine -vserver vserver_name

That are attached with a specified policy    vserver fpolicy show-engine -policy-name policy_name

With the server status that you specify    vserver fpolicy show-engine -server-status status

The server status can be one of the following:

• connected

• disconnected

• connecting

• disconnecting

With the specified type    vserver fpolicy show-engine -server-type type

The FPolicy server type can be one of the following:

• primary

• secondary

That were disconnected with the specified reason    vserver fpolicy show-engine -disconnect-reason text

Disconnect can be due to multiple reasons. The following are common reasons for disconnect:

• Disconnect command received from CLI.

• Error encountered while parsing notification response from FPolicy server.

• FPolicy Handshake failed.

• SSL handshake failed.

• TCP Connection to FPolicy server failed.

• The screen response message received from the FPolicy server is not valid.

This example displays information about external engine connections to FPolicy servers on SVM vs1.example.com:

cluster1::> vserver fpolicy show-engine -vserver vs1.example.com
                 FPolicy                          Server-       Server-
Vserver          Policy    Node    Server         status        type
---------------  --------  ------  -------------  ------------  --------
vs1.example.com  policy1   node1   10.1.1.2       connected     primary
vs1.example.com  policy1   node1   10.1.1.3       disconnected  primary
vs1.example.com  policy1   node2   10.1.1.2       connected     primary
vs1.example.com  policy1   node2   10.1.1.3       disconnected  primary

This example displays information only about connected FPolicy servers:

cluster1::> vserver fpolicy show-engine -fields server -server-status connected
node    vserver          policy-name  server
------  ---------------  -----------  ---------
node1   vs1.example.com  policy1      10.1.1.2
node2   vs1.example.com  policy1      10.1.1.2

Displaying information about the FPolicy passthrough-read connection status

You can display information about FPolicy passthrough-read connection status to external FPolicy servers (FPolicy servers) for the cluster or for a specified Storage Virtual Machine (SVM). This information can help you determine which FPolicy servers have passthrough-read data connections and for which FPolicy servers the passthrough-read connection is disconnected.

About this task
If you do not specify any parameters, the command displays the following information:

• SVM name

• FPolicy policy name

• Node name

• FPolicy server IP address

• FPolicy passthrough-read connection status

In addition to displaying information about FPolicy connections on the cluster or a specific SVM, you can use command parameters to filter the command's output by other criteria.

You can specify the -instance parameter to display detailed information about listed policies. Alternatively, you can use the -fields parameter to display only the indicated fields in the command output. You can enter ? after the -fields parameter to find out which fields you can use.

Step 1. Display filtered information about connection status between the node and the FPolicy server by using the appropriate command:

If you want to display connection status information about...    Enter the command...

FPolicy passthrough-read connection status for the cluster    vserver fpolicy show-passthrough-read-connection

FPolicy passthrough-read connection status for a specified SVM    vserver fpolicy show-passthrough-read-connection -vserver vserver_name

FPolicy passthrough-read connection status for a specified policy    vserver fpolicy show-passthrough-read-connection -policy-name policy_name

Detailed FPolicy passthrough-read connection status for a specified policy    vserver fpolicy show-passthrough-read-connection -policy-name policy_name -instance


FPolicy passthrough-read connection status for the server status that you specify    vserver fpolicy show-passthrough-read-connection -policy-name policy_name -server-status status

The server status can be one of the following:

• connected

• disconnected

The following command displays information about passthrough-read connections from all FPolicy servers on the cluster:

cluster1::> vserver fpolicy show-passthrough-read-connection
                 FPolicy                               Server
Vserver          Policy Name  Node        Server       Status
---------------  -----------  ----------  ----------   ------------
vs2.example.com  pol_cifs_2   FPolicy-01  2.2.2.2      disconnected
vs1.example.com  pol_cifs_1   FPolicy-01  1.1.1.1      connected

The following command displays detailed information about passthrough-read connections from FPolicy servers configured in the “pol_cifs_1” policy:

cluster1::> vserver fpolicy show-passthrough-read-connection -policy-name pol_cifs_1 -instance

Node: FPolicy-01
Vserver: vs1.example.com
Policy: pol_cifs_1
Server: 1.1.1.1
Session ID of the Control Channel: 8cef052e-2502-11e3-88d4-123478563412
Server Status: connected
Time Passthrough Read Channel was Connected: 9/24/2013 10:17:45
Time Passthrough Read Channel was Disconnected: -
Reason for Passthrough Read Channel Disconnection: none

Glossary

To understand the file access and protocols management concepts in this document, you might need to know how certain terms are used.

A

ACE
Access Control Entry.

ACL
Access control list.

AD
Active Directory.

adapter
A SCSI card, network card, hot-swap adapter, serial adapter, or VGA adapter that plugs into an expansion slot. Sometimes called expansion card.

address resolution
The procedure for determining an address corresponding to the address of a LAN or WAN destination.

admin SVM
Formerly known as admin Vserver. In clustered Data ONTAP, a Storage Virtual Machine (SVM) that has overall administrative access to all objects in the cluster, including all objects owned by other SVMs, but does not provide data access to clients or hosts.

administrator
The account that has the required permission to administer a Data ONTAP system.

agent
A process that gathers status and diagnostic information and forwards it to network management stations, for example, SNMP agent.

aggregate
A grouping of physical storage resources (disks or array LUNs) that provides storage to volumes associated with the aggregate. Aggregates provide the ability to control the RAID configuration for all associated volumes.

API
See Application Program Interface (API).

appliance
A device that performs a single, well-defined function and is simple to install and operate.

Application Program Interface (API)
A language and message format used by an application program to communicate with the operating system or some other system, control program, or communications protocol.

ASCII
American Standard Code for Information Interchange.

ATM
Asynchronous Transfer Mode. A network technology that combines the features of cell-switching and multiplexing to offer reliable and efficient network services. ATM provides an interface between devices such as workstations and routers, and the network.

authentication

The process of verifying the identity of a user who is logging in to a secured system or network.

authorization
The limiting of usage of system resources to authorized users, programs, processes, or other systems. The process of determining, for example via access control, that a requestor is allowed to receive a service or perform an operation.

AutoSupport
A storage system daemon that triggers email messages from the customer site to technical support or another specified email recipient when there is a potential storage system problem.

B

big-endian
A binary data format for storage and transmission in which the most significant byte comes first.

bind
The process of establishing a software connection between one protocol or application and another, thus creating an internal pathway.

C

cache
As a verb, storing data temporarily for expedited access. As a noun, the location in which data is stored temporarily. Types of caches include disk cache, memory cache, and Web cache.

CIFS
See Common Internet File System (CIFS).

CIFS share
• In Data ONTAP, a directory or directory structure that has been made available to network users and can be mapped to a drive letter on a CIFS client. Also known simply as a share.

• In OnCommand Insight (formerly SANscreen suite), a service exposed from a NAS device to provide file-based storage through the CIFS protocol. CIFS is mostly used for Microsoft Windows clients, but many other operating systems can access CIFS shares as well.

CLI
command-line interface. The storage system prompt is an example of a command-line interface.

client
A workstation or PC in a client-server architecture; that is, a computer system or process that requests services from and accepts the responses of another computer system or process.

cluster
• In clustered Data ONTAP 8.x, a group of connected nodes (storage systems) that share a namespace and that you can manage as a single virtual server or multiple virtual servers, providing performance, reliability, and scalability benefits.

• In the Data ONTAP 7.1 release family and earlier releases, a pair of storage systems (sometimes called nodes) configured to serve data for each other if one of the two systems stops functioning.

• In the Data ONTAP 7.3 and 7.2 release families, this functionality is referred to as an active/active configuration.

• For some storage array vendors, cluster refers to the hardware component on which host adapters and ports are located. Some storage array vendors refer to this component as a controller.

cluster monitor
The software that administers the relationship of nodes in a cluster.

cluster Vserver
Former name for a data SVM; see data SVM.

Common Internet File System (CIFS)
Microsoft's file-sharing networking protocol that evolved from SMB.

community
A logical relationship between an SNMP agent and one or more SNMP managers. A community is identified by name, and all members of the community have the same access privileges.

console
The physical or virtual terminal that is used to monitor and control a storage system.

Copy-On-Write (COW)
The technique for creating Snapshot copies without consuming excess disk space.

D

Data ONTAP
The NetApp operating system software product that optimizes file service by combining patented file-system technology and a microkernel design dedicated to network data access.

data SVM
Formerly known as data Vserver. In clustered Data ONTAP, a Storage Virtual Machine (SVM) that facilitates data access from the cluster; the hardware and storage resources of the cluster are dynamically shared by data SVMs within a cluster.

degraded mode
The operating mode of a storage system when a disk in the RAID group fails or the batteries on the NVRAM card are low.

disk ID number
The number assigned by the storage system to each disk when it probes the disks at startup.

disk shelf
A shelf that contains disk drives and is attached to a storage system.

DNS
See Domain Name System (DNS).

Domain Name System (DNS)
An Internet service for finding computers on an IP network.

E

Ethernet adapter
An Ethernet interface card.

expansion card

A SCSI card, NVRAM card, network card, hot-swap card, or console card that plugs into a storage system expansion slot. Sometimes called an adapter.

expansion slot
The slots on the storage system board into which you insert expansion cards.

export
The NFS mechanism used to define which areas of the storage system are to be made available to NFS clients.

F

failover
A process that reroutes data immediately to an alternate path so that services remain uninterrupted, if a physical disruption to a network component occurs. Failover applies to clustering and to multiple paths to storage. In the case of clustering, one or more services, such as the Exchange service, is moved to a standby server. In the case of multiple paths to storage, a path failure results in data being rerouted to a different physical connection to the storage.

file lock
The implicit state of a file when opened by a client. Depending on the open mode, the file can be locked to prevent all other access, or set to allow certain types of shared access.

file server
A computer that provides file-handling and storage functions for multiple users.

FlexVol volume
In clustered Data ONTAP, a logical entity contained in a Storage Virtual Machine (SVM, formerly known as Vserver)—referred to as SVM with FlexVol volumes. FlexVol volumes typically hold user data, although they also serve as node or SVM root volumes and metadata containers. A FlexVol volume obtains its storage from a single aggregate.

FQDN
See Fully Qualified Domain Name (FQDN).

FSID
file system ID.

FTP
File Transfer Protocol.

Fully Qualified Domain Name (FQDN)
The complete name of a specific computer on the Internet, consisting of the computer's host name and its domain name.

G

GID
See Group ID (GID).

Group ID (GID)
The number used by UNIX systems to identify groups.

GUI
graphical user interface.

GUID
globally unique identifier.

H

heartbeat

A repeating signal transmitted from one storage system to the other that indicates that the storage system is in operation. Heartbeat information is also stored on disk.

hot spare disk
A disk installed in the storage system that can be used to substitute for a failed disk. Before the disk failure, the hot spare disk is not part of the RAID disk array.

hot swap
The process of adding, removing, or replacing a disk while the storage system is running.

hot swap adapter
An expansion card that makes it possible to add or remove a hard disk with minimal interruption to file system activity.

HTTP
Hypertext Transfer Protocol.

I

I/O
input/output.

Infinite Volume
In clustered Data ONTAP, a logical entity contained in a Storage Virtual Machine (SVM, formerly known as Vserver)—referred to as SVM with Infinite Volume—that holds user data. An Infinite Volume obtains its storage from multiple aggregates.

inode
A data structure containing information about files on a storage system and in a UNIX file system.

interrupt switch
A switch on some storage system front panels used for debugging purposes.

K

KDC
Kerberos Distribution Center.

L

LAN
local area network.

LAN Emulation (LANE)
The architecture, protocols, and services that create an Emulated LAN using ATM as an underlying network topology. LANE enables ATM-connected end systems to communicate with other LAN-based systems.

LDAP
Lightweight Directory Access Protocol.

Lightweight Directory Access Protocol (LDAP)
A client-server protocol for accessing a directory service.

local storage system
The system you are logged in to.

M

magic directory

magic directory
A directory that can be accessed by name but does not show up in a directory listing. The .snapshot directories are magic directories, except for the one at the mount point or at the root of the share.

mail host
The client host responsible for sending automatic email to technical support when certain storage system events occur.

Maintenance mode
An option when booting a storage system from a system boot disk. Maintenance mode provides special commands for troubleshooting hardware and configuration.

MIB
Management Information Base. ASCII files that describe the information that the SNMP agent sends to network management stations.

MIME
Multipurpose Internet Mail Extensions. A specification that defines the mechanisms for specifying and describing the format of Internet message bodies. An HTTP response containing the MIME Content-Type header allows the HTTP client to invoke the application that is appropriate for the data received.

N

name server
A network server that provides a directory service.

namespace
In network-attached storage (NAS) environments, a collection of files and path names to the files.
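
To see how an SVM's namespace is assembled from volume junctions, the junction paths can be displayed, and an additional volume mounted beneath an existing one, with commands such as the following; the SVM, volume names, and junction paths are placeholders:

  cluster1::> volume show -vserver vs1 -fields junction-path
  cluster1::> volume mount -vserver vs1 -volume data2 -junction-path /data1/archive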

NAS
network-attached storage.

NDMP
Network Data Management Protocol. A protocol that allows storage systems to communicate with backup applications and provides capabilities for controlling the robotics of multiple tape backup devices.

network adapter
An Ethernet, FDDI, or ATM card.

network management station
See NMS.

NFS
Network File System.

NMS
Network Management Station. A host on a network that uses a third-party network management application (SNMP manager) to process status and diagnostic information about a storage system.

NTFS
New Technology File System.

NTP
Network Time Protocol.

null user
The Windows NT machine account used by applications to access remote data.

NVRAM cache
Nonvolatile RAM in a storage system, used for logging incoming write data and NFS requests. Improves system performance and prevents loss of data in case of a storage system or power failure.

NVRAM card
An adapter that contains the storage system's NVRAM cache.

NVRAM mirror
A synchronously updated copy of the storage system's NVRAM (nonvolatile random access memory) contents, kept on the partner storage system.

P

panic
A serious error condition causing the system that is running Data ONTAP to halt. Similar to a software crash in the Windows system environment.

parity disk
The disk on which parity information is stored for a RAID4 disk drive array. In RAID groups using RAID-DP protection, two parity disks store the parity and double-parity information. Used to reconstruct data in failed disk blocks or on a failed disk.

PCI
Peripheral Component Interconnect. The bus architecture used in newer storage system models.

POST
Power-on self-tests. The tests run by a storage system after the power is turned on.

PVC
Permanent Virtual Circuit. A link with a static route defined in advance, usually by manual setup.

Q

qtree
A special subdirectory of the root of a volume that acts as a virtual subvolume with special attributes.
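
As an illustrative sketch (the SVM, volume, qtree, and export policy names are assumptions), a qtree is created inside an existing FlexVol volume and can be given its own security style and export policy:

  cluster1::> volume qtree create -vserver vs1 -volume data1 -qtree q1 -security-style unix
  cluster1::> volume qtree modify -vserver vs1 -volume data1 -qtree q1 -export-policy expol1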

R

RAID
Redundant Array of Independent Disks. A technique that protects against disk failure by computing parity information based on the contents of all the disks in a storage array. Storage systems use either RAID4, which stores all parity information on a single disk, or RAID-DP, which stores all parity information on two disks.

RAID disk scrubbing
The process in which a system reads each disk in the RAID group and tries to fix media errors by rewriting the data to another disk area.

S

SCSI adapter
An expansion card that supports SCSI disk drives and tape drives.

SCSI address
The full address of a disk, consisting of the disk's SCSI adapter number and the disk's SCSI ID, such as 9a.1.

SCSI ID
The number of a disk drive on a SCSI chain (0 to 6).

serial adapter
An expansion card for attaching a terminal as the console on some storage system models.

serial console
An ASCII or ANSI terminal attached to a storage system's serial port. Used to monitor and manage storage system operations.

share
A directory or directory structure that has been made available to network users and can be mapped to a drive letter on a CIFS client. Also known as a CIFS share.

SID
Security identifier used by the Windows operating system.

Snapshot copy
An online, read-only copy of an entire file system that protects against accidental deletions or modifications of files without duplicating file contents. Snapshot copies enable users to restore files and to back up the storage system to tape while the storage system is in use.

SNMP
Simple Network Management Protocol.

storage system
The hardware device running Data ONTAP that receives data from and sends data to native disk shelves, storage arrays, or both. Storage systems include a controller component and an internal or external disk storage subsystem component. Storage systems are sometimes referred to as filers, appliances, storage appliances, V-Series systems, Data ONTAP systems, or systems.

Storage Virtual Machine (SVM)
(Known as Vserver prior to clustered Data ONTAP 8.2.1. The term "Vserver" is still used in CLI displays and vserver command syntax.) A virtual machine that provides network access through unique network addresses, that might serve data out of a distinct namespace, and that is separately administrable from the rest of the cluster. There are three types of SVMs (admin, node, and data), but unless there is a specific need to identify the type of SVM, "SVM" usually refers to the data SVM.
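
For example, the data SVMs in a cluster and the protocols they are allowed to serve can be inspected, and an NFS server enabled on one of them, with commands along these lines; vs1 is a placeholder and the exact options vary by release:

  cluster1::> vserver show -type data -fields allowed-protocols
  cluster1::> vserver nfs create -vserver vs1 -v3 enabled -v4.0 enabled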

SVC
Switched Virtual Circuit. A connection established through signaling. The user defines the endpoints when the call is initiated.

system board
A printed circuit board that contains a storage system's CPU, expansion bus slots, and system memory.

T

takeover
The process by which, when one node in an HA pair undergoes a system failure and cannot reboot, the partner node takes over the failed node's functions and serves its data.

TCP
Transmission Control Protocol.

TCP/IP
Transmission Control Protocol/Internet Protocol.

trap
An asynchronous, unsolicited message sent by an SNMP agent to an SNMP manager indicating that an event has occurred on the storage system.

tree quota
A type of disk quota that restricts the disk usage of a directory created by the qtree command. Different from user and group quotas, which restrict disk usage by files with a given UID or GID.

U

UDP
User Datagram Protocol.

UID
user identification number.

Unicode
A character set standard, originally 16-bit, designed and maintained by the nonprofit consortium Unicode Inc.

URI
uniform resource identifier.

URL
uniform resource locator.

UUID
universally unique identifier.

V

VCI
Virtual Channel Identifier. A unique numerical tag defined by a 16-bit field in the ATM cell header that identifies a virtual channel over which the cell is to travel.

VGA adapter
An expansion card for attaching a VGA terminal as the console.

VPI
Virtual Path Identifier. An eight-bit field in the ATM cell header that indicates the virtual path over which the cell should be routed.

Vserver
(Known as "Storage Virtual Machine (SVM)" in clustered Data ONTAP 8.2.1 and later.) A virtual machine that provides network access through unique network addresses, that might serve data out of a distinct namespace, and that is separately administrable from the rest of the cluster. There are three types of Vservers: admin, node, and cluster ("cluster Vserver" is called "data Vserver" in Data ONTAP 8.2). Unless there is a specific need to identify the type of Vserver, "Vserver" usually refers to the cluster/data Vserver.

W

WAFL
Write Anywhere File Layout. A file system designed for the storage system to optimize write performance.

WAN
wide area network.

WINS
Windows Internet Name Service.

Copyright information

Copyright © 1994–2015 NetApp, Inc. All rights reserved. Printed in the U.S.

No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).

Trademark information

NetApp, the NetApp logo, Go Further, Faster, AltaVault, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, Clustered Data ONTAP, Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC, SANtricity, SecureShare, Simplicity, Simulate ONTAP, Snap Creator, SnapCenter, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, and WAFL and other names are trademarks or registered trademarks of NetApp, Inc., in the United States, and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

A current list of NetApp trademarks is available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.

How to send comments about documentation and receive update notifications

You can help us to improve the quality of our documentation by sending us your feedback.

You can receive automatic notification when production-level (GA/FCS) documentation is initially released or important changes are made to existing production-level documents.

If you have suggestions for improving this document, send us your comments by email to [email protected]. To help us direct your comments to the correct division, include in the subject line the product name, version, and operating system.

If you want to be notified automatically when production-level documentation is released or important changes are made to existing production-level documents, follow the Twitter account @NetAppDoc.

You can also contact us in the following ways:

• NetApp, Inc., 495 East Java Drive, Sunnyvale, CA 94089 U.S.
• Telephone: +1 (408) 822-6000
• Fax: +1 (408) 822-4501
• Support telephone: +1 (888) 463-8277
