Granite™ Core and Edge Appliance Deployment Guide

Version 3.6 December 2014

© 2014 Riverbed Technology, Inc. All rights reserved. Riverbed®, SteelApp™, SteelCentral™, SteelFusion™, SteelHead™, SteelScript™, SteelStore™, SteelHead®, Cloud Steelhead®, SteelHead (virtual edition)®, Granite™, Interceptor®, SteelApp product family™, Whitewater®, SteelStore OS™, RiOS®, Think Fast®, AirPcap®, BlockStream™, FlyScript™, SkipWare®, TrafficScript®, TurboCap®, WinPcap®, Mazu®, OPNET®, and SteelCentral® are all trademarks or registered trademarks of Riverbed Technology, Inc. (Riverbed) in the United States and other countries. Riverbed and any Riverbed product or service name or logo used herein are trademarks of Riverbed. All other trademarks used herein belong to their respective owners. The trademarks and logos displayed herein cannot be used without the prior written consent of Riverbed or their respective owners. Akamai® and the Akamai wave logo are registered trademarks of Akamai Technologies, Inc. SureRoute is a service mark of Akamai. Apple and Mac are registered trademarks of Apple, Incorporated in the United States and in other countries. Cisco is a registered trademark of Cisco Systems, Inc. and its affiliates in the United States and in other countries. EMC, Symmetrix, and SRDF are registered trademarks of EMC Corporation and its affiliates in the United States and in other countries. IBM, iSeries, and AS/400 are registered trademarks of IBM Corporation and its affiliates in the United States and in other countries. Juniper Networks and Junos are registered trademarks of Juniper Networks, Incorporated in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States and in other countries. Microsoft, Windows, Vista, Outlook, and Internet Explorer are trademarks or registered trademarks of Microsoft Corporation in the United States and in other countries. Oracle and JInitiator are trademarks or registered trademarks of Oracle Corporation in the United States and in other countries. UNIX is a registered trademark in the United States and in other countries, exclusively licensed through X/Open Company, Ltd. VMware, ESX, and ESXi are trademarks or registered trademarks of VMware, Inc. in the United States and in other countries. This product includes Windows Azure Linux Agent developed by the Microsoft Corporation (http://www.microsoft.com/). Copyright 2012 Microsoft Corporation. This product includes software developed by the University of California, Berkeley (and its contributors), EMC, and Comtech AHA Corporation. This product is derived from the RSA Data Security, Inc. MD5 Message-Digest Algorithm. The SteelHead (virtual edition) Mobile Controller includes VMware Tools. Portions Copyright © 1998-2013 VMware, Inc. All Rights Reserved. This product includes the NetApp Manageability Software Development Kit (NM SDK), including any third-party software available for review with such SDK, which can be found at http://communities.netapp.com/docs/DOC-1152 and is included in a NOTICES file within the downloaded files. For a list of open source software (including libraries) used in the development of this software along with associated copyright and license agreements, see the Riverbed Support site at https://support.riverbed.com. This documentation is furnished “AS IS” and is subject to change without notice and should not be construed as a commitment by Riverbed. This documentation may not be copied, modified or distributed without the express authorization of Riverbed and may be used only in connection with Riverbed products and services. 
Use, duplication, reproduction, release, modification, disclosure or transfer of this documentation is restricted in accordance with the Federal Acquisition Regulations as applied to civilian agencies and the Defense Federal Acquisition Regulation Supplement as applied to military agencies. This documentation qualifies as “commercial software documentation” and any use by the government shall be governed solely by these terms. All other use is prohibited. Riverbed assumes no responsibility or liability for any errors or inaccuracies that may appear in this documentation.

Riverbed Technology 680 Folsom Street San Francisco, CA 94107

Phone: 415.247.8800 Fax: 415.247.8801 Web: http://www.riverbed.com Part Number: 712-00079-06

Contents

Preface ...... 1
About This Guide ...... 1
Audience ...... 2
Document Conventions ...... 2
Documentation and Release Notes ...... 2
Contacting Riverbed ...... 3
What Is New ...... 3

Chapter 1 - Overview of Granite Core and Granite Edge as a System ...... 5
Introducing Branch Converged Infrastructure ...... 5
How the Granite Product Family Works ...... 6
System Components and Their Roles ...... 7

Chapter 2 - Deploying Granite Core and Granite Edge as a System ...... 9
The Granite Family Deployment Process ...... 9
Provisioning LUNs on the Storage Array ...... 9
Installing the Granite Appliances ...... 10
LUN Pinning and Prepopulation in the Granite Core ...... 11
Configuring Snapshot and Data Protection Functionality ...... 11
Managing vSphere Datastores on LUNs Presented by Granite Core ...... 12
Single-Appliance Versus High-Availability Deployments ...... 12
Single-Appliance Deployment ...... 13
High-Availability Deployment ...... 13
Connecting Granite Core with Granite Edge ...... 14
Prerequisites ...... 14
Process Overview: Connecting the Granite Product Family Components ...... 14
Adding Granite Edges to the Granite Core Configuration ...... 16
Configuring Granite Edge ...... 16
Mapping LUNs to Granite Edges ...... 16
Riverbed Turbo Boot ...... 18

Chapter 3 - Deploying the Granite Core Appliance ...... 21
Prerequisites ...... 21
Interface and Port Configuration ...... 22
Granite Core Ports ...... 22
Configuring Interface Routing ...... 23
Configuring Granite Core for Jumbo Frames ...... 26
Configuring the iSCSI Initiator ...... 27
Configuring LUNs ...... 27
Exposing LUNs ...... 28
Configuring Fibre Channel LUNs ...... 28
Removing a LUN from a Granite Core Configuration ...... 28
Configuring Redundant Connectivity with MPIO ...... 29
MPIO in Granite Core ...... 29
Configuring Granite Core MPIO Interfaces ...... 29
Granite Core Pool Management ...... 30
Overview of Granite Core Pool Management ...... 30
Pool Management Architecture ...... 31
Configuring Pool Management ...... 31
Changing Pool Management Structure ...... 34
High Availability in Pool Management ...... 35

Chapter 4 - Granite and Fibre Channel ...... 37
Overview of Fibre Channel ...... 37
Fibre Channel LUN Considerations ...... 39
How VMware ESXi Virtualizes Fibre Channel LUNs ...... 39
How Core VE Connects to RDM Fibre Channel LUNs ...... 41
Requirements for Core VE and Fibre Channel SANs ...... 42
Specifics About Fibre Channel LUNs Versus iSCSI LUNs ...... 42
Deploying Fibre Channel LUNs on Core VEs ...... 43
Deployment Prerequisites ...... 43
Configuring Fibre Channel LUNs ...... 43
Configuring Fibre Channel LUNs in a Core VE HA Scenario ...... 46
The ESXi Servers Hosting the Core VEs Are Managed by vCenter ...... 47
The ESXi Servers Hosting the Core VEs Are Not Managed by vCenter ...... 49
Populating Fibre Channel LUNs ...... 49
Best Practices and Recommendations ...... 50
Best Practices ...... 51
Recommendations ...... 51
Troubleshooting ...... 52

Chapter 5 - Configuring the Granite Edge ...... 55
Interface and Port Configurations ...... 55
Granite Edge Ports ...... 56
Configuring Granite Edge for Jumbo Frames ...... 57
Configuring iSCSI Initiator Timeouts ...... 58
Granite Edge Storage Specifications ...... 58
Configuring Disk Management ...... 59
Configuring Granite Storage ...... 59
MPIO in Granite Edge ...... 60

Chapter 6 - Granite Appliance High-Availability Deployment ...... 61
Overview of Storage Availability ...... 61
Granite Core High Availability ...... 62
Granite Core with MPIO ...... 62
Granite Core HA Concepts ...... 63
Configuring HA for Granite Core ...... 64
Granite Edge High Availability ...... 73
Using the Correct Interfaces for Granite Edge Deployment ...... 74
Choosing the Correct Cables ...... 76
Overview of Granite Edge HA ...... 77
Granite Edge HA Peer Communication ...... 80
Configuring WAN Redundancy ...... 81
Configuring WAN Redundancy with No Granite Core HA ...... 81
Configuring WAN Redundancy in an HA Environment ...... 83

Chapter 7 - Snapshots and Data Protection ...... 85
Setting Up Application-Consistent Snapshots ...... 85
Configuring Snapshots for LUNs ...... 86
Volume Snapshot Service Support ...... 87
Implementing Riverbed Host Tools for Snapshot Support ...... 87
Overview of RHSP and VSS ...... 88
Riverbed Host Tools Operation and Configuration ...... 88
Configuring the Proxy Host for Backup ...... 89
Configuring the Storage Array for Proxy Backup ...... 89
Data Protection ...... 90
Data Recovery ...... 91
Branch Recovery ...... 92
Overview of Branch Recovery ...... 92
How Branch Recovery Works ...... 93
Branch Recovery Configuration ...... 93
Branch Recovery CLI Configuration Example ...... 94

Chapter 8 - Security and Data Resilience ...... 97
Recovering a Single Granite Core ...... 97
Recovering a Single Physical Granite Core ...... 97
Recovering a Single Core VE ...... 98
Granite Edge Replacement ...... 99
Disaster Recovery Scenarios ...... 100
Granite Appliance Failure—Failover ...... 100
Granite Appliance Failure—Failback ...... 101
Best Practice for LUN Snapshot Rollback ...... 103
At-Rest and In-Flight Data Security ...... 104
Enable Data At-Rest Block Store Encryption ...... 104
Enable Data In-Flight Secure Peering Encryption ...... 106

Chapter 9 - Granite Appliance Upgrade ...... 107
Planning Software Upgrades ...... 107
Upgrade Sequence ...... 108
Minimize Risk During Upgrading ...... 108
Performing the Upgrade ...... 109
Granite Edge Upgrade ...... 109
Granite Core Upgrade ...... 109

Chapter 10 - Network Quality of Service ...... 111
Rdisk Protocol Overview ...... 111
QoS for Granite Replication Traffic ...... 113
QoS for LUNs ...... 113
QoS for Unpinned LUNs ...... 113
QoS for Pinned LUNs ...... 113
QoS for Branch Offices ...... 113
QoS for Branch Offices That Mainly Read Data from the Data Center ...... 114
QoS for Branch Offices Booting Virtual Machines from the Data Center ...... 114
Time-Based QoS Rules Example ...... 114

Chapter 11 - Deployment Best Practices ...... 115
Granite Edge Best Practices ...... 115
Segregate Traffic ...... 116
Pin the LUN and Prepopulate the Block Store ...... 116
Segregate Data onto Multiple LUNs ...... 116
Ports and Type of Traffic ...... 116
Changing IP Addresses on Granite Edge, ESXi Host, and Servers ...... 117
Configure Disk Management ...... 118
Rdisk Traffic Routing Options ...... 118
Deploying Granite with Third-Party Traffic Optimization ...... 119
Windows and ESX Storage Layout—Granite-Protected LUNs Versus Local LUNs ...... 119
VMFS Datastores Deployment on Granite LUNs ...... 123
Enable Windows Persistent Bindings for Mounted iSCSI LUNs ...... 124
Set Up Memory Reservation for VMs Running on VMware ESXi in the VSP ...... 125
Boot from an Unpinned iSCSI LUN ...... 126
Running Antivirus Software ...... 126
Running Disk Defragmentation Software ...... 126
Running Backup Software ...... 126
Configure Jumbo Frames ...... 127
Granite Core Best Practices ...... 127
Deploy on Gigabit Ethernet Networks ...... 127
Use Challenge Handshake Authentication Protocol ...... 127
Configure Initiators and Storage Groups or LUN Masking ...... 127
Granite Core Hostname and IP Address ...... 128
Segregate Storage Traffic from Management Traffic ...... 128
When to PIN and Prepopulate the LUN ...... 128
Granite Core Configuration Export ...... 129
Granite Core in HA Configuration Replacement ...... 129
iSCSI Initiators Timeouts ...... 129
Microsoft iSCSI Initiator Timeouts ...... 129
ESX iSCSI Initiator Timeouts ...... 130
Patching ...... 130
Patching at the Branch Office for Virtual Servers Installed on iSCSI LUNs ...... 130
Patching at the Data Center for Virtual Servers Installed on iSCSI LUNs ...... 130

Chapter 12 - Granite Appliance Sizing ...... 133
General Sizing Considerations ...... 133
Granite Core Sizing Guidelines ...... 134
Granite Edge Sizing Guidelines ...... 135

Appendix A - Granite Edge Network Reference Architecture ...... 137
Converting In-Path Interfaces to Data Interfaces ...... 137
Multiple VLAN Branch with Four-Port Data NIC ...... 139
Single VLAN Branch with Four-Port Data NIC ...... 141
Multiple VLAN Branch Without Four-Port Data NIC ...... 143

Preface

Welcome to the Granite Core and Edge Appliance Deployment Guide. Read this preface for an overview of the information provided in this guide and the documentation conventions used throughout, hardware and software dependencies, additional reading, and contact information. This preface includes the following sections:

 “About This Guide” on page 1

 “Documentation and Release Notes” on page 2

 “Contacting Riverbed” on page 3

 “What Is New” on page 3

About This Guide

The Granite Core and Edge Appliance Deployment Guide provides an overview of the Granite Core and Granite Edge appliances and shows you how to install and configure them as a system.

Riverbed product names have changed. At the time of publication, the user interfaces of the products described in this guide may not have changed, and the original names may be used in the text. For the product naming key, see http://www.riverbed.com/products/#Product_List.

Note: This document uses the old product name, Granite, instead of SteelFusion in the Management Console, the diagrams, and the text.

This guide includes information relevant to the following products:

 Riverbed Granite product family (Granite appliances or Granite)

 Riverbed Granite Core appliance (Granite Core)

 Riverbed Granite Core Virtual Edition (Core VE)

 Riverbed Granite Edge appliance (Granite Edge)

 Riverbed Optimization System (RiOS)

 Riverbed SteelHead EX (SteelHead EX)

 Riverbed SteelHead (SteelHead)

 Riverbed SteelCentral Controller for SteelHead (SCC or Controller)

 Riverbed Virtual Services Platform (VSP)

This guide is intended to be used together with all the documentation and technical notes available at https://support.riverbed.com.

Audience

This guide is written for storage and network administrators who are familiar with administering and managing storage arrays, snapshots, backups, VMs, Fibre Channel, and iSCSI. This guide requires you to be familiar with virtualization technology, the Granite Core Management Console User’s Guide, the Riverbed Command-Line Interface Reference Manual, the Riverbed Granite Core Command-Line Interface Reference Manual, and the SteelHead Management Console User’s Guide.

Document Conventions

This guide uses the following standard set of typographical conventions.

Convention    Meaning

italics       Within text, new terms, emphasized words, and REST API URIs appear in italic typeface.

boldface      Within text, CLI commands, CLI parameters, and REST API properties appear in bold typeface.

Courier       Code examples appear in Courier font:
              amnesiac > enable
              amnesiac # configure terminal

< >           Values that you specify appear in angle brackets: interface <ip-address>

[ ]           Optional keywords or variables appear in brackets: ntp peer <addr> [version <number>]

{ }           Required keywords or variables appear in braces: {delete <filename> | upload <filename>}

|             The pipe symbol represents a choice to select one keyword or variable to the left or right of the symbol. The keyword or variable can be either optional or required: {delete <filename> | upload <filename>}

Documentation and Release Notes

To obtain the most current version of all Riverbed documentation, go to the Riverbed Support site at https://support.riverbed.com. If you need more information, see the Riverbed Knowledge Base for any known issues, how-to documents, system requirements, and common error messages. You can browse titles or search for keywords and strings. To access the Riverbed Knowledge Base, log in to the Riverbed Support site at https://support.riverbed.com. Each software release includes release notes. The release notes identify new features in the software as well as known and fixed problems. To obtain the most current version of the release notes, go to the Software and Documentation section of the Riverbed Support site at https://support.riverbed.com.

Examine the release notes before you begin the installation and configuration process.

Contacting Riverbed

This section describes how to contact departments within Riverbed.

 Technical support - If you have problems installing, using, or replacing Riverbed products, contact Riverbed Support or your channel partner who provides support. To contact Riverbed Support, open a trouble ticket by calling 1-888-RVBD-TAC (1-888-782-3822) in the United States and Canada or +1 415-247-7381 outside the United States. You can also go to https://support.riverbed.com.

 Professional services - Riverbed has a staff of professionals who can help you with installation, provisioning, network redesign, project management, custom designs, consolidation project design, and custom coded solutions. To contact Riverbed Professional Services, email [email protected] or go to http://www.riverbed.com/services-training/Services-Training.html.

 Documentation - The Riverbed Technical Publications team continually strives to improve the quality and usability of Riverbed documentation. Riverbed appreciates any suggestions you might have about its online documentation or printed materials. Send documentation comments to [email protected].

What Is New

Since the Granite Core and Edge Appliance Deployment Guide April 2014 release, the following information has been changed, added, or updated:

 Updated - “System Components and Their Roles” on page 7

 New - “Riverbed Turbo Boot” on page 18

 Updated - “Granite Core Ports” on page 22

 New - “Granite and Fibre Channel” on page 37

 New - “Interface and Port Configurations” on page 55

 Updated - “Cabling and Connectivity for Clustered Granite Cores” on page 65

 New - “Configuring WAN Redundancy” on page 81

 Updated - “Branch Recovery” on page 92

 New - “Security and Data Resilience” on page 97

 New - “Granite Appliance Upgrade” on page 107

 New - “Network Quality of Service” on page 111

 Updated - “Deployment Best Practices” on page 115

 Moved to its own chapter and updated - “Granite Appliance Sizing” on page 133

 Updated - “Converting In-Path Interfaces to Data Interfaces” on page 137

CHAPTER 1 Overview of Granite Core and Granite Edge as a System

This chapter describes the Granite Core and Granite Edge appliances as a virtual storage system. It includes the following sections:

 “Introducing Branch Converged Infrastructure” on page 5

 “How the Granite Product Family Works” on page 6

 “System Components and Their Roles” on page 7

Introducing Branch Converged Infrastructure

Granite Core and Granite Edge consolidate branch data and applications at the data center while delivering LAN performance at the branch office over the WAN. By functioning as a branch converged infrastructure, the Granite product family eliminates the need for dedicated storage, including management and related backup resources, at the branch office. With the Granite product family, storage administrators can extend a data center storage array to a remote location, even over a low-bandwidth link. Granite delivers business agility, enabling you to effectively deliver global storage infrastructure anywhere you need it.

Note: Granite Edge functionality is available in SteelHead EX v1.0.2 and above.

The Granite product family provides the following functionality:

 Innovative block storage optimization ensures that you can centrally manage data storage while keeping that data available to business operations in the branch, even in the event of a WAN outage.

 A local authoritative cache ensures LAN-speed reads and fast cold writes at the branch.

 Integration with Microsoft Volume Shadow Copy Service enables consistent point-in-time data snapshots and seamless integration with backup applications.

 Integration with the snapshot capabilities of the storage array enables you to configure application-consistent snapshots through the Granite Core Management Console.

 Integration with industry-standard Challenge-Handshake Authentication Protocol (CHAP) authenticates users and hosts.

 A secure vault protects sensitive information using AES 256-bit encryption.

 Solid-state disks (SSDs) guarantee data durability and performance.

 An active-active high-availability (HA) deployment option for Granite ensures the availability of storage array logical unit numbers (LUNs) for remote sites.

 Customizable reports provide visibility into key utilization, performance, and diagnostic information.

By consolidating all storage at the data center and creating diskless branches, Granite eliminates data sprawl, costly data replication, and the risk of data loss at the branch office.

How the Granite Product Family Works

The Granite product family is designed to enable branch office server systems to efficiently access storage arrays over the WAN. The Granite product family is typically deployed in conjunction with SteelHeads, and it comprises the following components:

 Granite Core - Granite Core is a physical or virtual appliance deployed in the data center alongside SteelHeads and the centralized storage array. Granite Core mounts iSCSI LUNs provisioned for the branch offices. Additionally, Core VE can mount LUNs via Fibre Channel.

 Granite Edge - Granite Edge runs as a module on SteelHead EX or as a stand-alone appliance, and it presents a virtual iSCSI target in the branch. Granite Edge presents itself as a storage portal to the application servers in the branch, whether they run inside VSP on the SteelHead EX or outside on physical or virtual server platforms. From the portal, the application server can mount iSCSI LUNs that are projected across the WAN from the data center. Granite Edge can also host local LUNs that are not projected from the data center, for use as temporary storage: for example, temporary data or local copies of software repositories.

For the matrix of storage systems that have been qualified by the original storage vendor, qualified by Riverbed engineers, or field tested by Riverbed customers and partners, see the Granite Appliance Interoperability Matrix at https://support.riverbed.com.

The branch office server connects to Granite Edge, which implements handlers for the iSCSI protocol. Granite Edge also connects to the block store, a persistent local cache of storage blocks. When the branch office server requests blocks, those blocks are served locally from the block store; if they are not present, Granite Edge retrieves them from the data center LUN through Granite Core. Similarly, newly written blocks are spooled to the local cache, acknowledged by the Granite Edge to the branch office server, and then asynchronously propagated to the data center. Because each Granite Edge implementation is linked to one or more dedicated LUNs at the data center, the block store is authoritative for both reads and writes and can tolerate WAN outages without affecting cache coherency.

Blocks are communicated between Granite Edges and Granite Cores through an internal protocol. The Granite Core then writes the updates to the data center LUNs through the iSCSI or Fibre Channel protocol. (Optionally, you can further optimize traffic between the branch offices and the data center by implementing SteelHeads.) For more information about Fibre Channel, see “Granite and Fibre Channel” on page 37.

Granite initially populates the block store using the following methods (a worked example follows the list):

 Reactive prefetch - The system observes block requests, applies heuristics based on these observations to intelligently predict the blocks most likely to be requested in the near future, and then requests those blocks from the data center LUN in advance.

 Policy-based prefetch - Configured policies identify the blocks that are likely to be requested at a given branch office site in advance; the Granite Edge then requests those blocks from the data center LUN in advance.

 First request - Blocks are added to the block store when first requested. Because the first request is cold, it is subject to standard WAN latency. Subsequent traffic is optimized.
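For example, consider a branch Windows server booting from a LUN projected through Granite Edge: blocks that were prefetched reactively, prefetched by policy, or read previously are served at LAN speed from the block store, while a block requested for the first time is fetched through Granite Core from the data center LUN and incurs one WAN round trip. Newly written blocks are acknowledged locally and propagated to the data center asynchronously, so branch write latency does not depend on the WAN link.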

System Components and Their Roles

At the data center, Granite Core integrates with existing storage systems and SteelHead deployments. Granite Core connects dedicated LUNs with each Granite Edge at the branch office.

The block store is the Granite Edge authoritative persistent cache of storage blocks. The block store is local from a branch perspective and holds blocks from all the LUNs available through a specific Granite Edge. The block store is authoritative because it includes the latest-written blocks before they are sent through the Granite Core to a storage array at the data center.

When a server at the branch office requests data blocks, those blocks are served locally from the block store if they are currently present. If they are not present, Granite Edge retrieves them from the data center LUN. Similarly, newly written blocks are spooled to the block store cache, acknowledged by the branch Granite Edge, and then asynchronously flushed to the data center. Blocks are communicated between Granite Edges and Granite Core through an internal protocol. When the Granite Core receives the data, it writes the updates to the LUN on the storage array through the iSCSI or Fibre Channel protocol.

The data cache in the block store is stored as is; it is not deduplicated. Granite appliances are designed to be coupled with SteelHead products, which assist with data reduction and streamlining between Granite Edge and Granite Core.

You can encrypt the block store cache at rest using AES 128/192/256-bit encryption. This encryption eliminates the risk of data exposure if your appliances are stolen. Similarly, removing physical tape media and backup devices from the remote offices eliminates the same risk. As a result, the block store eliminates the need for separate block storage facilities at the branch office and all the associated maintenance, tools, backup services, hardware, service resources, and so on. For more information about block store encryption, see “At-Rest and In-Flight Data Security” on page 104.

Branch office performance depends on the Granite Edge platform, not the data center storage array.

The following diagram illustrates a generic deployment.

Figure 1-1. Generic Granite Deployment

The basic Granite system components are:

 Branch server - The branch-side server that accesses data from the Granite system instead of a local storage device. This server can also run as a VSP VM on the local SteelHead EX.

 Block store - A persistent local cache of storage blocks. Because each Granite Edge is linked to a dedicated LUN at the data center, the block store is authoritative for both reads and writes. In Figure 1-1, the block store on the branch side synchronizes with LUN1 at the data center.

 iSCSI Initiator - The branch-side client that sends SCSI commands to its iSCSI target (for example, the Granite Edge).

 Granite-enabled SteelHead EX - Also referred to as a Granite Edge appliance, the branch-side component of the Granite system links the edge server to the block store and links the block store to the iSCSI target and LUN at the data center. The SteelHead provides WAN optimization services.

 Data center SteelHead - The data center-side SteelHead peer for WAN optimization.

 Granite Core - The data center component of the Granite product family. Granite Core manages block transfers between the LUN and the Granite Edge.

 iSCSI target - The data center-side storage array that communicates with the branch-side iSCSI Initiator.

 LUNs - Units of storage deployed from the storage array to a branch through Granite Edge.

CHAPTER 2 Deploying Granite Core and Granite Edge as a System

This chapter describes the process and procedures for deploying the Granite product family at both branch offices and the data center. This chapter is a general introduction to one of the possible scenarios to form a basic, but typical, Granite deployment. Further details on specific stages of deployment, such as Granite Core and Granite Edge configuration, high availability, configuration scenarios for snapshots, and so on, are covered in following chapters of this guide. This chapter includes the following sections:

 “The Granite Family Deployment Process” on page 9

 “Single-Appliance Versus High-Availability Deployments” on page 12

 “Connecting Granite Core with Granite Edge” on page 14

 “Riverbed Turbo Boot” on page 18

The Granite Family Deployment Process

This section provides a broad outline of the process for deploying the Granite product family. Depending on the type of deployment and products involved (for example, with or without redundancy, iSCSI or Fibre Channel-connected storage, and so on), the details of certain steps can vary. Use the outline below to create a deployment plan that is specific to your requirements. The steps are listed in approximate order; dependencies are listed when required. The tasks are as follows:

 “Provisioning LUNs on the Storage Array” on page 9

 “Installing the Granite Appliances” on page 10

 “LUN Pinning and Prepopulation in the Granite Core” on page 11

 “Configuring Snapshot and Data Protection Functionality” on page 11

 “Managing vSphere Datastores on LUNs Presented by Granite Core” on page 12

Provisioning LUNs on the Storage Array

This section describes how to provision LUNs on the storage array.

To provision LUNs on the storage array

1. Enable the connections for the type of LUNs you intend to expose to the branch: for example, iSCSI and Fibre Channel.

2. Determine the LUNs you want to dedicate to specific branches.

Note: Step 3 and Step 4 are optional. The LUNs to be exposed to the branch can be empty and populated later. You need to perform Step 3 and Step 4 only if you want to preload the LUNs with data: for example, if you require the LUNs to be preloaded with virtual machine images as part of the ESX datastore.

3. By connecting to a temporary ESX server, you can deploy virtual machines (VMs) for branch services (including the branch Windows server) to the LUNs. Riverbed recommends that you implement the optional Windows Server plug-ins at this point. For details, see “Implementing Riverbed Host Tools for Snapshot Support” on page 87.

4. After you deploy the VMs, disconnect from the temporary ESX server.

5. Create the necessary initiator or target groups. For more information, see the documentation for your storage array.

Installing the Granite Appliances

This section describes at a high level how to install and configure Granite Core and Granite Edge. For complete installation procedures, see the Granite Core Installation and Configuration Guide and the SteelHead EX Installation and Configuration Guide.

To install and configure Granite Core

1. Install Granite Core or Core VE in the data center network.

2. Connect the Granite Core appliance to the storage array.

3. Through Granite Core, discover and configure the desired LUNs on the storage array.

4. (Recommended) Enable and configure HA.

5. (Recommended) Enable and configure multipath I/O (MPIO). If you have decided to use MPIO, you must configure it at two separate and independent points:

 iSCSI Initiator on Granite Core

 iSCSI target

Additional steps are required on the Granite Edge to complete a typical installation. A high-level series of steps is shown in the following procedure.

To install and configure Granite Edge

1. Install the SteelHead EX in the branch office network.

2. On the SteelHead EX, configure disk management to enable Granite storage mode.

3. Preconfigure Granite Edge for connection to Granite Core.

4. Connect Granite Edge and Granite Core.

LUN Pinning and Prepopulation in the Granite Core

LUN pinning and prepopulation are two separate features configured through Granite Core that together determine how block data is kept in the block store on the Granite Edge.

When you pin a LUN in the Granite Core configuration, you reserve space in the Granite Edge block store that is equal in size to the LUN at the storage array. Furthermore, when blocks are fetched by the Granite Edge, they remain in the block store in their entirety; by contrast, blocks in unpinned LUNs might be cleared on a first-in, first-out basis. Pinning only reserves block store space; it does not populate that space with blocks. The block store is populated as the application server in the branch requests data not yet in the block store (causing the Granite Edge to issue a read through the Granite Core), or through prepopulation.

The prepopulation functionality enables you to prefetch blocks to the block store. You can prepopulate a pinned LUN on the block store in one step; however, if the number of blocks is very large, you can configure a prepopulation schedule that prepopulates the block store only during specific intervals of your choice: for example, not during business hours. After the prepopulation process is completed, the schedule stops automatically.

For more information about pinning and prepopulation, see the Granite Core Management Console User’s Guide.
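As a sizing illustration (the numbers here are hypothetical), pinning a 500-GB LUN to a Granite Edge with a 2-TB block store immediately reserves 500 GB of block store space for that LUN, whether or not any blocks have been prepopulated yet, and leaves roughly 1.5 TB to be shared by the working sets of any unpinned LUNs mapped to the same Granite Edge.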

To configure pinning and prepopulation

1. Choose Configure > Manage: LUNs to display the LUNs page.

2. Click the LUN configuration to display the configuration settings.

3. Select the Pin/Prepop tab.

4. To pin the LUN, select Pinned from the drop-down list and click Update. When the LUN is pinned, the prepopulation settings are activated for configuration.

Configuring Snapshot and Data Protection Functionality

Granite Core integrates with the snapshot capabilities of the storage array and enables you to configure application-consistent snapshots through the Granite Core Management Console. For details, see “Snapshots and Data Protection” on page 85.

Understanding Crash Consistency and Application Consistency

In the context of snapshots, backups, and data protection in general, two types or states of data consistency are distinguished:

 Crash consistency - A backup or snapshot is crash consistent if all of the interrelated data components are as they were (write-order consistent) at the instant of the crash. To better understand this type of consistency, imagine the status of the data on your PC’s hard drive after a power outage or similar event. A crash-consistent backup is usually sufficient for nondatabase operating systems and applications like file servers, DHCP Servers, print servers, and so on.

 Application consistency - A backup or snapshot is application consistent if, in addition to being write-order consistent, running applications have completed all their operations and flushed their buffers to disk (application quiescing). Application-consistent backups are recommended for database operating systems and applications such as SQL, Oracle, and Exchange.

The Granite product family ensures continuous crash consistency at the branch and at the data center by using journaling and by preserving the order of WRITEs across all the exposed LUNs. For application-consistent backups, administrators can directly configure and assign hourly, daily, or weekly snapshot policies. Granite Edge interacts directly with both VMware ESXi and Microsoft Windows servers, through VMware Tools and volume snapshot service (VSS), to quiesce the applications and generate application-consistent snapshots of both VMFS and NTFS data drives.

Managing vSphere Datastores on LUNs Presented by Granite Core

Through the vSphere client, you can view inside the LUN to see the VMs previously loaded in the data center storage array. You can add a LUN that contains vSphere VMs as a datastore to the ESX(i) server in the branch. This can be either a regular hardware platform hosting ESX(i) or the VSP inside the SteelHead EX. You can then add the VMs to the ESX(i) inventory and run them as services through the SteelHead EX. Similarly, you can use vSphere to provision LUNs with VMs from VSP on the SteelHead EX. For more information, see the SteelHead Management Console User’s Guide.

Single-Appliance Versus High-Availability Deployments

This section describes types of Granite appliance deployments. It includes the following topics:

 “Single-Appliance Deployment” on page 13

 “High-Availability Deployment” on page 13

This section assumes that you understand the basics of how the Granite product family works together and that you are ready to deploy your appliances.

Single-Appliance Deployment

In a single-appliance deployment (basic deployment), Granite Core connects to the storage array through the eth0_0 interface. The primary (PRI) interface is dedicated to the traffic VLAN, and the auxiliary (AUX) interface is dedicated to the management VLAN. More complex designs generally use the additional network interfaces. For more information about Granite Core interface names and their possible uses, see “Interface and Port Configuration” on page 22.

Figure 2-1. Single Appliance Deployment

High-Availability Deployment

In a high-availability (HA) deployment, two Granite Cores operate as failover peers. Both appliances operate independently with their respective and distinct Granite Edges until one fails; then the remaining operational Granite Core handles the traffic for both appliances. For more information about HA, see “Granite Appliance High-Availability Deployment” on page 61.

Figure 2-2. HA Deployment

Connecting Granite Core with Granite Edge

This section describes the prerequisites for configuring the data center and branch office components of the Granite product family, and it provides an overview of the procedures required. It includes the following topics:

 “Prerequisites” on page 14

 “Process Overview: Connecting the Granite Product Family Components” on page 14

 “Adding Granite Edges to the Granite Core Configuration” on page 16

 “Configuring Granite Edge” on page 16

 “Mapping LUNs to Granite Edges” on page 16

Prerequisites

Before you configure Granite Core with Granite Edge, ensure that the following tasks have been completed:

 Assign an IP address or hostname to the Granite Core.

 Determine the iSCSI Qualified Name (IQN) to be used for Granite Core. When you configure Granite Core, you set this value in the initiator configuration. (An example IQN format follows this list.)

 Set up your storage array:
– Register the Granite Core IQN.
– Configure the iSCSI portal, targets, and LUNs, with the LUNs assigned to the Granite Core IQN.

 Assign an IP address or hostname to the Granite Edge.
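The IQN format, defined in RFC 3720, consists of the string iqn, the year and month in which the naming authority registered its domain, the reversed domain name of the naming authority, and an optional colon-prefixed string that uniquely identifies the initiator. For example, an IQN for a data center Granite Core might look like the following (the domain and suffix shown here are placeholders, not values required by Granite):

iqn.2014-12.com.example:granite-core-dc1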

Process Overview: Connecting the Granite Product Family Components

The following table summarizes the process for connecting and configuring Granite Core and Granite Edge as a system. Note that you can perform some of the steps in the table in a different order, or even in parallel with each other in some cases. The sequence shown is intended to illustrate a method that enables you to complete one task so that the resources and settings are ready for the next task in the sequence.

Granite Core: Determine the network settings for Granite Core. Prior to deployment:
• Assign an IP address or hostname to Granite Core.
• Determine the IQN to be used for Granite Core. When you configure Granite Core, you set this value in the initiator configuration.

iSCSI-compliant storage array: Register the Granite Core IQN. Granite uses the IQN name format for iSCSI Initiators. For details about IQN, see http://tools.ietf.org/html/rfc3720.

iSCSI-compliant storage array: Prepare the iSCSI portals, targets, and LUNs, with the LUNs assigned to the Granite Core IQN. Prior to deploying Granite Core, you must prepare these components.

Fibre channel-compliant storage array: Enable Fibre Channel connections. For details, see “Granite and Fibre Channel” on page 37.

Granite Core: Install Granite Core. For details, see the Granite Core Installation and Configuration Guide.

Granite Edge: Install the Granite-enabled SteelHead. (A separate Granite license might be required.) For details, see the SteelHead EX Installation and Configuration Guide.

Granite Edge: Configure disk management. You can configure the disk layout mode to allow space for the Granite block store in the Disk Management page. Free disk space is divided between the Virtual Services Platform (VSP) and the Granite block store. For details, see “Configuring Disk Management” on page 59.

Granite Edge: Configure Granite storage settings. The Granite storage settings are used by Granite Core to recognize and connect to Granite Edge. For details, see “Configuring Granite Storage” on page 59.

Granite Core: Run the Setup Wizard to perform initial configuration. The Setup Wizard performs the initial, minimal configuration of Granite Core, including:
• Network settings
• iSCSI Initiator configuration
• Mapping LUNs to Granite Edges
For details, see the Granite Core Installation and Configuration Guide.

Granite Core: Configure iSCSI Initiators and LUNs. Configure the iSCSI Initiator and specify an iSCSI portal. This portal discovers all the targets within that portal. Add and configure the discovered targets to the iSCSI Initiator configuration.

Granite Core: Configure Targets. After a target is added, all the LUNs on that target can be discovered, and you can add them to the running configuration.

Granite Core: Map LUNs to Granite Edges. Using the previously defined Granite Edge self-identifier, connect LUNs to the appropriate Granite Edges. For details about the above procedures, see the Granite Core Management Console User’s Guide.

Granite Core: Configure CHAP users and storage array snapshots. Optionally, you can configure CHAP users and storage array snapshots. For details, see the Granite Core Management Console User’s Guide.

Granite Edge: Confirm the connection with Granite Core. After completing the Granite Core configuration, confirm that Granite Edge is connected to and communicating with Granite Core. For details, see “Mapping LUNs to Granite Edges” on page 16.

Adding Granite Edges to the Granite Core Configuration

You can add and modify connectivity with Granite Edge appliances in the Configure > Manage: Granite Edges page in the Granite Core Management Console. This procedure requires you to provide the Granite Edge Identifier for the Granite Edge appliance. This value is defined in the EX Features > Granite: Granite Storage page in the Granite Edge Management Console, or specified through the CLI. For more information, see the Granite Core Management Console User’s Guide, the Riverbed Granite Core Command-Line Interface Reference Manual, and the Riverbed Command-Line Interface Reference Manual.

Configuring Granite Edge

For information about Granite Edge configuration for deployment, see “Configuring the Granite Edge” on page 55.

Mapping LUNs to Granite Edges

This section describes how to configure LUNs and map them to Granite Edge appliances. It includes the following topics:

 “Configuring iSCSI Settings” on page 16

 “Configuring LUNs” on page 17

 “Configuring Granite Edges for Specific LUNs” on page 17

Configuring iSCSI Settings

You can view and configure the iSCSI Initiator, portals, and targets in the iSCSI Configuration page. The iSCSI Initiator settings configure how Granite Core communicates with one or more storage arrays through the specified portal configuration. After configuring the iSCSI portal, you can open the portal configuration to configure targets. For more information and procedures, see the Granite Core Management Console User’s Guide, the Riverbed Granite Core Command-Line Interface Reference Manual, and the Riverbed Command-Line Interface Reference Manual.

Configuring LUNs

You configure Block Disk (Fibre Channel), Edge Local, and iSCSI LUNs in the LUNs page. Typically, Block Disk and iSCSI LUNs are used to store production data. They share the space in the block store cache of the associated Granite Edges, and the data is continuously replicated and kept synchronized with the associated LUN in the data center. The Granite Edge block store caches only the working set of data blocks for these LUNs; additional data is retrieved from the data center when needed.

Block-disk LUN configuration pertains to Fibre Channel support. Fibre Channel is supported only in Core VE deployments. For more information, see “Configuring Fibre Channel LUNs” on page 28.

Edge local LUNs are used to store transient and temporary data or local copies of software distribution repositories. Local LUNs also use dedicated space in the block store cache of the associated Granite Edges, but the data is not replicated back to the data center LUNs.

Configuring Granite Edges for Specific LUNs

After you configure the LUNs and Granite Edges for the Granite Core appliance, you can map the LUNs to the Granite Edge appliances. You complete this mapping through the Granite Edge configuration in the Granite Core Management Console on the Configure > Manage: Granite Edges page. When you select a specific Granite Edge, the following controls for additional configuration are displayed.

Status
This panel displays the following information about the selected Granite Edge:
• IP Address - The IP address of the selected Granite Edge.
• Connection Status - Connection status to the selected Granite Edge.
• Connection Duration - Duration of the current connection.
• Total LUN Capacity - Total storage capacity of the LUN dedicated to the selected Granite Edge.
• Block Store Encryption - Type of encryption selected, if any.
The panel also displays a small-scale version of the Granite Edge Data I/O report.

Target Settings
This panel displays the following controls for configuring the target settings:
• Target Name - Displays the system name of the selected Granite Edge.
• Require Secured Initiator Authentication - Requires CHAP authorization when the selected Granite Edge is connecting to initiators. If the Require Secured Initiator Authentication setting is selected, you must set authentication to CHAP in the adjacent Initiator tab.
• Enable Header Digest - Includes the header digest data from the iSCSI protocol data unit (PDU).
• Enable Data Digest - Includes the data digest data from the iSCSI PDU.
• Update Target - Applies any changes you make to the settings in this panel.

Initiators
This panel displays controls for adding and managing initiator configurations:
• Initiator Name - Specify the name of the initiator you are configuring.
• Add to Initiator Group - Select an initiator group from the drop-down list.
• Authentication - Select the authentication method from the drop-down list:
  None - No authentication required.
  CHAP - Only the target authenticates the initiator. The secret is set just for the target; all initiators that want to access that target must use the same secret to begin a session with the target.
  Mutual CHAP - The target and the initiator authenticate each other. A separate secret is set for each target and for each initiator in the storage array.
  If Require Secured Initiator Authentication is selected for the Granite Edge in the Target Settings tab, authentication must be configured for a CHAP option.
• Add Initiator - Adds the new initiator to the running configuration.

Initiator Groups
This panel displays controls for adding and managing initiator group configurations:
• Group Name - Specifies a name for the group.
• Add Group - Adds the new group. The group name displays in the Initiator Group list.
After this initial configuration, click the new group name in the list to display additional controls:
• Click Add or Remove to control the initiators included in the group.

LUNs
This panel displays controls for mapping available LUNs to the selected Granite Edge. After mapping, the LUN displays in the list in this panel. To manage group and initiator access, click the name of the LUN to access additional controls.

Prepopulation
This panel displays controls for configuring prepopulation tasks:
• Schedule Name - Specify a task name.
• Start Time - Select the start day and time from the respective drop-down lists.
• Stop Time - Select the stop day and time from the respective drop-down list.
• Add Prepopulation Schedule - Adds the task to the Task list. This prepopulation schedule is applied to all virtual LUNs mapped to this appliance if you do not configure any LUN-specific schedules.
To delete an existing task, click the trash icon in the Task list. The LUN must be pinned to enable prepopulation. For more information, see “LUN Pinning and Prepopulation in the Granite Core” on page 11.

Riverbed Turbo Boot

Riverbed Turbo Boot uses the Windows Performance Toolkit to generate information that enables faster boot times for Windows VMs in the branch office on either external ESXi hosts or on VSP internal to SteelHead EX. Turbo Boot can improve boot times by two to ten times, depending on the customer scenario.

Note: Turbo Boot applies only to Windows VMs.

If you are booting a Windows server or client VM from an unpinned LUN, Riverbed recommends that you install the Riverbed Turbo Boot software on the Windows VM. The Riverbed Turbo Boot software supports the following operating systems:

 Windows Vista

 Windows 7

 Windows Server 2008

 Windows Server 2008 R2

 Windows Server 2012

 Windows Server 2012 R2

For installation information, see the Granite Core Installation and Configuration Guide.

CHAPTER 3 Deploying the Granite Core Appliance

This chapter describes the deployment processes specific to Granite Core. It includes the following sections:

 “Prerequisites” on page 21

 “Interface and Port Configuration” on page 22

 “Configuring the iSCSI Initiator” on page 27

 “Configuring LUNs” on page 27

 “Configuring Redundant Connectivity with MPIO” on page 29

 “Granite Core Pool Management” on page 30

Prerequisites

Complete the following tasks:

1. Install and connect the Granite Core in the data center network. Include both Granite Cores if you are deploying a high-availability solution. For more information, see the Granite Core Installation and Configuration Guide and the Granite Core Getting Started Guide.

2. Configure the iSCSI Initiators in the Granite Core using the iSCSI Qualified Name (IQN) format. Fibre Channel connections to the Core VE are also supported. For more information, see “Configuring Fibre Channel LUNs” on page 28.

3. Enable and provision LUNs on the storage array. Make sure to include registering the Granite Core IQN and configuring any required LUN masks. For details, see “Provisioning LUNs on the Storage Array” on page 9.

4. Define the Granite Edge identifiers so you can later establish connections between the Granite Core and the corresponding Granite Edge appliances. For details, see “Managing vSphere Datastores on LUNs Presented by Granite Core” on page 12.

Interface and Port Configuration

This section describes a typical port configuration. You might require additional routing configuration depending on your deployment scenario. This section includes the following topics:

 “Granite Core Ports” on page 22

 “Configuring Interface Routing” on page 23

 “Configuring Granite Core for Jumbo Frames” on page 26

Granite Core Ports

The following table summarizes the ports that connect Granite Core to your network.

Console - Connects the serial cable to a terminal device. You establish a serial connection to a terminal emulation program for console access to the Setup Wizard and the Granite Core CLI.

Primary (PRI) - Connects Granite Core to a VLAN switch through which you can connect to the Management Console and the Granite Core CLI. You typically use this port for communication with Granite Edges.

Auxiliary (AUX) - Connects Granite Core to the management VLAN. You can connect a computer directly to the appliance with a crossover cable, enabling you to access the CLI or Management Console.

eth0_0 to eth0_3 - Connects the eth0_0, eth0_1, eth0_2, and eth0_3 ports of Granite Core to a LAN switch using a straight-through cable. You can use the ports either for iSCSI SAN connectivity or as failover interfaces when you configure Granite Core for high availability (HA) with another Granite Core. In an HA deployment, failover interfaces are usually connected directly between Granite Core peers using crossover cables. If you deploy Granite Core between two switches, all ports must be connected with straight-through cables.

eth1_0 onwards - Granite Cores have four gigabit Ethernet ports (eth0_0 to eth0_3) by default. For additional connectivity, you can install optional NICs in PCIe slots within the Granite Core. These slots are numbered 1 to 5. Supported NICs can be either 1 Gb or 10 Gb, depending on connectivity requirements. The NIC ports are automatically recognized by the Granite Core following a reboot. The ports are identified by the system as ethX_Y, where X corresponds to the PCIe slot number and Y corresponds to the port on the NIC. For example, a two-port NIC in PCIe slot 1 is displayed as having ports eth1_0 and eth1_1. Connect the ports to LAN switches or other devices using the same principles as the other Granite network ports. For more details about installing optional NICs, see the Network Interface Card Installation Guide.

For more information about the configuration of network ports, see the Granite Core Management Console User’s Guide.

Figure 3-1 shows a basic HA deployment indicating some of the Granite Core ports and the use of straight-through or crossover cables. For more information about HA deployments, see “Granite Appliance High-Availability Deployment” on page 61.

Figure 3-1. Granite Core Ports

Configuring Interface Routing

You configure interface routing in the Configure > Networking: Management Interfaces page of the Granite Core Management Console.

Note: If all the interfaces have IP addresses on separate subnets, you do not need additional routes.

This section describes the following scenarios:

 “All Interfaces Have Separate Subnet IP Addresses” on page 23

 “All Interfaces Are on the Same Subnets” on page 24

 “Some Interfaces, Except Primary, Share the Same Subnets” on page 25

 “Some Interfaces, Including Primary, Share the Same Subnets” on page 25

All Interfaces Have Separate Subnet IP Addresses

In this scenario, you do not need additional routes. The following table shows a sample configuration in which each interface has an IP address on a separate subnet.

Interface Sample Configuration Description

Auxiliary 192.168.10.2/24 - Management (and default) interface.
Primary 192.168.20.2/24 - Interface to WAN traffic.
eth0_0 10.12.5.12/16 - Interface for storage array traffic.
eth0_1 - Optional, additional interface for storage array traffic.



eth0_2 192.168.30.2/24 - HA failover peer interface, number 1.
eth0_3 192.168.40.2/24 - HA failover peer interface, number 2.

All Interfaces Are on the Same Subnets

If all interfaces are in the same subnet, only the primary interface has a route added by default. You must configure routing for the additional interfaces. The following table shows a sample configuration.

Interface Sample Configuration Description

Auxiliary 192.168.10.1/24 - Management (and default) interface.
Primary 192.168.10.2/24 - Interface to WAN traffic.
eth0_0 192.168.10.3/24 - Interface for storage array traffic.

To configure additional routes

1. In the Granite Core Management Console, choose Configure > Networking: Management Interfaces.

Figure 3-2. Routing Table on the Management Interfaces Page

2. Under Main IPv4 Routing Table, use the following controls to configure routing as necessary.

Control Description

Add a New Route - Displays the controls for adding a new route.
Destination IPv4 Address - Specify the destination IP address for the out-of-path appliance or network management device.
IPv4 Subnet Mask - Specify the subnet mask. For example, 255.255.255.0.
Gateway IPv4 Address - Optionally, specify the IP address for the gateway.
Interface - From the drop-down list, select the interface.
Add - Adds the route to the table list.


3. Repeat for each interface that requires routing.

4. Click Save to save your changes permanently.

You can also perform this configuration using the ip route CLI command. For details, see the Riverbed Granite Core Command-Line Interface Reference Manual.
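For example, using the sample configuration above, the following CLI session adds a static route for a remote storage subnet that is reached through a gateway on the shared 192.168.10.0/24 network. This is a minimal sketch only: the destination subnet (10.50.0.0/16) and gateway address are illustrative assumptions, and binding a route to a specific interface might require additional parameters, so confirm the exact syntax in the Riverbed Granite Core Command-Line Interface Reference Manual.

    enable
    configure terminal
    ip route 10.50.0.0 255.255.0.0 192.168.10.254
    show ip route
    write memory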

Some Interfaces, Except Primary, Share the Same Subnets

If a subset of interfaces, excluding the primary interface, is in the same subnet, you must configure additional routes for those interfaces. The following table shows a sample configuration.

Interface Sample Configuration Description

Auxiliary 10.10.10.1/24 - Management (and default) interface.
Primary 10.10.20.2/24 - Interface to WAN traffic.
eth0_0 192.168.10.3/24 - Interface for storage array traffic.
eth0_1 192.168.10.4/24 - Additional interface for storage array traffic.

To configure additional routes

1. In the Granite Core Management Console, choose Configure > Networking: Management Interfaces.

2. Under Main IPv4 Routing Table, use the following controls to configure routing as necessary.

Control Description

Add a New Route - Displays the controls for adding a new route.
Destination IPv4 Address - Specify the destination IP address for the out-of-path appliance or network management device.
IPv4 Subnet Mask - Specify the subnet mask. For example, 255.255.255.0.
Gateway IPv4 Address - Optionally, specify the IP address for the gateway.
Interface - From the drop-down list, select the interface.
Add - Adds the route to the table list.

3. Repeat for each interface that requires routing.

4. Click Save to save your changes permanently.

You can also perform this configuration using the ip route CLI command. For details, see the Riverbed Granite Core Command-Line Interface Reference Manual.

Some Interfaces, Including Primary, Share the Same Subnets

If some but not all interfaces, including primary, are in the same subnet, you must configure additional routes for those interfaces.


The following table shows a sample configuration.

Interface Sample Configuration Description

Aux 10.10.10.2/24 - Management (and default) interface.
Primary 192.168.10.2/24 - Interface to WAN traffic.
eth0_0 192.168.10.3/24 - Interface for storage array traffic.
eth0_1 192.168.10.4/24 - Additional interface for storage array traffic.
eth0_2 20.20.20.2/24 - HA failover peer interface, number 1.
eth0_3 30.30.30.2/24 - HA failover peer interface, number 2.

To configure additional routes

1. In the Granite Core Management Console, choose Configure > Networking: Management Interfaces.

2. Under Main IPv4 Routing Table, use the following controls to configure routing as necessary.

Control Description

Add a New Route - Displays the controls for adding a new route.
Destination IPv4 Address - Specify the destination IP address for the out-of-path appliance or network management device.
IPv4 Subnet Mask - Specify the subnet mask. For example, 255.255.255.0.
Gateway IPv4 Address - Optionally, specify the IP address for the gateway.
Interface - From the drop-down list, select the interface.
Add - Adds the route to the table list.

3. Repeat for each interface that requires routing.

4. Click Save to save your changes permanently.

You can also perform this configuration using the ip route CLI command. For details, see the Riverbed Granite Core Command-Line Interface Reference Manual.

Configuring Granite Core for Jumbo Frames

If your network infrastructure supports jumbo frames, Riverbed recommends that you configure the connection between the Granite Core and the storage system as described in this section. Depending on how you configure Granite Core, this might mean configuring the primary interface or one or more data interfaces. In addition to configuring Granite Core for jumbo frames, you must configure the storage system and any switches, routers, or other network devices between Granite Core and the storage system.

To configure Granite Core for jumbo frames

1. From the Granite Core Management Console, choose Configure > Networking and open the relevant page (Management Interfaces or Data Interfaces) for the interface used by the Granite Core to connect to the storage network. For example, eth0_0.


2. On the interface on which you want to enable jumbo frames:

 Enable the interface.

 Select the Specify IPv4 Address Manually option and enter the correct value for your implementation.

 For the MTU setting, Riverbed recommends that you specify 9000 bytes.

3. Click Apply to apply the settings to the current configuration.

4. Click Save to save your changes permanently.

To configure jumbo frames on your storage array, see the documentation from your storage array vendor.
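If you prefer the CLI, an equivalent MTU change might look like the following sketch. The interface mtu syntax is an assumption based on the RiOS-style CLI, and eth0_0 is only an example interface, so verify the command against the Riverbed Granite Core Command-Line Interface Reference Manual before using it.

    enable
    configure terminal
    interface eth0_0 mtu 9000
    show interfaces eth0_0
    write memory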

Configuring the iSCSI Initiator

The iSCSI Initiator settings dictate how Granite Core communicates with one or more storage arrays through the specified portal configuration. iSCSI configuration includes:

 Initiator name

 Enabling header or data digests (optional)

 Enabling CHAP authorization (optional)

 Enabling MPIO and standard routing for MPIO (optional)

MPIO functionality is described separately in this document. For more information, see “Configuring Redundant Connectivity with MPIO” on page 29.

In the Granite Core Management Console, you can view and configure the iSCSI Initiator, local interfaces for MPIO, portals, and targets in the Configure > Storage: iSCSI, Initiators, MPIO page. For more information, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, use the following commands to access and manage iSCSI Initiator settings:

 storage lun modify auth-initiator to add or remove an authorized iSCSI Initiator to or from the LUN

 storage iscsi data-digest to include or exclude the data digest in the iSCSI protocol data unit (PDU)

 storage iscsi header-digest to include or exclude the header digest in the iSCSI PDU

 storage iscsi initiator to access numerous iSCSI configuration settings
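The following configuration-mode sketch shows how these commands might be combined. Only the command names are taken from this guide; the enable keywords are assumptions, and lines beginning with # are comments rather than CLI input. Check each command in the Riverbed Granite Core Command-Line Interface Reference Manual before using it.

    enable
    configure terminal
    # Include the header and data digests in the iSCSI PDUs (the enable keyword is assumed):
    storage iscsi header-digest enable
    storage iscsi data-digest enable
    # Use storage iscsi initiator to set the initiator name and related settings, and
    # storage lun modify auth-initiator to authorize or remove an initiator for a LUN.
    write memory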

Configuring LUNs

This section includes the following topics:

 “Exposing LUNs” on page 28

 “Configuring Fibre Channel LUNs” on page 28

 “Removing a LUN from a Granite Core Configuration” on page 28


Before you can configure LUNs in Granite Core, you must provision the LUNs on the storage array and configure the iSCSI Initiator. For more information, see “Provisioning LUNs on the Storage Array” on page 9 and “Configuring the iSCSI Initiator” on page 27.

Exposing LUNs

You expose LUNs by scanning for LUNs on the storage array, and then mapping them to Granite Edge appliances. After exposing LUNs, you can further configure them for failover, MPIO, snapshots, and pinning and prepopulation.

In the Granite Core Management Console, you can expose and configure LUNs in the Configure > Manage: LUNs page. For more information, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, you can expose and configure LUNs with the following commands:

 storage iscsi portal host rescan-luns to discover available LUNs on the storage array

 storage lun add to add a specific LUN

 storage lun modify to modify an existing LUN configuration

For more information, see the Riverbed Granite Core Command-Line Interface Reference Manual.
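Put together, a minimal CLI workflow for exposing a LUN might look like the following sketch. The portal address, LUN identifier, and alias are placeholders, the placement of the portal address and the alias keyword are assumptions, and lines beginning with # are comments rather than CLI input; confirm the exact arguments in the Riverbed Granite Core Command-Line Interface Reference Manual.

    enable
    configure terminal
    # Discover the LUNs that the storage array presents through a configured portal:
    storage iscsi portal host <portal-IP> rescan-luns
    # Add one of the discovered LUNs to the Granite Core configuration:
    storage lun add <LUN-identifier> alias <LUN-alias>
    # Use storage lun modify to map the LUN to a Granite Edge and set failover, MPIO,
    # snapshot, or pinning options.
    write memory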

Configuring Fibre Channel LUNs

The process of configuring Fibre Channel LUNs for Granite Core requires configuration in both the ESXi server and the Granite Core. For more information, see “Granite and Fibre Channel” on page 37 and the Fibre Channel on Granite Core Virtual Edition Solution Guide.

Removing a LUN from a Granite Core Configuration

This section describes the process to remove a LUN from a Granite Core configuration. This requires actions on both the Granite Core and the server running at the branch.

Note: In the following example procedure, the branch server is assumed to be a Windows server; however, similar steps are required for other types of servers.

To remove a LUN

1. At the branch where the LUN is exposed:

 Power down the local Windows server.

 If the Windows server runs on ESXi, you must also unmount and detach the LUN from ESXi.

2. At the data center, take the LUN offline in the Granite Core configuration. When you take a LUN offline, outstanding data is flushed to the storage array LUN and the block store cache is cleared. The offline procedure can take a few minutes.


Depending on the WAN bandwidth, latency, utilization, and the amount of data in the Granite Edge block store that has not yet been synchronized back to the data center, this operation can take anywhere from seconds to many minutes or even hours. Use the reports on the Granite Edge to determine how much data remains to be written back. Until all the data is safely synchronized back to the LUN in the data center, the Granite Core keeps the LUN in an offlining state. Only when the data is safe does the LUN status change to offline.

To take a LUN offline:

 CLI - Use the storage lun modify offline command.

 Management Console - Choose Configure > Manage: LUNs to open the LUNs page, select the LUN configuration in the list, and select the Details tab.

3. Remove the LUN configuration using one of the following methods:

 CLI - Use the storage lun remove command.

 Management Console - Choose Configure > Manage: LUNs to open the LUNs page, locate the LUN configuration in the list, and click the trash icon.

For details about CLI commands, see the Riverbed Granite Core Command-Line Interface Reference Manual. For details about using the Granite Core Management Console, see the Granite Core Management Console User’s Guide.
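In the CLI, the offline-then-remove sequence might look like the following sketch; the LUN alias is a placeholder, the argument order is an assumption, and lines beginning with # are comments rather than CLI input. Verify the commands in the Riverbed Granite Core Command-Line Interface Reference Manual.

    enable
    configure terminal
    # Take the LUN offline and wait for its status to change from offlining to offline:
    storage lun modify <LUN-alias> offline
    # After the status shows offline, remove the LUN configuration:
    storage lun remove <LUN-alias>
    write memory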

Configuring Redundant Connectivity with MPIO

The MPIO feature enables you to configure multiple physical I/O paths (interfaces) for redundant connectivity with the local network, storage system, and iSCSI Initiator. Both Granite Core and Granite Edge offer MPIO functionality; however, the two implementations are independent and do not affect each other.

MPIO in Granite Core

The MPIO feature enables you to connect Granite Core to the network and to the storage system through multiple physical I/O paths. Redundant connections help prevent loss of connectivity in the event of an interface, switch, cable, or other physical failure. You can configure MPIO at the following separate and independent points:

 iSCSI Initiator - This configuration allows you to enable and configure multiple I/O paths between Granite Core and the storage system. Optionally, you can enable standard routing if the iSCSI portal is not in the same subnet as the MPIO interfaces.

 iSCSI Target - This configuration allows you to configure multiple portals on the Granite Edge. Using these portals, an initiator can establish multiple I/O paths to the Granite Edge.

Configuring Granite Core MPIO Interfaces

You can configure MPIO interfaces through the Granite Core Management Console or the Granite Core CLI.


In the Granite Core Management Console, choose Configure > Storage Array: iSCSI, Initiator, MPIO. Configure MPIO using the following controls:

 Enable MPIO.

 Enable standard routing for MPIO. This option is required if the backend iSCSI portal is not in the same subnet as at least two of the MPIO interfaces.

 Add (or remove) local interfaces for the MPIO connections.

For details about configuring MPIO interfaces in the Granite Core Management Console, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, open the configuration terminal mode and run the following commands:

 storage iscsi session mpio enable to enable the MPIO feature

 storage iscsi session mpio standard-routes enable to enable standard routing for MPIO. This command is required if the backend iSCSI portal is not in the same subnet as at least two of the MPIO interfaces.

 storage lun modify mpio path to specify a path

These commands require additional parameters to identify the LUN. For details about configuring MPIO interfaces in the Granite Core CLI, see the Riverbed Granite Core Command-Line Interface Reference Manual.
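A corresponding CLI sketch follows. The command names come from the list above, but the LUN alias and path value are placeholders and the argument layout of storage lun modify mpio path is an assumption; lines beginning with # are comments rather than CLI input. Confirm the syntax in the Riverbed Granite Core Command-Line Interface Reference Manual.

    enable
    configure terminal
    storage iscsi session mpio enable
    # Required only if the backend iSCSI portal is not in the same subnet as the MPIO interfaces:
    storage iscsi session mpio standard-routes enable
    # Specify an additional path for a specific LUN (placeholders shown):
    storage lun modify <LUN-alias> mpio path <path-specification>
    write memory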

Granite Core Pool Management

This section describes Granite Core pool management. It includes the following topics:

 “Overview of Granite Core Pool Management” on page 30

 “Pool Management Architecture” on page 31

 “Configuring Pool Management” on page 31

 “Changing Pool Management Structure” on page 34

 “High Availability in Pool Management” on page 35

Overview of Granite Core Pool Management

Granite Core Pool Management simplifies the administration of large installations in which you need to deploy several Granite Cores. Pool management enables you to manage storage configuration and check storage-related reports on all the Granite Cores from a single Management Console.

Pool management is especially relevant to Core VE deployments in which LUNs are provided over Fibre Channel. VMware ESX has a limitation for raw device mapping (RDM) LUNs, which limits Core VE to 60 LUNs. In releases prior to Granite v3.0, to manage 300 LUNs, you needed to deploy five separate Core VEs. To ease Granite Core management, in Granite v3.0 and later you can combine Granite Cores into management pools.

In Granite v3.0 and later, you can enable access to the SteelHead REST API framework. This access enables you to generate a REST API access code for use in Granite Core pool management. You can access the REST API on the Configure > Pool Management: REST API Access page.

For more information about pool management, see the Granite Core Management Console User’s Guide.


Pool Management Architecture

Pool management is a two-tier architecture that allows each Granite Core to become either the manager or a member of a pool. A Granite Core can be part of only one pool. The pool is a single-level hierarchy with a flat structure, in which all members of the pool except the manager have equal priority and cannot themselves be managers of pools. The pool has a loose membership, in which pool members are not aware of one another, except for the manager.

Any Granite Core can be the manager of the pool, but the pool manager cannot be a member of any other pool. You can have up to 32 Granite Cores in one pool, not including the manager. The pool is dissolved when the manager is no longer available (unless the manager has an HA peer). Management of a pool can be taken over by a failover peer. However, a member's failover peer cannot be managed by the pool manager through that member, even if the failover peer is down. For details about HA, see “High Availability in Pool Management” on page 35.

From a performance perspective, it does not matter which Granite Core you choose as the manager. The resources required by the pool manager differ little, if at all, from those of regular Granite Core operations.

Figure 3-3. Granite Core Two-Tier Pool Management

Configuring Pool Management

This section describes how to configure pool management.


These are the high-level steps:

1. “To create a pool” on page 32

2. “To generate a REST access code for a member” on page 32

3. “To add a member to the pool” on page 33

You can configure pool management only through the Management Console.

To create a pool

1. Choose the Granite Core you want to become the pool manager.

2. From the Management Console of the pool manager, choose Configure > Pool Management: Edit Pool.

3. Specify a name for the pool in the Pool Name field.

4. Click Create Pool.

To generate a REST access code for a member

1. From the Management Console of the pool member, choose Configure > Pool Management: REST API Access.

Figure 3-4. REST API Access Page

2. Select Enable REST API Access and click Apply.

3. Select Add Access Code.

4. In the Description of Use field, specify a useful description, such as the name of the pool manager that will use this access code.

5. Select Generate New Access Code and click Add. A new code is generated.


6. Expand the new entry and copy the access code. Continue to “To add a member to the pool” on page 33 to finish the process.

Figure 3-5. REST API Access

Note: You can revoke access of a pool manager by removing the access code or disabling REST API access on the member.

Before you begin the next procedure, you need the hostnames or the IP addresses of the Granite Cores you want to add as members.

To add a member to the pool

1. From the Management Console of the pool manager, choose Configure > Pool Management: Edit Pool.

2. Select Add a Pool Member.

3. Add the member by specifying the hostname or the IP address of the member.

4. In the API Access Code field, paste the REST API access code that you generated on the Management Console of the pool member.


When a member is successfully added to the pool, the Pool Management page of the pool manager displays statistics about the members, such as health, number of LUNs, model, failover status, and so on.

Figure 3-6. Successful Pool Management Configuration

Changing Pool Management Structure

A pool manager can remove individual pool members or dissolve the whole pool. A pool member can release itself from the pool.

To remove a pool relationship for a single member or to dissolve the pool completely

1. From the Management Console of the pool manager, choose Configure > Pool Management: Edit Pool.

2. To remove an individual pool member, click the trash can icon in the Remove column for the member that you want to remove. To dissolve the entire pool, click Dissolve Pool.

Riverbed recommends that you release a member from a pool from the Management Console of the manager. Use the following procedure to release a member from the pool only if the manager is either gone or cannot contact the member.

To release a member from a pool

1. From the Management Console of the pool member, choose Configure > Pool Management: Edit Pool. You see a message stating that the appliance is currently part of a pool and identifying the Granite Core that manages it.


2. Click Release me from the Pool. This releases the member from the pool, but you continue to see the member in the pool table on the manager.

Figure 3-7. Releasing a Pool Member from the Member Management Console

3. Manually delete the released member from the manager pool table.

High Availability in Pool Management

When you use pool management in conjunction with an HA environment, Riverbed recommends that you configure both peers as members of the same pool. If you choose one of the peers to be a pool manager, its failover peer should join the pool as a member. Without pool management, Granite Core cannot manage its failover peer's storage configuration unless failover is active (the failover peer is down). With pool management, the manager can manage the failover peer's storage configuration even while the failover peer is up. The manager's failover peer can manage the manager's storage configuration only when the manager is down. The following scenarios show how you can use HA in pool management:

 The manager is down and its failover peer is active. In this scenario, when the manager is down, the failover peer can take over management of the pool. The manager's failover peer can manage storage configuration for the members of the pool using the same configuration as the manager.

 The member is down and its failover peer is active. When a member of a pool is down and it has a failover peer configured (and the peer is not the manager of the member), the failover peer takes over servicing the LUNs of the member. The failover peer can access the storage configuration of the member when it is down. However, the pool manager cannot access the storage configuration of the failed member. To manage storage configuration of the down member, you need to log in to the Management Console of its failover peer directly.

Note: The pool is dissolved when the manager is no longer available, unless the manager has an HA peer.

For more details about HA deployments, see “Granite Appliance High-Availability Deployment” on page 61.



CHAPTER 4 Granite and Fibre Channel

This chapter includes general information about Fibre Channel LUNs and how they interact with Granite. It includes the following sections:

 “Overview of Fibre Channel” on page 37

 “Deploying Fibre Channel LUNs on Core VEs” on page 43

 “Configuring Fibre Channel LUNs in a Core VE HA Scenario” on page 46

 “Populating Fibre Channel LUNs” on page 49

 “Best Practices and Recommendations” on page 50

 “Troubleshooting” on page 52

Overview of Fibre Channel

Core VE can connect to Fibre Channel LUNs at the data center and export them to the branch office as iSCSI LUNs. The iSCSI LUNs can then be mounted by the SteelHead EX VSP running the VMware ESX or ESXi hypervisor, by external ESX or ESXi servers, or directly by Microsoft Windows virtual servers through the Microsoft iSCSI Initiator. A virtual Windows file server running on VSP (Figure 4-1) can then share the mounted drive with branch office client PCs via the CIFS protocol. This section includes the following topics:

 “Fibre Channel LUN Considerations” on page 39

 “How VMware ESXi Virtualizes Fibre Channel LUNs” on page 39

 “How Core VE Connects to RDM Fibre Channel LUNs” on page 41


 “Requirements for Core VE and Fibre Channel SANs” on page 42

 “Specifics About Fibre Channel LUNs Versus iSCSI LUNs” on page 42

Figure 4-1. Granite Solution with Fibre Channel

Fibre Channel is the predominant storage networking technology for enterprise business. Fibre Channel connectivity is estimated to be at 78 percent versus iSCSI at 22 percent. IT administrators still rely on the known, trusted, and robust Fibre Channel technology.

Fibre Channel is a set of integrated standards developed to provide a mechanism for transporting data at the fastest rate possible with the least delay. In storage networking, Fibre Channel is used to interconnect host and application servers with storage systems. Typically, servers and storage systems communicate using the SCSI protocol. In a storage area network (SAN), the SCSI protocol is encapsulated and transported through Fibre Channel frames. The Fibre Channel (FC) protocol processing on the host servers and the storage systems is mostly carried out in hardware. Figure 4-2 shows the various layers in the FC protocol stack and the portions implemented in hardware and software for an FC host bus adapter (HBA). FC HBA vendors are Qlogic, Emulex, and LSI.

Figure 4-2. HBA FC Protocol Stack


Special switches are also required to transport Fibre Channel traffic. Vendors in this market are Cisco and Brocade. Switches implement many of the FC protocol services, such as name server, domain server, zoning, and so on. Zoning is particularly important because, in collaboration with LUN masking on the storage systems, it implements storage access control by limiting access to LUNs to specific initiators and servers through specific targets and LUNs. An initiator and a target are visible to each other only if they belong to the same zone.

LUN masking is an access control mechanism implemented on the storage systems. NetApp implements LUN masking through initiator groups, which enable you to define a list of worldwide names (WWNs) that are allowed to access a specific LUN. EMC implements LUN masking using masking views that contain storage groups, initiator groups, and port groups. LUN masking is important because Windows-based servers, for example, attempt to write volume labels to all available LUNs. This attempt can make the LUNs unusable by other operating systems and can result in data loss.

Fibre Channel LUN Considerations

Fibre Channel LUNs are distinct from iSCSI LUNs in several important ways:

 No MPIO configuration - Multipathing support is performed by the ESXi system.

 SCSI reservations - SCSI reservations are not taken on Fibre Channel LUNs.

 Additional HA configuration required - Configuring HA for Core VE failover peers requires that each appliance be deployed on a separate ESXi system.

 Maximum of 60 Fibre Channel LUNs per ESXi system - ESXi allows a maximum of 60 RDMs in a VM. Within a VM, an RDM is represented by a virtual SCSI device, and a VM can have only four virtual SCSI controllers with 15 virtual SCSI devices each.

How VMware ESXi Virtualizes Fibre Channel LUNs

The VMware ESXi hypervisor provides not only CPU and memory virtualization but also host-level storage virtualization, which logically abstracts the physical storage layer from virtual machines. Virtual machines do not access the physical storage or LUNs directly, but instead use virtual disks. To access virtual disks, a virtual machine uses virtual SCSI controllers. Each virtual disk that a virtual machine can access through one of the virtual SCSI controllers resides on a VMware Virtual Machine File System (VMFS) datastore, an NFS-based datastore, or a raw disk. From the standpoint of the virtual machine, each virtual disk appears as if it were a SCSI drive connected to a SCSI controller. Whether the actual physical disk device is being accessed through parallel SCSI, iSCSI, network, or Fibre Channel adapters on the host is transparent to the guest operating system.

Virtual Machine File System

In a simple configuration, the disks of virtual machines are stored as files on a Virtual Machine File System (VMFS). When guest operating systems issue SCSI commands to their virtual disks, the virtualization layer translates these commands to VMFS file operations.


Raw Device Mapping (RDM)

A raw device mapping (RDM) is a special file in a VMFS volume that acts as a proxy for a raw device, such as a Fibre Channel LUN. With the RDM, an entire Fibre Channel LUN can be directly allocated to a virtual machine.

Figure 4-3. ESXi Storage Virtualization


How Core VE Connects to RDM Fibre Channel LUNs

Core VE uses RDM to mount Fibre Channel LUNs and export them to the Granite Edge component running on the SteelHead EX at the branch office. Granite Edge exposes those LUNs as iSCSI LUNs to the branch office clients.

Figure 4-4. Granite Core-VM FC LUN to RDM Mapping

When Core VE interacts with an RDM Fibre Channel LUN, the following process takes place:

1. Core VE issues SCSI commands to the RDM disk.

2. The device driver in the Core VE operating system communicates with the virtual SCSI controller.

3. The virtual SCSI controller forwards the command to the ESXi virtualization layer or VMkernel.

4. The VMkernel performs the following tasks:

 Locates the RDM file in the VMFS.

 Maps the SCSI requests for the blocks on the RDM virtual disk to blocks on the appropriate Fibre Channel LUN.

 Sends the modified I/O request from the device driver in the VMkernel to the HBA.

5. The HBA performs the following tasks:

 Packages the I/O request according to the rules of the FC protocol.

 Transmits the request to the storage system.

 A Fibre Channel switch receives the request and forwards it to the storage system that the host wants to access.


Requirements for Core VE and Fibre Channel SANs

The following table describes the hardware and software requirements for deploying Core VE with Fibre Channel SANs.

Requirements Notes

SteelHead EX with EX v2.5 or later
Core VE with Granite v2.5 or later
VMware ESX/ESXi version 4.1 or later
Storage system, HBA, and combination supported in conjunction with ESX/ESXi systems - For details, see the VMware Compatibility Guide.
Reserve CPU(s) and RAM on the ESX/ESXi system - Granite Core model V1000U: 2 GB RAM, 2 CPU. Granite Core model V1000L: 4 GB RAM, 4 CPU. Granite Core model V1000H: 8 GB RAM, 8 CPU. Granite Core model V1500L: 32 GB RAM, 8 CPU. Granite Core model V1500H: 48 GB RAM, 12 CPU.
Fibre Channel license on the storage system - In some storage systems, Fibre Channel is a licensed feature.

Specifics About Fibre Channel LUNs Versus iSCSI LUNs

Using Fibre Channel LUNs on Core VE in conjunction with VMware ESX/ESXi differs from using iSCSI LUNs directly on the Granite Core in a number of ways.

Feature Fibre Channel LUNs vs. iSCSI LUNs

Multipathing - The ESX/ESXi system, and not Granite Core, performs multipathing for the Fibre Channel LUNs.
VSS snapshots - Snapshots created using the Microsoft Windows diskshadow command are not supported on Fibre Channel LUNs.
SCSI reservations - SCSI reservations are not taken on Fibre Channel LUNs.
Granite Core HA deployment - Active and failover Core VEs must be deployed on separate ESX/ESXi systems.
Max 60 Fibre Channel LUNs per ESX/ESXi system - ESX/ESXi systems enable a maximum of 4 SCSI controllers. Each controller supports a maximum of 15 SCSI devices. Hence, a maximum of 60 Fibre Channel LUNs are supported per ESX/ESXi system.
VMware vMotion not supported - Core VEs cannot be moved to a different ESXi server using VMware vMotion.
VMware HA not supported - A virtual Granite Core appliance cannot be moved to another ESXi server via the VMware HA mechanism. It is recommended that you ensure the virtual Granite Core stays on a specific ESXi server by creating an affinity rule as described in this knowledge base article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005508


Deploying Fibre Channel LUNs on Core VEs

This section describes the process and procedures for deploying Fibre Channel LUNs on Core VEs. It includes the following sections:

 “Deployment Prerequisites” on page 43

 “Configuring Fibre Channel LUNs” on page 43

Deployment Prerequisites

Before you can deploy Fibre Channel LUNs on Core VEs, the following conditions must be met:

 The active Core VE must be deployed and powered up on the ESX/ESXi system.

 The failover Core VE must be deployed and powered up on the second ESX/ESXi system.

 A Fibre Channel LUN must be available on the storage system.

 Preconfigured initiator and storage groups for LUN mapping to the ESX/ESXi systems must be available.

 Preconfigured zoning on the Fibre Channel switch for LUN visibility to the ESX/ESXi systems across the SAN fabric must be available.

 You must have administrator access to the storage system, the ESX/ESXi system, and Granite appliances.

For more information about how to set up Fibre Channel LUNs with the ESX/ESXi system, see the VMware Fibre Channel SAN Configuration Guide and the VMware vSphere ESXi vCenter Server 5.0 Storage Guide.

Configuring Fibre Channel LUNs

Perform the procedures in the following sections to configure the Fibre Channel LUNs:

1. “Discovering and configuring Fibre Channel LUNs as Granite Core RDM disks on an ESX/ESXi system” on page 44

2. “Discovering and configuring exposed Fibre Channel LUNs through an ESX/ESXi system on the Core VE” on page 45


Discovering and configuring Fibre Channel LUNs as Granite Core RDM disks on an ESX/ESXi system

1. Navigate to the ESX system Configuration tab, click Storage Adapters, select the FC HBA, and click Rescan All to discover the Fibre Channel LUNs.

Figure 4-5. FC Disk Discovery

2. Right-click the name of the Core VE and select Edit Settings. The virtual machine properties dialog box opens.

3. Click Add and select Hard Disk for device type.

4. Click Next and select Raw Device Mappings for the type of disk to use.

Figure 4-6. Select Raw Device Mappings

5. Select the LUNs to expose to the Core VE. If you do not see the LUN, follow the steps described in “Troubleshooting” on page 52.

6. Select the datastore on which you want to store the LUN mapping.


7. Select Store with Virtual Machine.

Figure 4-7. Store mappings with VM

8. For compatibility mode, select Physical.

9. For advanced options, use the default virtual device node setting.

10. Review the final options and click Finish. The Fibre Channel LUN is now set up as an RDM and ready to be used by the Virtual Granite Core appliance.

Discovering and configuring exposed Fibre Channel LUNs through an ESX/ESXi system on the Core VE

1. From the Granite Core Management Console, choose Configure > Manage: LUNs and select Add a LUN.

2. Select Block Disk.

3. From the drop-down menu, select Rescan for new LUNs to discover the newly added RDM LUNs (Figure 4-8).

Figure 4-8. Rescan for new block disks


4. Select the LUN Serial Number.

5. Select Add Block Disk LUN to add it to the Core VE. Map the LUN to the desired Granite Edge and configure the access lists of the initiators.

Figure 4-9. Add New Block Disk

Configuring Fibre Channel LUNs in a Core VE HA Scenario

This section describes how to deploy Core VEs in HA environments. It includes the following topics:

 “The ESXi Servers Hosting the Core VEs Are Managed by vCenter” on page 47

 “The ESXi Servers Hosting the Core VEs Are Not Managed by vCenter” on page 49

Riverbed recommends that when you deploy Core VEs in an HA environment, you install the two appliances on separate ESX servers so that there is no single point of failure. You can deploy the Core VEs differently depending on whether the ESX servers hosting the Core VEs are managed by a vCenter or not. The methods described in this section are only relevant when Core VEs manage FC LUNs (also called block disk LUNs).


For both deployment methods, modify the storage system Storage Group to expose the LUN to both ESXi systems. Figure 4-10 shows that LUN 0 is assigned to the worldwide names of both ESXi HBAs.

Figure 4-10. Core VE HA Deployment

The ESXi Servers Hosting the Core VEs Are Managed by vCenter

In this scenario, two Core VEs are deployed in HA and hosted on ESX servers managed by vCenter. After you add a LUN as an RDM to Core VE1, vCenter does not present that LUN in the list of LUNs available to add as an RDM to Core VE2. The LUN is not available because the LUN filtering mechanism is turned on in vCenter by default to help prevent LUN corruption. One way to solve the problem is to add LUNs to the two Core VEs in HA without turning off LUN filtering, by using the following procedures. You must also have a shared datastore on a SAN that the ESXi hosts can access and that can be used to store the RDM-mapping files.

To add LUNs to the first Core VE

1. In the vSphere Client inventory, select the first Core VE and select Edit Settings. The Virtual Machine Properties dialog box opens.

2. Click Add, select Hard Disk, and click Next.

3. Select Raw Device Mappings and click Next.

4. Select the LUN to be added and click Next.

5. Select a datastore and click Next. This datastore must be on a SAN because you need a single shared RDM file for each shared LUN on the SAN.


6. Select Physical as the compatibility mode and click Next. A SCSI controller is created when the virtual hard disk is created.

7. Select a new virtual device node. For example, select SCSI (1:0), and click Next. This must be a new SCSI controller. You cannot use SCSI 0.

8. Click Finish to complete creating the disk.

9. In the Virtual Machine Properties dialog box, select the new SCSI controller and set SCSI Bus Sharing to Physical and click OK.

To add LUNs to the second Core VE

1. In the vSphere Client inventory, select the HA Core VE and select Edit Settings. The Virtual Machine Properties dialog box appears.

2. Click Add, select Hard Disk, and click Next.

3. Select Use an existing virtual disk and click Next.

4. In Disk File Path, browse to the location of the disk specified for the first node. Select Physical as the compatibility mode and click Next. A SCSI controller is created when the virtual hard disk is created.

5. Select the same virtual device node you chose for the first Core VE's LUN (for example, SCSI [1:0]), and click Next. The location of the virtual device node for this LUN must match the corresponding virtual device node for the first Core VE.

6. Click Finish.

7. In the Virtual Machine Properties dialog box, select the new SCSI controller and set SCSI Bus Sharing to Physical and click OK.

Keep in mind the following caveats:

 You cannot use SCSI controller 0; so the number of RDM LUNs supported on Core VE running on ESXi 5.x reduces from 60 to 48.

 You can change the SCSI controller SCSI bus sharing setting only when the Core VE is powered down, so you need to power down the Core VE each time you want to add a new controller. Each controller supports 16 disks.

 vMotion is not supported with Core VE.

Another solution is to turn off LUN filtering (RDM filtering) on the vCenter. You cannot disable LUN filtering per data center or per LUN in vCenter, so if you are not willing to turn off RDM filtering for the entire vCenter, this approach is not an option. If you are willing to turn off LUN filtering temporarily, do the following:

1. Turn off RDM filtering on vCenter. With LUN filtering off, you can add the LUNs as RDMs to both Core VEs.

2. Turn RDM filtering back on.


You must repeat these steps every time new LUNs are added to the Core VEs. However, VMware does not recommend turning LUN filtering off unless other methods are in place to prevent LUN corruption. Use this method with caution.

The ESXi Servers Hosting the Core VEs Are Not Managed by vCenter

When the ESX servers hosting the Core VEs in HA are not managed by the same vCenter, or are not managed by vCenter at all, you can add the LUNs as RDMs to both Core VEs without any issues or special configuration requirements.

Populating Fibre Channel LUNs

This section provides the basic steps you need to populate Fibre Channel LUNs prior to deploying them into Granite Core.

To populate a Fibre Channel LUN

1. Create a LUN (Volume) in the storage array and allow the ESXi host where the Granite Core is installed to access it.

2. Go to the ESXi host, choose Configuration > Advanced Settings > RdmFilter, and deselect RdmFilter to disable it. You must complete this step if you intend to deploy Granite Core in an HA configuration.

3. Navigate to the ESX system Configuration tab, click Storage Adapters, select the FC HBA, and click Rescan All… to discover the Fibre Channel LUNs (Figure 4-5 on page 44).

4. On the ESXi server, select Storage and click Add.

5. Select Disk/LUN for the storage type and click Next. You might need to wait a few moments before the new Fibre Channel LUN appears in the list.

6. Select the Fibre Channel drive and click Next.

7. Select VMFS-5 for the file system version and click Next.

8. Click Next, enter a name for the datastore, and click Next.

9. For Capacity, use the default setting of Maximum available space and click Next.

10. Click Finish.

11. Copy files from an existing datastore to the datastore you just added.

12. Select the new datastore and unmount it. You must unmount and detach the device, rescan it, and then reattach it before you can proceed.


To unmount and detach a datastore

1. To unmount the device, right-click the device in the Devices list and choose Unmount.

2. To detach the device, right-click the device in the Devices list and choose Detach.

3. Rescan the device twice.

4. Reattach the device by right-clicking the device in the Devices list and choosing Attach. Do not rescan the device.

To add the LUN to the Core VE

1. Right-click the Core VE and select Edit Settings.

2. Click Add and select Hard Disk.

3. Click Next, and when prompted to select a disk to use, choose Raw Device Mappings.

4. Select the target LUN to use.

5. Select the datastore on which to store the LUN mapping and choose Store with Virtual Machine.

6. Select Physical for compatibility mode.

7. For advanced options, use the default setting.

8. Review the final options and click Finish. The Fibre Channel LUN is now set up as an RDM and ready to be used by the Core VE.

When the LUN is projected in the branch site and is attached to the branch ESXi server (VSP or other device), you are prompted to select VMFS mount options. Select Keep the existing signature.

Best Practices and Recommendations

This section describes the best practices for deploying Fibre Channel on Core VE. Riverbed recommends that you follow these suggestions because they lead to designs that are easier to configure and troubleshoot, and they get the most out of your Granite appliances. This section includes the following topics:

 “Best Practices” on page 51

 “Recommendations” on page 51


Best Practices

The following table shows the Riverbed best practices for deploying Fibre Channel on Core VE.

Best Practice Description

Keep iSCSI and Fibre Channel LUNs on separate Granite Core appliances - Do not mix iSCSI and Fibre Channel LUNs in the same Core VE.
Use ESX/ESXi version 4.1 or later - Make sure that the ESX/ESXi system is running version 4.1 or later.
Use gigabit links - Make sure that you map the Core VE interfaces to gigabit links that are not shared with other traffic.
Dedicate physical NICs - Use one-to-one mapping between physical and virtual NICs for the Granite Core data interfaces.
Reserve CPU(s) and RAM - Reserve CPU(s) and RAM for the Virtual Granite Core appliance, following the guidelines listed in the following table.

The following table shows the CPU and RAM guidelines for deployment.

Model Memory Disk Space CPU Data Set Size Branches

VGC-1000-U 2 GB 25 GB 2 @ 2.2 GHz 2 TB 5
VGC-1000-L 4 GB 25 GB 4 @ 2.2 GHz 5 TB 10
VGC-1000-M 8 GB 25 GB 8 @ 2.2 GHz 10 TB 20
VGC-1500-L 32 GB 350 GB 8 @ 2.2 GHz 20 TB 30
VGC-1500-M 48 GB 350 GB 12 @ 2.2 GHz 35 TB 30

Recommendations

The following table shows the Riverbed recommendations for deploying Fibre Channel on Core VE.

Best Practice Description

Deploy a dual-redundant FC HBA - The FC HBA connects the ESXi system to the SAN. Dual-redundant HBAs help to keep an active path always available. ESXi multipath software is used for controlling and monitoring HBA failure. In case of path or HBA failure, the workload is failed over to the working path.
Use recommended practices for removing/deleting FC LUNs - Before deleting, offlining, or unmapping the LUNs from the storage system or removing the LUNs from the zoning configuration, remove the LUNs/block disks from the Granite Core and unmount the LUNs from the ESXi system. ESXi might become unresponsive and sometimes might need to be rebooted if all paths to a LUN are lost.
Do not use the block disks on the Granite Core - Fibre Channel LUNs (also known as block disks) are not supported on the physical Granite Core.


Troubleshooting

This section describes common deployment issues and solutions. If the FC LUN is not detected on the ESXi system on which the Core VE is running, try performing these debugging steps:

1. Rescan the ESXi system storage adapters.

2. Make sure that you are looking at the right HBA on ESXi system.

3. Make sure that the ESXi system has been allowed to access the FC LUN on the storage system, and check initiator and storage groups.

4. Make sure that the zoning configuration on the FC switch is correct.

5. Refer to VMware documentation and support for further assistance on troubleshooting FC connectivity issues.

If you deployed a VM on the LUN using the same ESXi system or ESXi cluster that you use to deploy the Core VE, and the datastore is still mounted, you might detect the FC LUN on the ESXi system, but the LUN does not appear in the list of LUNs that can be presented as RDMs to the Core VE. If this is the case, perform the following procedure to unmount the datastore from the ESXi system.

To unmount the datastore from the ESXi system

1. To unmount the FC VMFS datastore, select the Configuration tab, view Datastores, right-click a datastore, and select Unmount.

Figure 4-11. Unmounting a Datastore

2. To detach the corresponding device from ESXi, view Devices, right-click a device, and select Detach.


3. Rescan twice.

Figure 4-12. Rescanning a Device

4. To attach the device, view Devices, right-click the device, and select Attach.

5. Do not rescan; instead, view Devices and verify that the datastore is removed from the datastore list.

6. Readd the device as an RDM disk to the Core VE.

If the FC RDM LUN is not visible on the Core VE, try the following debugging procedures:

 Select the Rescan for new LUNs process on the Core VE several times.

 Check the Core VE logs for failures.



CHAPTER 5 Configuring the Granite Edge

This chapter describes the process of configuring Granite Edge at the branch office. It includes the following:

 “Interface and Port Configurations” on page 55

 “Granite Edge Storage Specifications” on page 58

 “Configuring Disk Management” on page 59

 “Configuring Granite Storage” on page 59

 “MPIO in Granite Edge” on page 60

Interface and Port Configurations

This section describes a typical port configuration. You might require additional routing configuration depending on your deployment scenario. This section includes the following topics:

 “Granite Edge Ports” on page 56

 “Configuring Granite Edge for Jumbo Frames” on page 57

 “Configuring iSCSI Initiator Timeouts” on page 58


Granite Edge Ports

The following table summarizes the interfaces that connect Granite Edge to your network.

Port Description

Console - The console port connects the serial cable to a terminal device. You establish a serial connection to a terminal emulation program for console access to the Setup Wizard and the SteelHead EX CLI.

Primary (PRI) - When Granite Edge is enabled in the SteelHead EX, this interface is typically used for iSCSI traffic. The iSCSI traffic is between external application servers in the branch office and the LUNs provided by the Granite Edge block store. This interface is also used to connect to Granite Core through the SteelHead EX in-path interface. If Granite Edge is not enabled on the SteelHead EX, you can use this port for the management VLAN.

Auxiliary (AUX) - When Granite Edge is enabled on the SteelHead EX, use this port to connect the SteelHead EX to the management VLAN. You can connect a computer directly to the appliance with a crossover cable, enabling you to access the CLI or Management Console of the SteelHead EX.

lan0_0 - The SteelHead EX uses one or more in-path interfaces to provide Ethernet network connectivity for optimized traffic. Each in-path interface comprises two physical ports: the LAN port and the WAN port. Use the LAN port to connect the SteelHead EX to the internal network of the branch office. When Granite Edge is enabled on the SteelHead EX, you can also use this port for a connection to the Primary port. This port enables the block store traffic sent between Granite Edge and Granite Core to transmit across the WAN link. The in-path interfaces and their corresponding LAN and WAN ports are individually identified as inpathX_Y, lanX_Y, and wanX_Y. The numbers increment with each additional in-path interface (for example, inpath0_0, lan0_0, wan0_0, and then inpath0_1, lan0_1, wan0_1, and then inpath1_0, and so on).

wan0_0 - The WAN port is the second of the two ports that comprise the SteelHead in-path interface. The WAN port is used to connect the SteelHead EX toward WAN-facing devices or other equipment located at the WAN boundary.

eth1_0 to eth1_3 - These ports are available as part of an optional four-port NIC addition to the SteelHead EX. Note that the SteelHead EX 560 and 760 models do not support the use of additional NICs. When configured for use by Granite Edge, the ports can provide additional iSCSI interfaces for storage traffic to external servers. These configured ports allow greater bandwidth and the ability to provide redundancy in the form of MPIO or Granite Edge clustering. The eth1_0, eth1_1, eth1_2, and eth1_3 ports of Granite Edge are connected to a LAN switch using a straight-through cable.

Note: All the above interfaces are gigabit capable. Where it is practical, use gigabit speeds on interface ports that are used for iSCSI traffic.


Figure 5-1 shows a typical combination of ports in use by the Granite Edge that is enabled on the SteelHead EX. Notice that the external application server in the branch can use both the primary port and the eth1_0 port of the Granite Edge for iSCSI traffic to and from NIC-A and NIC-B.

Figure 5-1. Granite Edge Ports

Configuring Granite Edge for Jumbo Frames

You can have one or more external application servers in the branch office that use the LUNs accessible from the Granite Edge iSCSI portal. If your network infrastructure supports jumbo frames, Riverbed recommends that you configure the connection between the Granite Edge and application servers as described below. If you are using VSP for hosting all your branch application servers, then you can ignore the following two procedures because the iSCSI traffic is internal to the Granite Edge.

Note: VSP VMs do not support jumbo frames.

In addition to configuring Granite Edge for jumbo frames, you must configure the external application servers and any switches, routers, or other network devices between Granite Edge and the application server for jumbo frame support.

To configure Granite Edge primary interface for jumbo frames

1. From the Granite Edge Management Console, choose Networking > Networking: Base Interfaces.

2. In the Primary Interface box:

 Select Enable Primary Interface.

 Select Specify IPv4 Address Manually option, and specify the correct values for your implementation.

 For the MTU setting, specify 9000 bytes.

3. Click Apply to apply the settings to the current configuration.


4. Click Save to save your changes permanently.

Note: For more details about interface settings, see the SteelHead Management Console User’s Guide.

To configure Granite Edge Ethernet interfaces for jumbo frames

1. From the Granite Edge Management Console, choose Configure > Networking: Data Interfaces.

2. In the Data Interface Settings box:

 Select the required data interface (for example: eth1_0).

 Select Enable Interface.

 Select the Specify IPv4 Address Manually option and specify the correct values for your implementation.

 For the MTU setting, specify 9000 bytes.

3. Click Apply to apply the settings to the current configuration.

4. Click Save to save your changes permanently.

For more details about interface settings, see the SteelHead Management Console User’s Guide. For more information about jumbo frames, see “Configure Jumbo Frames” on page 127.
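After the Granite Edge interface and the external application server are both configured for a 9000-byte MTU, you can confirm that jumbo frames pass end to end with a do-not-fragment ping carrying 8972 bytes of payload (8972 bytes of payload plus 28 bytes of IP and ICMP headers equals 9000 bytes). The target address is a placeholder for the Granite Edge iSCSI portal, and the exact ping options depend on the host operating system, so treat these commands as a sketch.

From a Windows application server:

    ping -f -l 8972 <granite-edge-iscsi-portal-IP>

From a Linux application server:

    ping -M do -s 8972 <granite-edge-iscsi-portal-IP>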

Configuring iSCSI Initiator Timeouts

The Granite Edge acts as the iSCSI portal for any internally hosted (VSP) application servers, and also for any external application servers. In the case of external servers, consider adjusting the iSCSI Initiator timeout settings on the server. This adjustment can improve the ability of the initiator to survive minor outages involving MPIO or other HA configurations. For more details and guidance, see “Microsoft iSCSI Initiator Timeouts” on page 129 and documentation provided by the iSCSI Initiator supplier.

Granite Edge Storage Specifications

The Granite Edge branch storage features are available only on the SteelHead EX xx60 models. You can configure how the free disk space on the SteelHead EX is divided between the block store and VSP. For details about the possible disk space allocations between VSP and Granite Edge storage on xx60 models and about installing and configuring xx60 model series appliances, see the SteelHead EX Installation and Configuration Guide.


Configuring Disk Management

You can configure the disk layout mode to allow space for the Granite Edge block store in the Administration > System Settings: Disk Management page of the SteelHead EX Management Console. Disk space in the SteelHead EX can be partitioned in different ways depending on how the appliance is used and which license has been purchased. This page does not allow you to allot disk space; you can only select the desired mode.

Note: You cannot change the disk layout mode unless all VSP slots are currently uninstalled. For details, see the SteelHead Management Console User’s Guide.

Note: You cannot change the disk layout when Granite Edge is already connected to a Granite Core. You must reboot after you change the disk layout. Riverbed recommends that you change the disk layout before connecting to a Granite Core for the first time.

For Granite deployments, choose one of the following modes:

 Extended VSP and Granite Storage Mode - Select this mode to keep the operating systems and production drives of the virtual servers running on the SteelHead EX local (that is, not consolidated at the data center).

 Granite Storage Mode - Select this mode to use the storage delivery capability of the Granite Edge. This mode dedicates most of the disk space to the Granite Edge block store while still allotting the required amount for VSP functionality and WAN optimization. This mode also allows you to consolidate at the data center the operating systems and production drives of the virtual servers running on the SteelHead EX.

 VSP and Granite Storage Mode - Evenly divides the available disk space between VSP and Granite functionality.

The remaining modes (not documented here) are for non-Granite deployments. For details about these other modes and disk layout configuration in general, see the SteelHead Management Console User’s Guide.

Configuring Granite Storage

On the Granite Edge, you complete the connection to the Granite Core in the EX Features > Granite: Granite Storage page by specifying the Granite Core IP address and defining the Granite Edge Identifier (among other settings). You need the following information to configure Granite Edge storage:

 Hostname/IP address of the Granite Core.

 Granite Edge Identifier, the value of which is used in the Granite Core-side configuration for mapping LUNs to specific Granite Edge appliances. The Granite Edge Identifier is case sensitive.

 If you configure failover, both appliances must use the same self-identifier. In this case, you can use a value that represents the group of appliances.


 Port number of Granite Core. The default port is 7970.

 The interface for the current Granite Edge to use when connecting with Granite Core.

For details about this procedure, see the SteelHead Management Console User’s Guide.

MPIO in Granite Edge

In Granite Edge, you enable multiple local interfaces through which the iSCSI Initiator can connect to the Granite Edge. Redundant connections help prevent loss of connectivity in the event of an interface, switch, cable, or other physical failure. In the Granite Core Management Console, navigate to the Configure > Storage Arrays: iSCSI, Initiators, MPIO page to access controls to add or remove MPIO interfaces. Once specified, the interfaces are available for the iSCSI Initiator to connect with the Granite Edge. For details, see the SteelHead Management Console User’s Guide for the SteelHead EX.


CHAPTER 6 Granite Appliance High-Availability Deployment

This chapter describes high-availability (HA) deployments for Granite Core and Granite Edge. It includes the following sections:

 “Overview of Storage Availability” on page 61

 “Granite Core High Availability” on page 62

 “Granite Edge High Availability” on page 73

 “Configuring WAN Redundancy” on page 81

For information about setting up Core VE HA, see “Configuring Fibre Channel LUNs in a Core VE HA Scenario” on page 46.

Overview of Storage Availability

Applications of any type that read and write data to and from storage can suffer from two fundamental types of availability loss:

 Loss of storage itself or access to the storage

 Loss of the data residing on the storage

As with a typical storage deployment, you might consider data HA and redundancy a mandatory requirement rather than an option. Applications accessing data expect the data, and the storage that the data resides on, to be available at all times. If for some reason the storage is not available, the application ceases to function.

The requirement to protect against loss of stored data is subtly different from that of normal lost data. In the case of data loss, whether due to accidental deletion, corruption, theft, or another event, it is a question of recovering the data from a snapshot, backup, or some other form of archive. Of course, being able to recover the lost data implies that you previously had a process to copy the data, whether through snapshot, backup, replication, or another data management operation.

In general, the net effect of data loss or lack of storage availability is the same—loss of productivity. But the two types of data loss are distinct and addressed in different ways. Data availability in conjunction with the Granite product family is documented in a number of white papers and other documents that describe the use of snapshot technology and data replication as well as backup and recovery tools. To read the white papers, go to https://support.riverbed.com.


The following sections discuss how to make sure you have storage availability in both Granite Core and Granite Edge deployments. It is worth noting that Granite Core HA and Granite Edge HA are independent from each other. That is to say, you can have Granite Core HA with no Granite Edge HA, and vice versa.

Granite Core High Availability

This section describes HA deployments for Granite Core. It contains the following topics:

 “Granite Core with MPIO” on page 62

 “Granite Core HA Concepts” on page 63

 “Configuring HA for Granite Core” on page 64

You can deploy a Granite Core as a single, stand-alone implementation. However, Riverbed strongly recommends that you always deploy Granite Cores in pairs in an HA cluster configuration. The storage arrays and the storage area network (SAN) that the Granite Core attaches to are generally deployed in a redundant manner. For more information about Granite Core HA clusters, see “Granite Core HA Concepts” on page 63. For more information about single-appliance implementation, see “Single-Appliance Deployment” on page 13.

In addition to the operational and hardware redundancy provided by the deployment of Granite Core clusters, you can also provide for network redundancy. When connecting to a SAN using iSCSI, Granite Cores support the use of multipath input and output (multipath I/O, or MPIO). MPIO uses two separate network interfaces on the Granite Core to connect to two separate iSCSI portals on the storage array. The storage array must support MPIO. Along with network redundancy, MPIO enables scalability by load-balancing storage traffic between the Granite Core and the storage array.

Note: MPIO is also supported on Granite Edge deployments in which the LUNs available from Granite Edge are connected to servers operating in the branch office.

For details about MPIO with Granite Edge, see “Granite Edge with MPIO” on page 77. For more details about MPIO with Granite Core, see “Granite Core with MPIO” on page 62. For information about setting up Core VE HA, see “Configuring Fibre Channel LUNs in a Core VE HA Scenario” on page 46.

Granite Core with MPIO

MPIO ensures that a failure of any single component (such as a network interface card, switch, or cable) does not result in a communication problem between the Granite Core and the storage array.


Figure 6-1 shows an example of a basic Granite Core deployment using MPIO. The figure shows a single Granite Core with two network interfaces connecting to the iSCSI SAN. The SAN has a simple full mesh network design that enables each Granite Core interface to connect to each iSCSI portal on the storage array.

Figure 6-1. Basic Topology for Granite Core MPIO

When you configure a Granite Core for MPIO, by default it uses a round-robin policy for any read operations to the LUNs in the storage array. Write operations use a fixed-path policy, only switching to an alternative path in the event of a path or portal failure. For more details about MPIO configuration for Granite Core, see the Granite Core Management Console User’s Guide.

Granite Core HA Concepts

A pair of Granite Cores deployed in an HA-failover cluster configuration is active-active. In other words, each Granite Core is primary for itself and secondary for its peer. Both peers in the cluster are attached to storage in the data center, but each is individually responsible for projecting one or more LUNs to one or more Granite Edge devices in branch locations. Each Granite Core is configured separately for the LUNs and Granite Edges it is responsible for.

When you enable failover on the Granite Core, you can choose which individual LUNs are part of the HA configuration. By default, all LUNs are automatically configured for failover. You can selectively disable failover on an individual LUN basis in the Management Console on the Configure > Manage: LUNs page. LUNs that are not included in the HA configuration are not available at the Granite Edge if the Granite Core fails.

As part of the HA deployment, you configure each Granite Core with the details of its failover peer. This configuration comprises the two IP addresses of network interfaces called failover interfaces. These interfaces are used for heartbeat and synchronization of the peer configuration. After the failover interfaces are configured, the failover peers use their heartbeat connections (failover interfaces) to share the details of their storage configuration. This information includes the LUNs they are responsible for and the Granite Edges they are projecting the LUNs to.


If either peer fails, the surviving Granite Core can take over control of the LUNs from the failed peer and continue projecting them to the Granite Edges.

Note: Make sure that you size both failover peers correctly so that they have enough capacity to support the other Granite Core storage in the event of a peer failure. If the surviving peer does not have enough resources (CPU and memory), then performance might degrade in a failure situation.

After a failed Granite Core has recovered, the failback is automatic.

Configuring HA for Granite Core

This section describes best practices and the general procedure for configuring high availability between two Granite Cores.

Note: Granite Core HA configuration is independent of Granite Edge HA configuration.

This section contains the following topics:

 “Cabling and Connectivity for Clustered Granite Cores” on page 65

 “Configuring Failover Peers” on page 66

 “Accessing a Failover Peer from a Granite Core” on page 70

 “Failover States and Sequences” on page 71

 “Removing Granite Cores from an HA Configuration” on page 71


Cabling and Connectivity for Clustered Granite Cores

Figure 6-2 shows an example of a basic HA topology including details of the different network interfaces used.

Note: Riverbed strongly recommends that you use crossover cables for connecting ports in clustered Granite Cores.

Figure 6-2. Basic Topology for Granite Core HA

In the scenario shown in Figure 6-2, both Granite Cores (Granite Core A and Granite Core B) connect to the storage array through their respective eth0_0 interfaces. Notice that the eth0_1 interfaces are not used in this example, but you could use them for MPIO or additional SAN connectivity. The Granite Cores communicate with each other using the failover interfaces that are configured as eth0_2 and eth0_3. Their primary interfaces are dedicated to the traffic VLAN that carries data to and from Granite Edge devices. The auxiliary interfaces are connected to the management VLAN and used to administer the Granite Cores.

You can administer a Granite Core from any of its configured interfaces, assuming they are reachable. However, Riverbed strongly recommends that you use the AUX interface as a dedicated management interface rather than one of the other interfaces that might be in use for storage data traffic.

When it is practical, Riverbed recommends that you use two dedicated failover interfaces for the heartbeat. Connect the interfaces through crossover cables and configure them using private IP addresses. This minimizes the risk of a split-brain scenario in which both Granite Core peers consider the other to have failed. If you cannot configure two dedicated interfaces for the heartbeat, an alternative is to specify the primary and auxiliary interfaces. Consider this option only if the traffic interfaces of both Granite Core peers are connecting to the same switch or are wired so that a network failure means one of the Granite Cores loses connection to all Granite Edge devices.

You can configure Granite Cores with additional NICs to provide more network interfaces. These NICs are installed in PCIe slots within the Granite Core. Depending on the type of NIC you install, the network ports can be 1-Gb Ethernet or 10-Gb Ethernet. In either case, you can use the ports for storage or heartbeat connectivity. The ports are identified as ethX_Y, where X corresponds to the PCIe slot (from 1 to 5) and Y refers to the port on the NIC (from 0 to 3 for a four-port NIC and from 0 to 1 for a two-port NIC). For more information about Granite Core ports, see “Interface and Port Configuration” on page 22.


You can use these additional interfaces for iSCSI traffic or heartbeat. Use the same configuration guidance already described for the eth0_0 to eth0_3 ports. Under normal circumstances the heartbeat interfaces need only be 1 Gb; therefore, it is simpler to keep using eth0_2 and eth0_3 as already described. However, there can be a need for 10-Gb connectivity to the iSCSI SAN, in which case you can use an additional NIC with 10-Gb ports in place of eth0_0 and eth0_1. If you install the NIC in PCIe slot 1 of the Granite Core, the interfaces are identified as eth1_0 and eth1_1 in the Granite Core Management Console.

When using multiple interfaces for storage connectivity in an HA deployment, Riverbed strongly recommends that all interfaces are matched in terms of their capabilities. Therefore, avoid mixing combinations of 1 Gb and 10 Gb for storage connectivity.

Configuring Failover Peers

You configure Granite Core high availability on the Configure > Failover: Failover Configuration page. To configure failover peers for Granite Core, you need to provide the following information for each of the Granite Core peers:

 The IP address of the peer appliance

 The local failover interface through which the peers exchange and monitor heartbeat messages

 An additional IP address of the peer appliance

 An additional local failover interface through which the peers exchange and monitor heartbeat messages

Figure 6-3 shows an example deployment with failover interface IP addresses. You can configure any interface as a failover interface, but to maintain some consistency Riverbed recommends that you configure and use eth0_2 and eth0_3 as dedicated failover interfaces.

Figure 6-3. Granite Core HA Failover Interface Design


Figure 6-4 shows the Failover Configuration page for Granite Core A, in which the peer is Granite Core B. The failover interface IP addresses are 20.20.20.22 and 30.30.30.33 through interfaces eth0_2 and eth0_3 respectively. The page shows eth0_2 and eth0_3 selected from the Local Interface drop-down list, and the IP addresses of the Granite Core B interfaces are completed. Notice that from the Configuration page you can select the interface that the failover peer’s Granite Edge devices use for their connections. This example shows that the primary interface has been chosen.

Figure 6-4. Granite Core Failover Configuration Page
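As an illustration of the dedicated heartbeat links described above, a point-to-point addressing plan connected with crossover cables might look like the following. The Granite Core A addresses here are assumptions added for the example; Granite Core B uses the failover IP addresses shown in Figure 6-4:

Granite Core A eth0_2: 20.20.20.21    <-- crossover -->    Granite Core B eth0_2: 20.20.20.22
Granite Core A eth0_3: 30.30.30.32    <-- crossover -->    Granite Core B eth0_3: 30.30.30.33

Keeping each heartbeat link on its own dedicated subnet, with no intermediate switch, makes it unambiguous which physical path the heartbeat uses.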


After you click Enable Failover, the Granite Core attempts to connect through the failover interfaces, sending the storage configuration to the peer. If successful, you see the Device Failover Settings shown in Figure 6-5.

Figure 6-5. Granite Core HA Failover Configuration Page 2


After the Granite Core failover has been successfully configured, you can log in to the Management Console of the peer Granite Core and view its Failover Configuration page. Figure 6-6 shows that the configuration page of the peer is automatically configured with the relevant failover interface settings from the other Granite Core.

Figure 6-6. Granite Core HA Peer Failover Configuration Page 3

Even though the relevant failover interfaces are automatically configured on the peer, you must configure the peer Preferred Interfaces for Granite Edge Connections. By default, the primary interface is selected. For more information about HA configuration settings, see the Granite Core Management Console User’s Guide.

In the Granite Core CLI, you can configure failover using the device-failover peer and device-failover peerip commands. To display the failover settings, use the show device-failover command. For more information, see the Granite Core Command-Line Interface Reference Manual.

If the failover configuration is not successful, details are available in the Granite Core log files and a message is displayed in the user interface. The failure can occur for any number of reasons. Some examples, along with items to check, are as follows:

 Unable to contact peer - Check the failover interface configurations (IP addresses, interface states and cables).

 Peer is already configured as part of a failover pair - Check that you have selected the correct Granite Core appliance.

 The peer configuration includes one or more LUNs that are already assigned to the other Granite Core in the failover pair - Check the LUN assignments and correct the configuration.


 The peer configuration includes one or more Granite Edge devices that are already assigned to the other Granite Core in the failover pair - Check the Granite Edge assignments and correct the configuration.

After the failover configuration is complete and active, the configurations of the two peers in the cluster are periodically exchanged through a TCP connection using port 7971 on the failover interfaces. If you change or save either Granite Core configuration, the modified configuration is sent to the failover peer. In this way, each peer always has the latest configuration details of the other.

You configure any Granite Edge appliance that is connecting to a Granite Core HA configuration with the primary Granite Core details (hostname or IP address). After it connects to the primary Granite Core, the Granite Edge is automatically updated with the peer Granite Core information. This information ensures that during a Granite Core failover situation in which a Granite Edge loses its primary Granite Core, the secondary Granite Core can signal the Granite Edge that it is taking over. The automatic update also minimizes the configuration activities required at the Granite Edge, regardless of whether you configure Granite Core HA.
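If you prefer to verify the failover settings from the command line, you can use the CLI commands referenced earlier in this section. A minimal sketch, run on either peer (the output format depends on the software release):

enable
show device-failover

Running the same check after any change to the failover interfaces confirms that both peers still agree on the peer IP addresses and the current failover state.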

Accessing a Failover Peer from a Granite Core

When you configure a Granite Core for failover with a peer Granite Core, all storage configuration pages include an additional feature that enables you to access and modify settings for both the current appliance you are logged in to and its failover peer. You can use a drop-down list below the page title to select Self (the current appliance) or Peer. The page includes the message Device Failover is enabled, along with a link to the Failover Configuration page. Figure 6-7 shows two sample iSCSI Configuration pages: one without HA enabled and one with HA enabled, showing the drop-down list.

Figure 6-7. Failover-Enabled Feature on Storage Configuration Pages

Note: Because you can change and save the storage configuration settings for the peer in a Granite Core HA deployment, ensure that any configuration changes are made for the correct Granite Core.


Additionally, the Granite Core storage report pages include a message that indicates when device failover is enabled, along with a link to the Failover Configuration page. You must log in to the peer Granite Core to view the storage report pages for the peer.

Failover States and Sequences

While performing their primary functions associated with projecting LUNs, each Granite Core in an HA deployment uses its heartbeat interfaces to check whether the peer is still active. By default, the peers check each other every three seconds through a heartbeat message. The heartbeat message is sent through TCP port 7972 and contains the current state of the peer that is sending the message. The state is one of the following:

 ActiveSelf - The Granite Core is healthy, running its own configuration and serving its LUNs as normal. It has an active heartbeat with its peer.

 ActiveSolo - The Granite Core is healthy but the peer is down. It is running its own configuration and that of the failed peer. It is serving its LUNs and also the LUNs of the failed peer.

 Passive - The default state when Granite Core starts up. Depending on the status of the peer, the Granite Core state transitions to ActiveSolo or ActiveSelf.

If there is no response after three consecutive heartbeats, the secondary Granite Core declares the primary failed and initiates a failover. Because both Granite Cores in an HA deployment are primary for their own functions and secondary for the peer, whichever Granite Core fails, it is the secondary that takes control of the LUNs from the failed peer.

After the failover is initiated, the following sequence of events occurs:

1. The secondary Granite Core preempts a SCSI reservation to the storage array for all of the LUNs that the failed Granite Core is responsible for in the HA configuration.

2. The secondary Granite Core contacts all Granite Edge appliances that are being served by the failed (primary) Granite Core.

3. The secondary Granite Core begins serving LUNs to the Granite Edge appliances.

The secondary Granite Core continues to issue heartbeat messages. Failback is automatic after the failed Granite Core comes back online and can send its own heartbeat messages again.

Removing Granite Cores from an HA Configuration

This section describes the procedure for removing two Granite Cores from their failover configuration. The basic steps of this task are to:

1. force one of the Granite Cores into a failed state by stopping its service.

2. disable failover on the other Granite Core.

3. start the service on the first Granite Core again.

4. disable the failover on the second Granite Core.


You can perform these steps using either the Management Console or the CLI. Figure 6-8 shows an example configuration.

Figure 6-8. Example Configuration of Granite Core HA Deployment

To remove Granite Cores from an HA deployment using the Management Console (as shown in Figure 6-8)

1. From the Management Console of Granite Core A, choose Settings > Maintenance: Service page.

2. Stop the Granite Core service.

3. From the Management Console of Granite Core B, choose Configure > Failover: Failover Configuration.

4. Click Disable Failover.

5. Return to the Management Console of Granite Core A, and choose Settings > Maintenance: Service page.

6. Start the Granite Core service.

7. From the Management Console of Granite Core A, choose Configure > Failover: Failover Configuration.

8. Click Disable Failover.

9. Click Activate Local Configuration.

Granite Core A and Granite Core B are no longer operating in an HA configuration.

To remove Granite Cores from an HA deployment using the CLI (as shown in Figure 6-8)

1. Connect to the CLI of Granite Core A and enter the following commands to stop the Granite Core service:

enable
configure terminal
no service enable


2. Connect to the CLI of Granite Core B and enter the following commands to clear the local failover configuration:

enable
configure terminal
device-failover peer clear
write memory

3. Return to the CLI of Granite Core A and enter the following commands to start the Granite Core service, clear the local failover configuration, and return to nonfailover mode:

enable
configure terminal
service enable
device-failover peer clear
device-failover self-config activate
write memory

Granite Core A and Granite Core B are no longer operating in an HA configuration.

Granite Edge High Availability

This section describes HA deployments for Granite Edge. It contains the following topics:

 “Using the Correct Interfaces for Granite Edge Deployment” on page 74

 “Choosing the Correct Cables” on page 76

 “Overview of Granite Edge HA” on page 77

 “Granite Edge HA Peer Communication” on page 80

Note: This section assumes that you understand the procedures for VSP HA, Granite Edge HA, and Granite storage, and have read the relevant sections in the SteelHead Management Console User’s Guide for SteelHead EX (xx60).

Granite Edge presents itself as a storage portal to application servers in the branch, whether they are hosted inside VSP on the SteelHead EX or run externally on physical or virtual server platforms. Depending on the model of SteelHead EX, the Granite Edge function is either a licensed option or a no-cost option. From a Granite Edge perspective, the SteelHead EX can be deployed with all three functions enabled or in Granite-only mode, in which VSP and Granite Edge are the only functions available on the appliance. Note that in Granite-only mode, both Data Streamlining and Transport Streamlining are still applied to the Rdisk connections for each projected LUN. This process requires the presence of a SteelHead in the data center in which the Granite Core is located.

Depending on the requirements in the branch, Granite Edge can offer both projected LUNs and local LUNs. Projected LUNs are hosted within storage arrays in the data center and projected by a Granite Core device across the WAN to the Granite Edge. Both read and write operations are serviced by the Granite Edge, and any written data is asynchronously sent back across the WAN link to the data center. Local LUNs are hosted internally by the Granite Edge, and any read and write operations are performed only within the storage of the Granite Edge itself. Whether the LUNs are pinned, unpinned, or local, they all occupy disk capacity in the block store of the Granite Edge.

The following sections describe configuring Granite Edge in HA, with iSCSI access by the application servers to the block store and to its contents (the LUNs).


Using the Correct Interfaces for Granite Edge Deployment

This section reviews the network interfaces on SteelHead EXs and how you can configure them for Granite Edge. For more information about Granite Edge ports, see “Granite Edge Ports” on page 56. By default, all SteelHead EXs are equipped with the following physical interfaces:

 Primary

 Auxiliary

 lan0_0

 wan0_0

 lan0_1

 wan0_1

Traditionally, the LAN and WAN interface pairs are used by the SteelHead as an in-path interface for WAN optimization. The primary and auxiliary interfaces are generally used for management and other services, such as RiOS data store synchronization between SteelHead pairs. A SteelHead EX configured with Granite Edge can use these interfaces in different ways. For details about port usage for both Granite Edge and VSP, see the SteelHead Management Console User’s Guide for SteelHead EX (xx60).

While there are many combinations of port usage, you can generally expect that iSCSI traffic to and from external servers in the branch uses the primary interface. Likewise, the Rdisk traffic to and from the Granite Core uses the primary interface by default and is routed through the SteelHead inpath0_0 interface. The Rdisk traffic gains some benefit from WAN optimization. Management traffic for the SteelHead and Granite Edge typically uses the auxiliary interface.

Figure 6-9 shows a basic configuration example for a Granite Edge deployment. The Granite Edge traffic flows for Rdisk and iSCSI traffic are shown.

Figure 6-9. Basic Interface Configuration for Granite Edge with External Servers


Figure 6-10 shows no visible sign of iSCSI traffic. This is because the servers that are using the LUNs projected from the data center are hosted within the VSP resident on the SteelHead EX. Therefore, all iSCSI traffic is internal to the appliance. If there is no other need for the SteelHead or Granite Edge functions to be connected for general branch office WAN optimization purposes (in the case of a Granite-only deployment), then the primary interface can be connected directly to the lan0_0 interface using a crossover cable, enabling the Rdisk traffic to flow in and out of the primary interface. In this case, management of the appliance is performed through the auxiliary interface.

Figure 6-10. Basic Interface Configuration for Granite Edge with Servers Hosted in VSP

Figure 6-11 shows a minimal interface configuration. The iSCSI traffic is internal to the appliance because the servers are hosted within VSP. Because you can configure Granite Edge to use the SteelHead in-path interface for Rdisk traffic, this makes for a very simple and nondisruptive deployment. The primary interface is still connected and can be used for management. Riverbed does not recommend this type of deployment for permanent production use, but it can be suitable for a proof of concept where a more complicated design is not required.

Figure 6-11. Alternative Interface Configuration for Granite Edge with Servers Hosted in VSP

Riverbed recommends that you make full use of all the connectivity options available in the SteelHead EX for production deployments of Granite Edge. Careful planning can ensure that important traffic flows, such as iSCSI traffic to external servers, Rdisk traffic to and from the Granite Core, and block store synchronization for high availability, are kept apart from each other. This separation helps with ease of deployment, creates a more defined management framework, and simplifies any potential troubleshooting activity.


Depending on the model, the SteelHead EX can be shipped or configured in the field with one or more additional four-port network interface cards (NICs). By default, when the additional NIC is installed, the SteelHead recognizes it as a four-port bypass NIC that you can use for WAN optimization. You must reconfigure the NIC if you want Granite Edge to use it for iSCSI traffic and high availability, using the hardware nic slot command. The command requires the number of the slot where the NIC is located and the mode of operation. For example:

amnesiac (config) # hardware nic slot 1 mode data

This command configures the four-port NIC in slot one of the SteelHead EX into data mode so that it can be used exclusively by the Granite Edge. For more details about this command, consult the latest version of the Riverbed Command-Line Interface Reference Manual. For additional details about four-port network interface cards, see “Granite Edge Network Reference Architecture” on page 137.

Granite Edge requires an additional four-port NIC in Granite Edge HA deployments. If you do not install an additional NIC, the primary and auxiliary interfaces easily become a bottleneck.

Choosing the Correct Cables

The LAN and WAN ports on the SteelHead bypass cards act like host interfaces during normal operation. During fail-to-wire mode, the LAN and WAN ports act as the ends of a crossover cable. Using the correct cable to connect these ports to other network equipment ensures proper operation during both fail-to-wire mode and normal operating conditions. This is especially important when you are configuring two SteelHead EXs in a serial in-path deployment for HA.

Riverbed recommends that you do not rely on automatic MDI/MDI-X to sense the cable type. The installation might be successful when the SteelHead is optimizing traffic, but it might not be successful if the in-path bypass card transitions to fail-to-wire mode. One way to help ensure that you use the correct cables during an installation is to connect the LAN and WAN interfaces of the SteelHead while the SteelHead is powered off. This proves that the devices on either side of the SteelHead can communicate correctly without any errors or other problems.

In the most common in-path configuration, a SteelHead LAN port is connected to a switch and the SteelHead WAN port is connected to a router. In this configuration, a straight-through Ethernet cable can connect the SteelHead LAN port to the switch, and you must use a crossover cable to connect the SteelHead WAN port to the router.

When you configure Granite Edge in HA, it is likely that you have one or more additional NICs installed in the SteelHead EX to provide extra interfaces. You can use the interfaces for MPIO and block store synchronization. In this scenario, configure the NIC for data mode. For details about configuring the NIC, see “Using the Correct Interfaces for Granite Edge Deployment” on page 74. When you configure a NIC for data mode, the individual interfaces do not behave in the same way as the LAN and WAN ports described previously. There is no fail-to-wire capability; instead, each interface (data port) behaves like a standard network interface port, and you can choose cables accordingly.


This table summarizes the correct cable usage in the SteelHead when you are connecting LAN and WAN ports or when you are connecting data ports.

Devices                     Cable
SteelHead to SteelHead      Crossover
SteelHead to router         Crossover
SteelHead to switch         Straight-through
SteelHead to host           Crossover

Overview of Granite Edge HA

This section describes HA features, design, and deployment of Granite Edge. You can assign the LUNs provided by Granite Edge (which are projected from the Granite Core in the data center) in a variety of ways. Whether used as a datastore for VMware ESXi in the VSP of the SteelHead EX, or for other hypervisors and discrete servers hosted externally in the branch office, the LUNs are always served from the Granite Edge using the iSCSI protocol. Because of this, you can achieve HA with Granite Edge by using one or both of the following two options:

 “Granite Edge with MPIO” on page 77

 “Granite Edge HA Using Block Store Synchronization” on page 79

Both of these options are independent of any Granite Core HA configuration in the data center that is projecting one or more LUNs to the Granite Edge. However, because of the different SteelHead EX and Granite Edge deployment options and configurations, there are several scenarios for HA. For example, you can consider hardware redundancy inside the SteelHead EX, such as multiple power supplies, RAID, or both, a form of HA. For more information, see the product specification documents.

Alternatively, when you deploy two SteelHead EXs in the branch, you can configure the VSP on both devices to provide an active-passive capability for any VMs that can be hosted on VSP. In this context, HA is purely from the point of view of the VMs themselves, and there is a separate SteelHead EX providing a failover instance of the VSP configuration. For more details about how to configure Granite Edge HA and VSP HA, see the SteelHead Management Console User’s Guide for SteelHead EX (Series xx60).

Granite Edge with MPIO

In a similar way to how you use Granite Core and data center storage arrays, you can use Granite Edge with MPIO at the branch. Using Granite Edge with MPIO ensures that a failure of any single component (such as a network interface card, switch, or cable) does not result in a communication problem between Granite Edge and the iSCSI Initiator in the host device at the branch.


Figure 6-12 shows a basic MPIO architecture for Granite Edge. In this example, the primary and eth1_0 interfaces of the Granite Edge are configured as the iSCSI portals, and the server interfaces (NIC-A and NIC-B) are configured as iSCSI Initiators. Combined with the two switches in the storage network, this basic configuration allows for failure of any of the components in the data path while continuing to enable the server to access the iSCSI LUNs presented by the Granite Edge.

Figure 6-12. Basic Topology for Granite Edge MPIO

While you can use other interfaces on the Granite Edge as part of an MPIO configuration, Riverbed recommends that you use the primary interface and one other interface that you are not using for another purpose. The Granite Edge can have an additional four-port NIC installed to provide extra interfaces. This is especially useful in HA deployments. The eth1_0 interface in this example is provided by the add-on four-port NIC. For more information about four-port NIC cards, see “Using the Correct Interfaces for Granite Edge Deployment” on page 74.

When using MPIO with Granite Edge, Riverbed recommends that you verify and adjust certain timeout variables of the iSCSI Initiator in the server to make sure that you get correct failover behavior. By default, the Microsoft iSCSI Initiator LinkDownTime timeout value is set to 15 seconds. This timeout value determines how much time the initiator holds a request before reporting an iSCSI connection error. If you are using Granite Edge in an HA configuration, and MPIO is configured in the Microsoft iSCSI Initiator of the branch server, change the LinkDownTime timeout value to 60 seconds to allow the failover to finish.
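As an illustration only, the Microsoft iSCSI Initiator timeout values are registry entries under the initiator driver's Parameters key. The class GUID below is the standard SCSI adapter class, but the 0000 instance number is an assumption that varies from system to system, so identify the instance that corresponds to the Microsoft iSCSI Initiator before changing anything. A hedged sketch from an elevated command prompt (a reboot, or a restart of the initiator, is typically required for the change to take effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v LinkDownTime /t REG_DWORD /d 60 /f

You can make the same change interactively with Registry Editor and verify it afterward with reg query against the same key. See “Microsoft iSCSI Initiator Timeouts” on page 129 for guidance on the recommended values.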

Note: When you view the iSCSI MPIO configuration from the ESXi vSphere management interface, even though both iSCSI portals are configured and available, only iSCSI connections to the active Granite Edge are displayed.

For more details about the specific configuration of MPIO, see the SteelHead Management Console User’s Guide for SteelHead EX (Series xx60).


Granite Edge HA Using Block Store Synchronization

While MPIO can cater to HA requirements involving network redundancy in the branch office, it still relies on the Granite Edge itself being available to serve LUNs. To survive a failure of the Granite Edge without downtime, you must deploy a second appliance. If you configure two appliances as an HA pair, the Granite Edge can continue serving LUNs without disruption to the servers in the branch, even if one of the Granite Edge devices fails. The LUNs served in a Granite Edge HA deployment can be used by the VSP of a SteelHead EX (which is itself hosting the Granite Edge function) and by external servers within the branch office.

The scenario described in this section has two Granite Edges operating in an active-standby role. This scenario applies regardless of whether the Granite Core is configured for HA in the data center. The active Granite Edge is connected to the Granite Core in the data center and responds to the read and write requests for the LUNs it is serving in the branch. This is effectively the same method of operation as with a single Granite Edge; however, there are some additional pieces that make up the complete picture for HA.

The standby Granite Edge does not service any of the read and write requests but is ready to take over from the active peer. As the server writes new data to LUNs through the block store of the active Granite Edge, the data is reflected synchronously to the standby peer block store. When the standby peer has acknowledged to the active peer that it has written the data to its own block store, the active peer then acknowledges the server. In this way, the block stores of the two Granite Edges are kept in lock step.

Figure 6-13 illustrates a basic HA configuration for Granite Edge. While this is a very simplistic deployment diagram, it highlights the importance of the best practice of using two dedicated interfaces between the Granite Edge peers to keep their block stores synchronized. With an additional four-port NIC installed in the SteelHead EXs, you can configure the Granite Edges to use eth1_2 and eth1_3 as their interfaces for synchronization and failover status. Using dedicated interfaces connected through crossover cables minimizes the risk of a split-brain scenario, in which both peer devices think the other has failed and start serving independently.

Figure 6-13. Basic Topology for Granite Edge High Availability


Granite Edge HA Peer Communication

When you configure two Granite Edges as active-standby peers for HA, they communicate with each other at regular intervals. The communication is required to ensure that the peers have their block stores synchronized and that they are operating correctly based on their status (active or standby).

The block store synchronization happens through two network interfaces that you configure for this purpose on the Granite Edge. Ideally, these are dedicated interfaces, preferably connected through crossover cables. Although it is not the preferred method, you can send block store synchronization traffic through other interfaces that are already being used for another purpose. If interfaces must be shared, avoid using the same interfaces for both iSCSI traffic and block store synchronization traffic, because these two types of traffic are likely to be quite intensive. Instead, use an interface that is more lightly loaded: for example, one carrying management traffic.

The interfaces used for the actual block store synchronization traffic are also used by each peer to check the status of the other through heartbeat messages. The heartbeat messages provide each peer with the status of the other peer and can include peer configuration details. A heartbeat message is sent by default every three seconds through TCP port 7972. If a peer fails to receive three successive heartbeat messages, a failover event can be triggered. Because heartbeat messages are sent in both directions between Granite Edge peers, there is a worst-case scenario in which failover can take up to 18 (3 x 3 x 2) seconds. Failovers can also occur due to administrative intervention: for example, rebooting or powering off a Granite Edge.

The block store synchronization traffic is sent between the peers using TCP port 7973. By default, the traffic uses the first of the two interfaces you configure. If that interface is not responding for some reason, the second interface is automatically used. If neither interface is operational, the Granite Edge peers enter a predetermined failover state based on the failure conditions. The failover state on a Granite Edge peer can be one of the following:

 Discover - Attempting to establish contact with the other peer

 Active sync - Actively serving client requests; the standby peer is in sync with the current state of the system

 Standby sync - Passively accepting updates from the active peer; in sync with the current state of the system

 Active Degraded - Actively serving client requests; cannot contact the standby peer

 Active Rebuild - Actively serving client requests; sending the standby peer updates that were missed during an outage

 Standby Rebuild - Passively accepting updates from the active peer; not yet in sync with the state of the system

For detailed information about how to configure two Granite Edges as active-standby failover peers, the various failover states that each peer can assume while in an HA deployment, and the procedure required to remove an active-standby pair from that state, see the SteelHead Management Console User’s Guide for SteelHead EX (Series xx60).


Configuring WAN Redundancy

This section describes how to configure WAN redundancy. It includes the following topics:

 “Configuring WAN Redundancy with No Granite Core HA” on page 81

 “Configuring WAN Redundancy in an HA Environment” on page 83

You can configure the Granite Core and Granite Edge with multiple interfaces to use with MPIO. You can consider this a form of local network redundancy. You can also use the interfaces to provide a degree of redundancy across the WAN between Granite Core and Granite Edge, so that a failure along the WAN path can be tolerated; this is called WAN redundancy. WAN redundancy provides multiple paths for connection in case the main Granite Core to Granite Edge link fails. If the Granite Core detects a connection failure, it has alternative links that it can fail over to. To configure WAN redundancy, you perform a series of steps on both the data center side and the branch side. You can use either the in-path interfaces (inpathX_Y) or the Ethernet interfaces (ethX_Y) for redundant WAN link configuration. In the examples below, the term intf refers to either an in-path or an Ethernet network interface.

Configuring WAN Redundancy with No Granite Core HA

This section describes how to configure WAN redundancy when you do not have a Granite Core HA deployment.


To configure WAN redundancy

1. Configure local interfaces on the Granite Edge. The interfaces are used to connect to the Granite Core:

 From the SteelHead EX Management Console, choose EX Features > Granite: Granite Storage.

 Select Add Interface.

Figure 6-14. Granite Edge Interfaces

2. Configure preferred interfaces for connecting to Granite Edge on Granite Core:

 From the Granite Core Management Console, choose Configure > Manage: Granite Edges.

 Select Show Preferred Interfaces for Granite Edge Connections.

 Select Add Interface.

Figure 6-15. Adding Granite Core Interfaces

On first connection, the Granite Core sends all the preferred interface information to the Granite Edge. The Granite Edge uses this information, along with the configured local interfaces, to attempt a connection on each link (local-interface and preferred-interface pair) until a successful connection is formed. After the Granite Core and the Granite Edge have established a successful alternative link, the Granite Edge updates its Rdisk configuration accordingly, so that the Rdisk connections use the same link as the management channel between Granite Core and Granite Edge.


3. Remove the local interface for WAN redundancy on Granite Edge:

 From the SteelHead EX Management Console, choose EX Features > Granite: Granite Storage.

 Open the interface you want to remove.

 Click Remove Interface.

4. Remove preferred interfaces for WAN redundancy on Granite Core:

 From the Granite Core Management Console, choose Configure > Manage: Granite Edges.

 Select Show Preferred Interfaces for Granite Edge Connections.

 Open the interface you want to remove.

 Delete the interface.

Any change in the preferred interfaces on the Granite Core is communicated to the Granite Edge, and the connection is updated as needed.

Configuring WAN Redundancy in an HA Environment

In a Granite Core HA environment, the primary Granite Core sends the preferred interface information of the failover Granite Core to the Granite Edge. The connection between the Granite Edge and the failover Granite Core follows the same logic, in which a connection is tried on each link (local-interface and preferred-interface pair) until a connection is formed.



CHAPTER 7 Snapshots and Data Protection

This chapter describes how Granite Core integrates with the snapshot capabilities of the storage array, enabling you to configure application-consistent snapshots through the Granite Core Management Console. It includes the following sections:

 “Setting Up Application-Consistent Snapshots” on page 85

 “Configuring Snapshots for LUNs” on page 86

 “Volume Snapshot Service Support” on page 87

 “Implementing Riverbed Host Tools for Snapshot Support” on page 87

 “Configuring the Proxy Host for Backup” on page 89

 “Configuring the Storage Array for Proxy Backup” on page 89

 “Data Protection” on page 90

 “Data Recovery” on page 91

 “Branch Recovery” on page 92

For details about storage qualified for native snapshot and backup support, see the Granite Core Installation and Configuration Guide.

Setting Up Application-Consistent Snapshots

This section describes the general process for setting up snapshots. Granite Core integrates with the snapshot capabilities of the storage array. You can configure snapshot settings, schedules, and hosts directly through the Granite Core Management Console or CLI. For a description of application consistency and crash consistency, see “Understanding Crash Consistency and Application Consistency” on page 12.


To set up snapshots

1. Define the storage array details for the snapshot configuration. Before you can configure snapshot schedules, application-consistent snapshots, or proxy backup servers for specific LUNs, you must provide Granite Core with the details of the storage array, such as IP address, type, protocol, and so on. The storage driver does not remap any blocks; the remapping takes place within the array. To access these configuration settings:

 In the Granite Core Management Console, choose Configure > Backups: Snapshots to open the Snapshots page.

 In the Granite Core CLI, use the storage snapshot policy modify commands.

2. Define snapshot schedule policies. You define snapshot schedules as policies that you can apply later to snapshot configurations for specific LUNs. After a policy is applied, snapshots are automatically taken based on the parameters set by the snapshot schedule policy. Snapshot schedule policies can specify weekly, daily, or day-specific schedules. Additionally, you can specify the total number of snapshots to retain. To access snapshot schedule policy configuration settings:

 In the Granite Core Management Console, choose Configure > Backups: Snapshots to open the Snapshots page.

 In the Granite Core CLI, use the storage snapshot policy modify commands.

3. Define snapshot host credentials. You define snapshot host settings as storage host credentials that you can apply later to snapshot configurations for specific LUNs. To access snapshot host credential configuration settings:

 In the Granite Core Management Console, choose Configure > Backups: Handoff Host to open the Storage Snapshots page.

 In the Granite Core CLI, use the storage host-info commands.

For details about CLI commands, see the Riverbed Granite Core Command-Line Interface Reference Manual. For details about using the Granite Core Management Console, see the Granite Core Management Console User’s Guide.

Configuring Snapshots for LUNs

This section describes the general steps for applying specific snapshot configurations to LUNs through Granite Core. For information about configuring LUNs, see “Configuring LUNs” on page 17.


To apply snapshot configurations to a LUN

1. Select the LUN for the snapshot and access the snapshot settings. You can access the snapshot settings for a specific LUN in the Configure > Manage: LUNs page. Select the desired LUN to display controls that include the Snapshots tab. The Snapshots tab itself has three tabs: Configuration, Scheduler, and History.

2. Apply a snapshot schedule policy to the current LUN. The controls in the Scheduler tab enable you to apply a previously configured policy to the current LUN. You can create a new schedule directly in this panel.

3. Specify the storage array where the LUN resides. The controls in the Configuration tab enable you to specify the storage array where the current LUN resides and to apply a static name that is prepended to the names of snapshots.

4. Specify the client type. The controls in the Configuration tab enable you to specify the client type. To configure application-consistent snapshots and a proxy backup, you must set this value to Windows or VMware.

5. Enable and configure application-consistent snapshots. The controls in the Configuration tab let you enable and configure application-consistent snapshots. The settings vary depending on which client type is selected.

Volume Snapshot Service Support

Riverbed supports Volume Snapshot Service (VSS) through the Riverbed Hardware Snapshot Provider (RHSP) and Riverbed Snapshot Agent. For details, see “Implementing Riverbed Host Tools for Snapshot Support” on page 87.

Implementing Riverbed Host Tools for Snapshot Support

Riverbed Host Tools are installed and implemented separately on the branch office Windows Server. The toolkit provides the following services:

 RHSP - Functions as a snapshot provider for the VSS by exposing Granite Core snapshot capabilities to the Windows Server. RHSP ensures that users get an application-consistent snapshot.

 Riverbed Snapshot Agent - A service that enables the Granite Edge appliance to drive snapshots on a schedule. This schedule is set through the Granite Core snapshot configuration. For details, see the Granite Core Management Console User’s Guide.

Riverbed Host Tools support 64-bit editions of Microsoft Windows Server 2008 or later.


This section includes the following topics:

 “Overview of RHSP and VSS” on page 88

 “Riverbed Host Tools Operation and Configuration” on page 88

Overview of RHSP and VSS

RHSP exposes Granite Edge through iSCSI to the Windows Server as a snapshot provider. Only use RHSP when an iSCSI LUN is mounted on Windows through the Windows initiator. The process begins when you (or a backup script) request a snapshot through the VSS on the Windows Server:

1. VSS directs all backup-aware applications to flush their I/O operations and to freeze.

2. VSS directs RHSP to take a snapshot.

3. RHSP forwards the command to the Granite Edge.

4. Granite Edge exposes a snapshot to the Windows Server.

5. Granite Edge and Granite Core commit all pending write operations to the storage array.

6. The Granite Edge takes the snapshot against the storage array.

Note: The default port through which the Windows Server communicates with Granite Edge is 4000.
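You can drive this process from a backup script by using the Windows DiskShadow utility. The following script is a minimal sketch only: the drive letter, alias, and file name are illustrative assumptions, and VSS selects the snapshot provider (RHSP, for volumes that reside on a Granite iSCSI LUN); verify provider registration with the list providers command described later in this chapter.

# snapshot.dsh - request an application-consistent snapshot through VSS
SET CONTEXT PERSISTENT
ADD VOLUME E: ALIAS GraniteData
CREATE
LIST SHADOWS ALL

Run the script from an elevated command prompt with diskshadow /s snapshot.dsh.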

Riverbed Host Tools Operation and Configuration

Installing and configuring Riverbed Host Tools requires configuration on both the Windows Server and the Granite Core.

To configure Granite Core

1. In the Granite Core Management Console, choose Configure > Backups: Snapshots to configure snapshots.

2. In the Granite Core Management Console, choose Configure > Storage Array: iSCSI, Initiators, MPIO to configure iSCSI with the necessary storage array credentials. The credentials must reflect a user account on the storage array appliance that has permissions to take and expose snapshots. For details about both steps, see the Granite Core Management Console User’s Guide.

To install and configure Riverbed Host Tools

1. Obtain the installer package from Riverbed.

2. Run the installer on your Windows Server.


3. Confirm the installation as follows:

 From the Start menu, choose Run....

 At the command prompt, enter diskshadow to access the Windows DiskShadow interactive shell.

 In the DiskShadow shell, enter list providers.

 Confirm that RHSP is among the providers returned, as in the example session that follows.
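The following is an example of what the verification might look like. The provider ID and the exact output text vary by version and are shown here only for illustration:

C:\> diskshadow
DISKSHADOW> list providers
* ProviderID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Type: [3] VSS_PROV_HARDWARE
        Name: Riverbed Hardware Snapshot Provider
        ...
DISKSHADOW> exit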

Configuring the Proxy Host for Backup

This section describes the general procedures for configuring the proxy host for backup in both ESXi and Windows environments.

To configure an ESXi proxy host

 Configure the ESXi proxy host to connect to the storage array using iSCSI or Fibre Channel.

To configure a Windows proxy host

1. Configure the Windows proxy host to connect to the storage array using iSCSI or Fibre Channel.

2. Configure a local user that has administrator privileges on the Windows proxy host. For the account to have administrator privileges, create the following registry setting (see the example after this procedure):
– Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
– Key Type: DWORD
– Key Name: LocalAccountTokenFilterPolicy
– Key Value: 1

3. Disable the Automount feature through the DiskPart command interpreter:
automount disable

4. Add the storage array target to the favorites list on the proxy host to ensure that the iSCSI connection is reestablished when the proxy host is rebooted.
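If you prefer to script steps 2 and 3, you can use standard Windows command-line tools. The following is a sketch only; it uses the registry path and value from the procedure above and must be run from an elevated command prompt:

REM Step 2 - create the LocalAccountTokenFilterPolicy value
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

REM Step 3 - disable the Automount feature through DiskPart
diskpart
DISKPART> automount disable
DISKPART> exit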

Configuring the Storage Array for Proxy Backup

This section describes the general processes for configuring Dell EqualLogic, EMC CLARiiON, EMC VNX, and NetApp storage arrays for backup. For details, see also the Granite Solution Guide - Granite with NetApp Storage Systems, the Granite Solution Guide - Granite with EqualLogic Storage Arrays, and the Granite Solution Guide - Granite with EMC CLARiiON Storage Systems.

To configure Dell EqualLogic

1. Go to the LUN and select the Access Tab.


2. Add permissions for the Granite Core initiator/IP address and assign volume-only access.

3. Add permissions for the proxy host for the LUN and assign snapshots-only access. For details, see also the Granite Solution Guide - Granite with EqualLogic Storage Arrays.

To configure EMC CLARiiON and EMC VNX

1. Create a storage group.

2. Assign the proxy servers to the group. Riverbed recommends that you provide the storage group information to the proxy host storage group. For details, see also the Granite Solution Guide - Granite with EMC CLARiiON Storage Systems.

To configure NetApp

1. Create an initiator group.

2. Assign the proxy servers to the group. Riverbed recommends that you provide the initiator group information to the proxy host. For details, see also the Granite Solution Guide - Granite with NetApp Storage Systems.

Data Protection

The Granite system provides tools to preserve or enhance your existing data protection strategies. If you are currently using host-based backup agents or host-based consolidated backups at the branch, you can continue to do so within the Granite context. However, Granite Core also enables a wider range of strategies, including:

 Backing up from a crash-consistent LUN snapshot at the data center - The Granite product family continuously synchronizes the data created at the branch with the data center LUN. As a result, you can use the storage array at the data center to take snapshots of the LUN and thereby avoid unnecessary data transfers across the WAN. These snapshots can be protected either through the storage array replication software or by mounting the snapshot into a backup server. Such backups are only crash consistent because the storage array at the data center does not instruct the applications running on the branch server to quiesce their I/Os and flush their buffers before taking the snapshot. As a result, such a snapshot might not contain all the data written by the branch server up to the time of the snapshot.

 Backing up from an application-consistent LUN snapshot at the data center - This option uses the Granite Microsoft VSS integration in conjunction with Granite Core storage array snapshot support. You can trigger VSS snapshots on the iSCSI data drive of your branch Windows servers, and Granite Edge ensures that all data is flushed to the data center LUN and triggers application-consistent snapshots on the data center storage array. As a result, backups are application consistent because the Microsoft VSS infrastructure has informed the applications to quiesce their I/Os before taking the snapshot.


This option requires the installation of the Riverbed Host Tools on the branch Windows server. For details about Riverbed Host Tools, see “Implementing Riverbed Host Tools for Snapshot Support” on page 87.

 Backing up from Granite-assisted consolidated snapshots at the data center - This option relieves backup load on virtual servers, prevents the unnecessary transfer of backup data across the WAN, produces application-consistent backups, and backs up multiple virtual servers simultaneously over VMFS or NTFS. In this option, the ESX server, and subsequently Granite Core, takes the snapshot, which is stored on a separately configured proxy server. The ESX server flushes the virtual machine buffers to the data stores and the Granite Edge appliance flushes the data to the data center LUN, resulting in application-consistent snapshots on the data center storage array. You must separately configure the proxy server and storage array for backup. For details, see “Configuring the Proxy Host for Backup” on page 89. This option does not require the installation of the Riverbed Host Tools on the branch Windows server. However, you must create a script that triggers first ESX-based snapshots and then Granite Edge snapshots (see the sketch after this list). For details about data protection and backup strategies, as well as a detailed discussion of crash-consistent and application-consistent snapshots, see the Granite Data Protection and Recovery Guide. For a discussion of application consistency and crash consistency in general, see “Understanding Crash Consistency and Application Consistency” on page 12.
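The ESX-side half of such a script can be driven with vim-cmd from the ESXi shell. The following is a minimal sketch only: it assumes shell or SSH access to the ESXi host, the VM ID and snapshot name are illustrative, and the final step is a placeholder because the method for triggering the Granite Edge snapshot depends on your environment (see the Granite Data Protection and Recovery Guide).

# List registered VMs to find the VM ID (first column of the output)
vim-cmd vmsvc/getallvms

# Take a quiesced, application-consistent snapshot of VM ID 12
# (arguments: vmid, name, description, includeMemory, quiesced)
vim-cmd vmsvc/snapshot.create 12 granite-backup pre-backup 0 1

# Placeholder: trigger the Granite Edge snapshot here, using the method
# appropriate to your deployment, and remove the VM snapshot afterward:
# vim-cmd vmsvc/snapshot.removeall 12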

Data Recovery

In the event your data protection strategy fails, the Granite product family enables several strategies for file-level recovery. The recovery approach depends on the protection strategy you used. This section describes the following strategies:

 File recovery from Granite Edge snapshots at the branch - When snapshots are taken at the branch using Windows VSS in conjunction with RHSP, each snapshot is available to the Windows host as a separate drive. To recover a file, browse to the drive associated with the desired snapshot, locate the file, and restore it. For more information about RHSP, see “Implementing Riverbed Host Tools for Snapshot Support” on page 87.

 File recovery from the backup application catalog file at the branch - When backups are taken at the branch using a backup application such as Symantec® NetBackup™ or IBM Tivoli® Storage Manager, you access and restore files directly from the backup server. Riverbed recommends that you restore the files to a different location in case you still need to refer to the current files.

 Recover individual files from a data center snapshot (VMDK files) - To recover individual files from a storage array snapshot of a LUN containing virtual disk (VMDK) files, present the snapshot to a VMware ESX server and attach the VMDK to an existing VM running the same operating system (or an operating system that can read the file system used inside the VMDKs in question). You can then browse the file system to retrieve the files stored inside the VM.


 Recover individual files from a data center snapshot (individual files) - To recover individual files from a storage array snapshot of a LUN containing individual files, present the snapshot to a server running the same operating system (or an operating system that reads the file system used on the LUN). You can then browse the file system to retrieve the files.

 File recovery from a backup application at the data center - You can back up snapshots taken at the data center with a backup application or through Network Data Management Protocol (NDMP) dumps. In this case, file recovery remains unchanged from the existing workflow. Use the backup application to restore the file. You can send the file to the branch office location. Alternatively, you can take the LUN offline from the branch office and inject the file directly into the LUN at the data center. However, Riverbed does not recommend this procedure because it requires taking the entire LUN down for the duration of the procedure.

 File recovery from Windows VSS at the branch - You can enable Windows VSS and previous versions at the branch on a Granite LUN regardless of which main backup option you implement. When using Windows VSS, you can directly access the drive, navigate to the Previous Versions tab, and recover deleted, damaged, or overwritten files. Windows uses its default VSS software provider to back up the previous 64 versions of each file. In addition to restoring individual files to a previous version, VSS also provides the ability to restore an entire volume. Setting up this recovery strategy requires considerations too numerous to detail here. For more details about recovery strategies, see the Granite Data Protection and Recovery Guide.

Branch Recovery

Granite v3.0 and later includes the branch recovery feature that allows you to define the working data set of LUNs projected by Granite Core. During a catastrophic and irrecoverable failure, you can lose access to the working set of LUNs. Branch recovery enables proactive prepopulation of the working set when the LUNs are restored. This section includes the following topics:

 “Overview of Branch Recovery” on page 92

 “How Branch Recovery Works” on page 93

 “Branch Recovery Configuration” on page 93

 “Branch Recovery CLI Configuration Example” on page 94

Overview of Branch Recovery

The branch recovery feature in Granite v3.0 and later enables you to track disk-accesses for both Windows LUNs and VMFS LUNs (hosting Windows VMs) and quickly recover from a catastrophic failure. During normal operations, the Granite Edge caches the relevant and recently accessed user data on a working set of projected LUNs.


In the event of a catastrophic failure in which you cannot recover the Granite Edge, the working set of projected LUNs is also lost. With branch recovery enabled, the working set is proactively streamed to the branch when a new Granite Edge is installed and the LUNs are mapped. Branch recovery ensures that after the branch is recovered, the user experience at the branch does not change.

Do not confuse regular prefetch and intelligent prepopulation with branch recovery. Branch recovery prepopulates the working set proactively, as opposed to pulling related data on access, as regular prefetch does. Branch recovery also differs from intelligent prepopulation because intelligent prepopulation pushes all the used blocks in a volume with no regard to the actual working set.

Branch recovery is based on Event Tracing for Windows (ETW), a kernel-level facility. Riverbed supports only Windows 7, Windows 2008 R2, Windows 2012, and Windows 2012 R2. Branch recovery is supported for both physical Windows hosts and Windows VMs. You must format physical Windows host LUNs with NTFS. For VMs, NTFS-formatted VMDKs hosted on VMware VMFS LUNs are supported.

How Branch Recovery Works

The following are the major components of branch recovery:

 Branch recovery agent

 Branch recovery support on Granite Core

The branch recovery agent is a service that runs in the branch on a Windows host or VM. The agent uses Windows ETW-provided statistics to collect and log all disk access I/O. Periodically, the collected statistics are written to a file that is stored on the same disk for which the statistics are collected. The file is called lru*.log and is located in the \Riverbed\BranchRecovery\ directory.

Note: The Granite Turbo Boot plugin is not compatible with the branch recovery agent. For more information about the branch recovery agent, see the Granite Core Management Console User’s Guide.

You must enable branch recovery support for the LUN prior to mapping LUNs to the new Granite Edge. When a VMFS LUN or a snapshot is mapped to a new Granite Edge, the Granite Core crawls the LUN and parses all the lru*.log files. If files previously created by a branch recovery agent are found, the Granite Core pushes the referenced blocks to the new Granite Edge. The branch recovery agent sends the most recently accessed blocks first, sequentially for each VM. When data for a certain time frame (hour, day, week, or month) is recovered for one VM, the process moves on to the next VM in round-robin fashion, providing equal recovery resources to all VMs.
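To confirm that the agent is collecting statistics on a given Windows host, you can check for the log files directly. This check is illustrative only; the drive letter depends on which disk is being monitored:

C:\> dir D:\Riverbed\BranchRecovery\lru*.log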

Branch Recovery Configuration

You must install the branch recovery agent on each VM for which you want the benefit of the branch recovery feature. You must have administrative privileges to perform the installation. After the installation, the agent starts to monitor I/O operations and records the activity into designated files.

When recovering a branch from a disaster, you must enable branch recovery for the LUN. For VMFS LUNs, you can enable branch recovery for all the VMs on the LUN, or pick and choose specific VMs on which you want the feature enabled. You must disable the branch recovery feature prior to making configuration changes. If you want to add or remove any VMs from the configuration, follow these steps:

1. Disable branch recovery.


2. Make changes.

3. Enable branch recovery.

You can choose a start time for branch recovery. This option enables you to control bandwidth usage and to choose the best time to start the recovery when you restore branches. For example, you can choose to schedule the recovery during the night, when the least amount of bandwidth is being used. In addition, you can set a cap (that is, a percentage of the total disk size) for the amount of data that is pushed per (virtual) disk.

You can configure branch recovery in the Management Console and in the CLI. In the Management Console, choose Configure > Manage: LUNs, and select the Branch Recovery tab on the desired LUN. For more information about configuring branch recovery, see the Granite Core Management Console User’s Guide and the Riverbed Granite Core Command-Line Interface Reference Manual.

Figure 7-1. Branch Recovery Tab on the LUNs Page

Branch Recovery CLI Configuration Example

The following example shows how to use the CLI to configure branch recovery on Granite Core. The first output shows a LUN that is not configured with branch recovery. The example then shows how to start a schedule (with output), how to configure specific VMs, how to enable branch recovery, and output for a successfully recovered LUN.

The following output is from a new VMFS LUN, with branch recovery not enabled:

# show storage lun alias 200GB branch-recovery
Branch Recovery configuration :
  Enabled    : no
  Status     : Not Enabled
  Phase      : Not Enabled
  Progress   : Not Enabled
  Start date : Not Configured
  Start time : Not Configured
#


Use the following command to start a branch recovery schedule:

# conf terminal
(config)# storage lun modify alias 200GB branch-recovery schedule start-now

The output from the VMFS LUN now has a started schedule, but branch recovery remains disabled:

# show storage lun alias alias-vmfs_lun branch-recovery
Branch Recovery configuration :
  Enabled    : no
  Status     : not_started
  Progress   : 0 Bytes pushed
  Start date : 2014/03/14
  Start time : 15:01:16
#

The output does not list any VMs. If you have not defined them, all VMs are added by default. If you want to enable branch recovery for specific VMs on a specific LUN, use the following command:

(config) # storage lun modify alias 200GB branch-recovery add-vm oak-sh486-vm1

Note: VM names are discovered by prefetch and available for automatic completion. The default cap is set to 10. You can change the default with the storage lun modify alias 200GB branch-recovery add-vm oak-sh486 cap 50 command.

Use the following command to show the result of configuring specific VMs:

# show storage lun alias 200GB branch-recovery
Branch Recovery configuration :
  Enabled    : no
  Status     : not_started
  Phase      : not_started
  Progress   : 0 Bytes pushed
  Start date : 2014/02/20
  Start time : 10:32:59
  VMs :
    Name                 : oak-sh486-vm1
    Status               : Not Started
    Cap                  : 10 %
    Percent Complete     : Branch recovery not started or not enabled on VM
    Recovering data from : Branch recovery not started or not enabled on VM
#

When you are done configuring, scheduling, and adding the VMs, you can enable branch recovery for the LUNs by using the following command:

(config) # storage lun modify alias 200GB branch-recovery enable

Notice that with branch recovery enabled, data blocks are actively being restored to the LUN. Use the following command to check the status of the recovery on a LUN:

# show storage lun alias 200GB branch-recovery
Branch Recovery configuration :
  Enabled    : yes
  Status     : started
  Phase      : day
  Progress   : 3729920 Bytes pushed
  Start date : 2014/02/20
  Start time : 10:32:59
  VMs :
    Name                 : oak-sh486
    Status               : In-progress
    Cap                  : 10 %
    Percent Complete     : 9 %
    Recovering data from : Mon Feb 19 15:25 2014
#

When the recovery of the LUN is complete, you see the following output:

# show storage lun alias 200GB branch-recovery
Branch Recovery configuration :
  Enabled    : yes
  Status     : complete
  Progress   : complete
  Start date : 2014/02/20
  Start time : 10:32:59
  VMs :
    Name                 : oak-sh486-vm1
    Status               : Complete
    Cap                  : 10 %
    Percent Complete     : 100 %
    Recovering data from : N/A


CHAPTER 8 Security and Data Resilience

The chapter describes security and data resilience deployment procedures and design considerations. It contains the following sections:

 “Recovering a Single Granite Core” on page 97

 “Granite Edge Replacement” on page 99

 “Disaster Recovery Scenarios” on page 100

 “Best Practice for LUN Snapshot Rollback” on page 103

 “At-Rest and In-Flight Data Security” on page 104

Recovering a Single Granite Core

If you decide you want to deploy only a single Granite Core, read this section to minimize downtime and data loss when recovering from a Granite Core failure. This section includes the following topics:

 “Recovering a Single Physical Granite Core” on page 97

 “Recovering a Single Core VE” on page 98

Caution: Riverbed strongly recommends that you deploy Granite Core as an HA pair so that in the event of a failure, you can seamlessly continue operations. Both physical and virtual Granite Core HA deployments provide a fully automated failover without end-user impact. For more information about HA, see “Granite Appliance High-Availability Deployment” on page 61.

Recovering a Single Physical Granite Core

The Granite Core internal configuration file is crucial to rebuilding your environment in the event of a failure. The possible configuration file recovery scenarios are as follows:

 Up-to-date Granite Core configuration file is available on an external server - When you replace the failed Granite Core with a new Granite Core, you can import the latest configuration file to resume operations. The Granite Edges reconnect to the Granite Core and start replicating the new writes that were created after the Granite Core failed. In this scenario, you do not need to perform any additional configuration and there is no data loss on the Granite Core and the Granite Edge.


Riverbed recommends that you frequently back up the Granite Core configuration file. For details about the backup and restore procedures for device configurations, see the SteelCentral Controller for SteelHead User’s Guide.

Use the following CLI commands to export the configuration file (a scheduling sketch appears after this list):

enable
configure terminal
configuration bulk export scp://username:password@server/path/to/config

Use the following CLI commands to replace the configuration file:

enable
configure terminal
no service enable
configuration bulk import scp://username:password@server/path/to/config
service enable

 Granite Core configuration file is available but it is not up to date - If you do not regularly back up the Granite Core configuration file, you can be missing the latest information. When you import the configuration file, you retain the configuration only as of the last export. In other words, any Granite Edges and LUNs that were added to the configuration after the last export are lost from it. You must manually add the components of the environment that were added after the configuration file was exported.

 No Granite Core configuration file is available - This is the worst-case scenario. In this case, you need to build a new Granite Core and reconfigure all Granite Edges as if they were new. All data in the Granite Edges is invalidated, and new writes to Granite LUNs after the Granite Core failure are lost. There is no data loss at the Granite Core. If applications running at the Granite Edge cannot handle the loss of the most recent data, they need to be recovered from an application-consistent snapshot and backup from the data center. For more instructions on how to export and import the configuration file, see “Granite Core Configuration Export” on page 129 and “Granite Core in HA Configuration Replacement” on page 129. For general information about the configuration file, see the Granite Core Management Console User’s Guide.
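One way to keep the exported configuration file current is to schedule the export shown in the first scenario above. If your Granite Core release supports the recurring job facility used in “Time-Based QoS Rules Example” on page 114, a nightly export could be scheduled on the appliance itself. The following is a sketch only: whether the job commands are available on Granite Core, and their exact argument forms, are assumptions that you must verify for your release (or with Riverbed Support), and the job ID and destination URL are illustrative.

enable
configure terminal
job 10 command 1 "configuration bulk export scp://username:password@server/path/to/config"
job 10 recurring 86400
job 10 enable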

Recovering a Single Core VE

The following recommendations help you recover from potential failures and disasters and minimize data loss in a Core VE and Granite Edges.

 Configure the Core VE with iSCSI and use VMware HA - VMware HA is a component of the vSphere platform that provides high availability for applications running in virtual machines. In the event of a physical server failure, affected virtual machines are automatically restarted on other production servers. If you configure VMware HA for the Core VE, you have an automated failover for the single Core VE. You must be using iSCSI, not Fibre Channel RDM disks.

 Continually back up the Core VE configuration file to an external shared storage - See the scenarios described in “Recovering a Single Physical Granite Core” on page 97.

Note: Core VE is not compatible with VMware Fault Tolerance (FT).


Granite Edge Replacement

In the event of a catastrophic failure, you might need to replace the Granite Edge hardware and remap the LUNs. It is usually impossible to properly shut down a Granite Edge LUN and bring it offline, because doing so requires the Granite Edge to commit all of its pending writes (for the LUN) to the Granite Core. If the Granite Edge has failed and you cannot successfully bring the LUN offline, you need to manually remove the LUN. The block store is a part of the Granite Edge, and if you replace the Granite Edge, the cached data on the failed block store is discarded. To protect the Granite Edge against a single point of failure, consider an HA deployment of Granite Edge. For more information, see “Granite Edge High Availability” on page 73. Use the following procedure for a Granite Edge disaster recovery scenario in which there is an unexpected Granite Edge or remote site failure.

To replace Granite Edge

1. Schedule time that is convenient to be offline (if possible).

2. On the Granite Core, enter the following CLI command:
edge modify id clear-serial

3. On the replacement Granite Edge, connect to the Granite Core using the same Granite Edge identifier value as the failed Granite Edge. The LUNs now project to the replacement Granite Edge.

Note: If you run the clear-serial command on the Granite Core for a Granite Edge (or Granite Edges in HA) that is still functional, the Granite Core rejects further connections from that Granite Edge, and the block store overpopulates. As a precaution, Riverbed recommends that you perform a system dump before using the command. The system dump can help you recover some of the configuration information affected by the overpopulation.

You can lose data on the LUNs when writes to the Granite Edge are not committed to the Granite Core. In the case of minimal data loss, it is possible that you can easily recover the LUNs from a crash-consistent state, such as with a file system check. However, this depends on the type of applications that were using the LUNs. If you have concerns about the data consistency, Riverbed recommends that you roll back the LUN to the latest application-consistent snapshot. For details, see “Best Practice for LUN Snapshot Rollback” on page 103.


Disaster Recovery Scenarios

This section describes basic Granite appliance disaster scenarios, and includes general recovery recommendations. It includes the following topics:

 “Granite Appliance Failure—Failover” on page 100

 “Granite Appliance Failure—Failback” on page 101

Keep in mind the following definitions:

 Failover - to switch to a redundant computer server, storage, and network upon the failure or abnormal termination of the production server, storage, hardware component, or network.

 Failback - the process of restoring a system, component, or service previously in a state of failure back to its original, working state.

 Production site - the site in which applications, systems, and storage are originally designed and configured. Also known as the primary site.

 Disaster recovery site - the site that is set up in preparation for a disaster. Also known as the secondary site.

Granite Appliance Failure—Failover

In the case of a failure or a disaster affecting the entire site, Riverbed recommends you take the following considerations into account. The exact process depends on the storage array and other environment specifics. You must create thorough documentation of the disaster recovery plan for successful recovery implementation. Riverbed recommends that you perform regular testing so that the information in the plan is maintained and up to date. This section includes the following topics:

 “Data Center Failover” on page 100

 “Branch Office Failover” on page 101

Data Center Failover

In the event that an entire data center experiences a failure or disaster, you can restore Granite Core operations assuming you have met the following prerequisites:

 The disaster recovery site has the storage array replicated from the production site.

 The network infrastructure is configured on the disaster recovery site similarly to the production site, enabling the Granite Edges to communicate with Granite Core.

 Granite Core and SteelHeads (or their virtual editions) at the disaster recovery site are installed, licensed, and configured similarly to the production site.

 Ideally, the Granite Core at the disaster recovery site is configured identically to the Granite Core on the production site. You can import the configuration file from Granite Core at the production site to ensure that you have configured both Granite Cores the same way. Unless the disaster recovery site is designed to be an exact replica of the production site, minor differences are inevitable: for example, the IP addresses of the Granite Core, the storage array, and so on. Riverbed recommends that you regularly replicate the Granite Core configuration file to the disaster recovery site and import it into the disaster recovery instance. You can script the necessary adjustments to the configuration to automate the configuration adoption process.


Likewise, the configuration of SteelHeads in the disaster recovery site should reflect the latest changes to the configuration in the production site. All the relevant in-path rules must be maintained and kept up to date. There are some limitations:

 If you have different LUN IDs in the disaster recovery site than in the production site, you need to reconfigure the Granite Core and all the Granite Edges and deploy them as new. You must know which LUNs belong to which Granite Edge and map them correspondingly. Riverbed recommends that you implement a naming convention.

 Unless the data from the production storage array is replicated in synchronous mode, you can assume that there is data already committed to the production storage that has not yet been replicated to the disaster recovery site. This means that a gap in data consistency can occur if, after the failover, the Granite Edges immediately start writing to the disaster recovery Granite Core. To prevent data corruption, you need to configure all the LUNs at the Granite Edges as new. When you configure Granite Edges as new, this configuration empties out their block stores, causing the loss of all writes that occurred after the disaster at the production site.

 If you want data consistency on the application level, Riverbed recommends that you perform a rollback to one of the previous snapshots. For details, see “Best Practice for LUN Snapshot Rollback” on page 103.

 Keep in mind that initially after the recovery, the block store on Granite Edges does not have any data in the cache.

Branch Office Failover

When the Granite Edge in a branch becomes inaccessible from outside the branch office due to a network outage, operation in the branch office might continue. Granite products are designed with disconnected-operations resiliency in mind. If your workflow enables branch office users to operate independently for a period of time (which is defined during the network planning stage and implemented with a correctly sized appliance), the branch office remains operational and synchronizes with the data center later.

If the branch office is completely lost, or it is imperative for the business to have the branch office service online sooner, you can choose to deploy the Granite Edge in another branch or in the data center. If you choose to deploy a Granite Edge in the data center, Riverbed recommends that you remove the LUNs from the Granite Core so that multiple write access to the LUNs cannot cause data corruption. Riverbed recommends that you roll back to the latest application-consistent snapshot. If mostly read access is required to the data projected to the branch office, a good alternative is to temporarily mount a snapshot to a local host. This snapshot makes the data accessible at the data center while the branch office is operating in disconnected-operation mode. Avoiding the failover also simplifies failback to the production site.

If you choose to deploy the Granite Edge in another branch office, follow the steps in “Granite Edge Replacement” on page 99. You must understand that in this scenario, all the uncommitted writes at the branch are lost. Riverbed recommends that you roll back the LUNs to the latest application-consistent snapshot.

Granite Appliance Failure—Failback

After a disaster is over, or a failure is fixed, you might need to revert the changes and move the data and computing resources to where they were located before the disaster, while ensuring that the data integrity is not compromised. This is called failback. Unlike the failover process that can occur in a rush, you can thoroughly plan and test the failback process.


This section includes the following topics:

 “Data Center Failback” on page 102

 “Branch Office Failback” on page 102

Data Center Failback

Because Granite relies on primary storage to keep the data intact, the Granite Core failback can only follow a successful storage array replication from the disaster recovery site back to the production site. There are multiple ways to perform the recovery; however, Riverbed recommends that you use the following method. The process most likely requires downtime, which you can schedule in advance. Riverbed also recommends that you create an application-consistent snapshot and backup prior to performing the following procedure. Perform these steps on one appliance at a time.

To perform the Granite Core failback process

1. Shut down hosts and unmount LUNs.

2. Export the configuration file from the Granite Core at the disaster recovery site.

3. From the Granite Core, initiate taking the Granite Edge offline. This process forces the Granite Edge to replicate all the committed writes to the Granite Core.

4. Remove iSCSI Initiator access from the LUN at the Granite Core. Note that no data can be written to the LUN until the failback completes and the data becomes available again.

5. Make sure that you replicate the LUN with the storage array from the disaster recovery site back to the production site.

6. On a storage array at the production site, make the replicated LUN the primary LUN. Depending on the storage array, you might need to create a snapshot, clone, or promote the clone to a LUN, or all of the above. For more information, see the user guide for your storage array. The preferred method is the one that preserves the LUN ID, which might not work for all arrays. If the LUN ID is going to change, you need to add the LUN as new, first on the Granite Core and then on the Granite Edge.

7. If you had to make changes on the disaster recovery site due to a LUN ID change, import the Granite Core configuration file from the disaster recovery site and make the necessary adjustments to IP addresses and so on.

8. Add access to the LUN for Granite Core. If the LUN ID remained the same, the Granite Core at production site begins servicing the LUN instantaneously.

9. At the branch office, check to see if you need to change the Granite Core IP address.

Branch Office Failback

The branch office failback process is similar to the Granite Edge replacement process. The procedure requires downtime that you can schedule in advance. If the production LUNs were mapped to another Granite Edge, use the following procedure.


To perform the Granite Edge failback process

1. Shut down hosts and unmount LUNs.

2. Take the LUNs offline from the disaster recovery Granite Edge. This process forces the Granite Edge to replicate all the committed writes to Granite Core.

3. If any changes were made to the LUN mapping configuration, you need to merge the changes during the failback process. If you need assistance with this process, contact Riverbed Support.

4. Shut down the Granite Edge at the disaster recovery site.

5. Bring up the Granite Edge at the production site.

6. Follow the steps described in “Granite Edge Replacement” on page 99. Keep in mind that after the failback process is completed, the block store on the Granite Edges does not have any data in the cache. If you removed the production LUNs from the Granite Core and used them locally in the data center, shut down the hosts, unmount the LUNs, and then continue the setup process as described in the Granite Core Installation and Configuration Guide.

Best Practice for LUN Snapshot Rollback

When a single file restore is impossible or impractical, you can roll back the entire LUN to a snapshot on the storage array at the data center and have it projected out to the branch. Riverbed recommends the following procedure for a LUN snapshot rollback.

Note: A single file restore recovers a deleted file from a backup or a snapshot without rolling back the entire file system to a point in time at which the file still existed. When you use a LUN rollback, everything that was written to (and deleted from) the file system after the snapshot is lost.

To roll back the LUN snapshot

1. Set the LUN offline at the server running at Granite Edge.

2. Remove iSCSI Initiator access from the LUN at the Granite Core.

3. Remove the LUN from the Granite Core.

4. Restore the LUN on the storage array from a snapshot.

5. Add the LUN to the Granite Core.

6. Add iSCSI Initiator access for the LUN at Granite Core. You can now access the LUN snapshot from a server on Granite Edge. Keep in mind that after this process is completed, the block store on Granite Edges does not have any data in the cache.


At-Rest and In-Flight Data Security

For organizations that require high levels of security or face stringent compliance requirements, Granite Edge provides data at-rest and in-flight encryption capabilities for the data blocks written on the block store cache. This section includes the following topics:

 “Enable Data At-Rest Block Store Encryption” on page 104

 “Enable Data In-Flight Secure Peering Encryption” on page 106

Supported encryption standards include AES-128, AES-192, and AES-256. The keys are maintained in an encrypted secure vault. In 2003, the United States government reviewed these three key lengths and determined that they are sufficient for the protection of classified information up to the secret level; top secret information requires 192-bit or 256-bit keys. The vault is encrypted by AES with a 256-bit key and a 16-byte cipher, and you must unlock it before the block store is available. The secure vault password is verified upon every power up of the appliance, ensuring that the data remains confidential in case the Granite Edge is lost or stolen.

Initially, the secure vault has a default password known only to the RiOS software so that the Granite Edge can automatically unlock the vault during system startup. You can change the password so that the Granite Edge does not automatically unlock the secure vault during system startup and the block store is not available until you enter the password. When the system boots, the contents of the vault are read into memory, decrypted, and mounted (through EncFS, a FUSE-based cryptographic file system). Because this information is only in memory, when an appliance is rebooted or powered off, the information is no longer available and the in-memory object disappears. Decrypted vault contents are never persisted on disk storage.

Riverbed recommends that you keep your secure vault password safe. Because there is no password recovery, your private keys cannot be compromised. In the event of a lost password, you can reset the secure vault only after erasing all the information within the secure vault.

To reset a lost password

 From the SteelHead EX, enter the following CLI commands:

> enable
# config term
(conf)# secure-vault clear

When you use the secure-vault clear command, you lose the data in the block store if it was encrypted. You then need to reload or regenerate the certificates and private keys.

Note: The Granite Edge block store encryption is the same mechanism that is used in the RiOS data store encryption. For more information, see the security information in the SteelHead Deployment Guide.

Configuring data encryption requires extra CPU resources and might affect performance. Therefore, Riverbed recommends that you enable block store encryption only if you require a high level of security or it is dictated by compliance requirements.

Enable Data At-Rest Block Store Encryption

The following example shows how to configure block store encryption on a Granite Edge. The commands are entered on the Granite Core at the data center.


To configure block store encryption on Granite Edge

1. From the Granite Core, enter the following commands:

> enable
# configure
(config) # edge id blockstore enc-type

2. To verify whether encryption has been enabled on Granite Edge, enter the following commands:

> enable
# show edge id blockstore
Write Reserve   : 10%
Encryption type : AES_256

You can do the same procedure in the Granite Core Management Console by choosing Configure > Manage: Granite Edges.

Figure 8-1. Adding Block Store Encryption


To verify whether encryption is enabled on your Granite Edge device, look at the Blockstore Encryption field on your Granite Edge status window as shown in Figure 8-2.

Figure 8-2. Verify Block Store Encryption

Enable Data In-Flight Secure Peering Encryption

The Granite Rdisk protocol operates in clear text, so there is a possibility that remote branch data can be exposed during transfer over the WAN. To counter this, the Granite Edge provides data in-flight encryption capabilities when the data blocks are asynchronously propagated to the data center LUN. You can use secure peering between the Granite Edge and the data center SteelHead to create a secure SSL channel and protect the data in-flight over the WAN. For more information about security and SSL, see the SteelHead Deployment Guide and the SteelHead Deployment Guide - Protocols.


CHAPTER 9 Granite Appliance Upgrade

This chapter provides some general guidance when upgrading your Granite appliances. It includes the following topics:

 “Planning Software Upgrades” on page 107

 “Upgrade Sequence” on page 108

 “Minimize Risk During Upgrading” on page 108

 “Performing the Upgrade” on page 109

Planning Software Upgrades

Before you perform a software upgrade to a Granite deployment, there are a few steps to consider. This section describes best practices that you can incorporate into your own upgrade and change control procedures. For detailed instructions and guidance on upgrading each of the products, see the SteelHead EX Installation and Configuration Guide and the Granite Core Installation and Configuration Guide. Prior to upgrading, complete the following prerequisites:

 Alert users - Depending on your deployment you might have a full-HA Granite configuration at the data center and at the branch office. This allows you to perform software upgrades with minimal or zero disruption to your production environment. Whether this is your case or not, Riverbed recommends that you schedule either downtime or an at risk period so that your users are aware of any possible interruption to service.

 Alert IT staff - Because you might also be using your Granite Edge appliances (SteelHead EX) simultaneously for WAN optimization services, Riverbed recommends that you alert other IT departments within your organization: for example, networking, monitoring, applications, and so on.

 Software - Gather all the relevant software images from the Riverbed Support site and consider using the SCC to store the images and assist with the upgrade. When downloading the software images, make sure to download the release notes so that you are aware of any warnings, special instructions, or known problems that can affect your upgrade plans.

 Configuration data - Ensure all your Granite Cores and Granite Edges have their current running configurations saved to a suitable location external to the appliances themselves. You can use the SCC to assist with this task.


Upgrade Sequence

If you are planning to upgrade both Granite Core and Granite Edge as part of the same procedure, then the normal sequence—in which there is no HA configuration at the Granite Core—is to upgrade the Granite Edge appliances and then upgrade the Granite Core appliances.

Note: If you are only upgrading Granite Core or Granite Edge, but not both, this section does not apply.

If there is HA at the Granite Edge and no HA at the Granite Core, the sequence is the same—Granite Edge first followed by Granite Core. However, if there is HA at the Granite Core, regardless of whether or not there is HA in the branch office with Granite Edges, upgrade the Granite Core first, followed by the Granite Edge. The following table summarizes the sequence.

Upgrade Sequence

Deployment: Granite Core - Granite Edge
  First:  All Granite Edge appliances owned by the Granite Core
  Second: Granite Core

Deployment: Granite Core HA - Granite Edge
  First:  Granite Core HA
  Second: All Granite Edge appliances owned by the Granite Core HA

Deployment: Granite Core - Granite Edge HA
  First:  All Granite Edges owned by the Granite Core; upgrade the standby Granite Edge first, wait for it to synchronize with the active Granite Edge, and then upgrade the active Granite Edge
  Second: Granite Core

Deployment: Granite Core HA - Granite Edge HA
  First:  Granite Core HA
  Second: All Granite Edges owned by the Granite Core HA; upgrade the standby Granite Edge first

If you have an HA deployment, it is possible to have mixed software versions between HA peers for a short period of time. You can also run with mismatched software versions between Granite Core and Granite Edge for short periods of time; however, Riverbed does not recommend this practice. If there are any doubts about these procedures, contact Riverbed Support.

Minimize Risk During Upgrading

Although it is expected that the upgrade procedure will progress and complete without any problems, Riverbed recommends that you have a contingency plan to back out or restore the previous state of operations.


Both Granite Core and Granite Edge upgrade procedures automatically install the new software image into a backup partition on the appliance. The existing (running) software image is in a separate (active) partition. During the reboot, which is part of the upgrade procedure, the roles of the backup and active partitions are reversed. This ensures that if you require a downgrade to restore the previous software version, a partition swap and reboot are all that should be required. If you have a lab or development environment in which some nonproduction Granite appliances are in use, consider doing a trial upgrade. This ensures that you have some exposure to the upgrade processes, enables you to measure the time taken to perform the tasks, and helps you gain any other useful experience. You can complete the trial upgrade well ahead of the production upgrade to confirm that the new version of software operates as expected.

Performing the Upgrade

This section describes the tasks involved to upgrade your Granite appliances. It contains the following sections:

 “Granite Edge Upgrade” on page 109

 “Granite Core Upgrade” on page 109

Once you are ready (and if there is no HA configuration for the Granite Core), start by upgrading the Granite Edge appliances first. After these appliances are successfully upgraded, proceed to upgrade the Granite Core appliances. If you have Granite Core deployed in an HA deployment, then upgrade the Granite Core appliances first, followed by the Granite Edge appliances.

For the proper sequence, see “Upgrade Sequence” on page 108.

Granite Edge Upgrade

Remember, Granite Edge software and functionality is incorporated into the SteelHead EX software image. When performing the upgrade, there is a reboot of the appliance, which causes an interruption or degradation of service to both Granite Edge and WAN optimization (if there is no HA). While it is not necessary to disconnect the Granite Edge from the Granite Core, Riverbed recommends that you stop all read and write operations for any VSP-hosted services and any external application servers that are using the Granite Edge for storage connectivity. Preferably, shut down the servers, and in the case of VSP, place the ESXi instance into maintenance mode. In the case of Granite Edge HA deployments, upgrade one of the Granite Edge peers first, leaving the other Granite Edge in a normal operating state. During the upgrade process, the surviving Granite Edge enters a degraded state. This is expected behavior. After the upgrade of the first Granite Edge in the HA configuration is complete, check that the two Granite Edge HA peers rediscover each other before proceeding with the upgrade of the second Granite Edge.
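For the VSP case, you can place the ESXi instance into maintenance mode from the ESXi shell before the upgrade and take it out again afterward. The following commands are a sketch only; they assume shell or SSH access to the ESXi instance and that the VSP-hosted virtual machines have already been shut down:

# Enter maintenance mode before upgrading the SteelHead EX
vim-cmd hostsvc/maintenance_mode_enter

# ...perform the upgrade and reboot...

# Exit maintenance mode after the upgrade is complete
vim-cmd hostsvc/maintenance_mode_exit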

Granite Core Upgrade

Before upgrading the Granite Core, Riverbed strongly recommends that you ensure that any data written by Granite Edge to LUNs projected by the Granite Core is synchronized to the LUNs in the data center storage array. In addition, take a snapshot of any LUNs prior to the upgrade.


If a Granite Core is part of an HA configuration with a second Granite Core, you must upgrade both Granite Cores before you upgrade the Granite Edges that they are responsible for. When upgrading a Granite Core that is not part of an HA configuration, there is an interruption to service for the LUNs projected to the Granite Edges. You do not need to disconnect the Granite Edge appliances from the Granite Core, nor do you need to unmap any required LUNs managed by the Granite Core from the storage array. When upgrading a Granite Core that is part of an HA configuration, the peer Granite Core appliance triggers an HA failover event. This is expected behavior. After the upgrade of the first Granite Core is complete, check to ensure that the two Granite Core HA peers can rediscover each other before upgrading the second Granite Core.


CHAPTER 10 Network Quality of Service

Granite technology enables remote branch offices to use storage provisioned at the data center through unreliable, low-bandwidth, high-latency WAN links. Adding this new type of traffic to the WAN links creates new considerations for guaranteeing quality of service (QoS) to existing WAN applications while enabling the Granite Rdisk protocol to function at the best possible level. This chapter contains the following topics:

 “Rdisk Protocol Overview” on page 111

 “QoS for Granite Replication Traffic” on page 113

 “QoS for LUNs” on page 113

 “QoS for Branch Offices” on page 113

 “Time-Based QoS Rules Example” on page 114

For general information about QoS, see the SteelHead Deployment Guide.

Rdisk Protocol Overview

To understand the QoS requirements for the Granite Rdisk protocol, you must understand how it works. The Rdisk protocol defines how the Granite Edge and Granite Core appliances communicate and how they transfer data blocks over the WAN. Rdisk uses five TCP ports for data transfers and one TCP port for management.


The following table summarizes the TCP ports used by the Rdisk protocol. It maps the different Rdisk operations to each TCP port.

TCP Port  Operation   Description

7970      Management  Manages information exchange between Granite Edge and Granite Core. The majority of the data flows from the Granite Core to the Granite Edge.
7950      Read        Transfers data requests for data blocks absent in the Granite Edge from the data center. The majority of the data flows from the Granite Edge to the Granite Core.
7951      Write       Transfers new data created at the Granite Edge to the data center and snapshot operations. The majority of the data flows from the Granite Edge to the Granite Core.
7952      Prefetch0   Prefetches data for which Granite has the highest confidence (for example, file Read Ahead). The majority of the data flows from the Granite Core to the Granite Edge.
7953      Prefetch1   Prefetches data for which Granite has medium confidence (for example, Boot). The majority of the data flows from the Granite Core to the Granite Edge.
7954      Prefetch2   Prefetches data for which Granite has the lowest confidence (for example, Prepop). The majority of the data flows from the Granite Core to the Granite Edge.

Note: Rdisk Protocol creates five TCP connections per exported LUN.
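If a firewall sits in the WAN path between the Granite Edge and the Granite Core, it must permit these ports. The following iptables rules are an illustrative sketch only; they assume a Linux-based firewall forwarding traffic between the branch and the data center and are not SteelHead or Granite configuration:

# Allow the Rdisk data transfer connections (read, write, prefetch0-2)
iptables -A FORWARD -p tcp --dport 7950:7954 -j ACCEPT
# Allow the Rdisk management connection
iptables -A FORWARD -p tcp --dport 7970 -j ACCEPT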

Different Rdisk operations use different TCP ports. The following table summarizes the Rdisk QoS requirements for each Rdisk operation and its respective TCP port.

For each TCP port and operation, the outgoing branch office priority and bandwidth class, and the outgoing data center priority and bandwidth class, are as follows:

7970 (Management) - Outgoing branch office priority Low, bandwidth Normal; outgoing data center priority Low, bandwidth Normal.
7950 (Read) - Outgoing branch office priority Low, bandwidth Business critical; outgoing data center priority High, bandwidth Business critical.
7951 (Write) - Outgoing branch office priority High (off-peak hours), Low (during peak hours), bandwidth Low priority; outgoing data center priority Low, bandwidth Normal.
7952 (Prefetch0) - Outgoing branch office priority Low, bandwidth Business critical; outgoing data center priority High, bandwidth Business critical.
7953 (Prefetch1) - Outgoing branch office priority Low, bandwidth Business critical; outgoing data center priority Medium, bandwidth Business critical.
7954 (Prefetch2) - Outgoing branch office priority Low, bandwidth Business critical; outgoing data center priority High, bandwidth Best effort.

For more information about Rdisk, see “Rdisk Traffic Routing Options” on page 118.


QoS for Granite Replication Traffic

To prevent Granite replication traffic from consuming bandwidth required for other applications during business hours, Riverbed recommends that you allow more bandwidth for Rdisk write traffic (port 7951) during the off-peak hours and less bandwidth during the peak hours. Also carefully consider your recovery point objectives (RPO) and recovery time objectives (RTO) when configuring QoS for Rdisk Granite traffic. Depending on which Granite features you use, you might need to consider different priorities and bandwidth requirements.

QoS for LUNs

This section contains the following topics:

 “QoS for Unpinned LUNs” on page 113

 “QoS for Pinned LUNs” on page 113

For more information about pinned LUNs, see “Pin the LUN and Prepopulate the Block Store” on page 116 and “When to PIN and Prepopulate the LUN” on page 128.

QoS for Unpinned LUNs

In an unpinned LUN scenario, Riverbed recommends that you prioritize traffic on port 7950 so that the SCSI read requests for data blocks not present on the Granite Edge block store cache can arrive from the data center LUN in a timely manner. Riverbed also recommends that you prioritize traffic on ports 7952, 7953, and 7954 so that the prefetch data can arrive at the block store when needed.

QoS for Pinned LUNs

In a pinned, prepopulated LUN scenario, all the data is present at the Granite Edge. Riverbed recommends that you prioritize only port 7951 so that the Rdisk protocol can transfer newly written data blocks from the Granite Edge block store to the data center LUN through Granite Core.

QoS for Branch Offices

This section contains the following topics:

 “QoS for Branch Offices That Mainly Read Data from the Data Center” on page 114

 “QoS for Branch Offices Booting Virtual Machines from the Data Center” on page 114


QoS for Branch Offices That Mainly Read Data from the Data Center

For branch offices whose users are not producing new data but are mainly reading data from the data center, and whose LUNs are not pinned, Riverbed recommends that you prioritize traffic on ports 7950 and 7952 so that the iSCSI read requests for data blocks not present in the Granite Edge block store cache can be served from the data center LUN in a timely manner.

QoS for Branch Offices Booting Virtual Machines from the Data Center

For branch offices whose users boot virtual machines from the data center and whose LUNs are not pinned, port 7950 remains the top priority (as for any unpinned LUN). In addition, Riverbed recommends that you prioritize traffic on port 7953 so that boot data is prefetched on this port in a timely manner.

Time-Based QoS Rules Example

This example illustrates how to configure time-based QoS rules on a SteelHead. You create two recurring jobs, each undoing the other, using the standard job CLI command: one sets the daytime cap on throughput (or a low minimum guarantee), and the other removes that cap (or sets a higher minimum guarantee).

steelhead (config) # job 1 date-time hh:mm:ss year/month/day "Start time"
steelhead (config) # job 1 recurring 86400 "Occurs once a day"
steelhead (config) # job 1 command 1
steelhead (config) # job 1 command 2 "Commands to set daytime cap"
steelhead (config) # job 1 enable

steelhead (config) # job 2 date-time hh:mm:ss year/month/day "Start time"
steelhead (config) # job 2 recurring 86400 "Occurs once a day"
steelhead (config) # job 2 command 1
steelhead (config) # job 2 command 2 "Commands to remove daytime cap"
steelhead (config) # job 2 enable


CHAPTER 11 Deployment Best Practices

Every deployment of the Granite product family differs due to variations in specific customer needs and types and sizes of IT infrastructure. The following recommendations and best practices are intended to guide you to achieving optimal performance while reducing configuration and maintenance requirements. However, these guidelines are general; for detailed worksheets for proper sizing, contact your Riverbed account team. This chapter includes the following sections:

 “Granite Edge Best Practices” on page 115

 “Granite Core Best Practices” on page 127

 “iSCSI Initiators Timeouts” on page 129

 “Operating System Patching” on page 130

Granite Edge Best Practices

This section describes best practices for deploying Granite Edge. It includes the following topics:

 “Segregate Traffic” on page 116

 “Pin the LUN and Prepopulate the Block Store” on page 116

 “Segregate Data onto Multiple LUNs” on page 116

 “Ports and Type of Traffic” on page 116

 “Changing IP Addresses on Granite Edge, ESXi Host, and Servers” on page 117

 “Configure Disk Management” on page 118

 “Rdisk Traffic Routing Options” on page 118

 “Deploying Granite with Third-Party Traffic Optimization” on page 119

 “Windows and ESX Server Storage Layout—Granite-Protected LUNs Versus Local LUNs” on page 119

 “VMFS Datastores Deployment on Granite LUNs” on page 123

 “Enable Windows Persistent Bindings for Mounted iSCSI LUNs” on page 124

 “Set Up Memory Reservation for VMs Running on VMware ESXi in the VSP” on page 125


 “Boot from an Unpinned iSCSI LUN” on page 126

 “Running Antivirus Software” on page 126

 “Running Disk Defragmentation Software” on page 126

 “Running Backup Software” on page 126

 “Configure Jumbo Frames” on page 127

Segregate Traffic

At the remote branch office, Riverbed recommends that you separate storage iSCSI traffic and WAN/Rdisk traffic from LAN traffic. This practice helps to increase overall security, minimize congestion, minimize latency, and simplify the overall configuration of your storage infrastructure.

Pin the LUN and Prepopulate the Block Store

In specific circumstances, Riverbed recommends that you pin the LUN and prepopulate the block store. Additionally, you can have the write-reserve space resized accordingly; by default, the Granite Edge has a write-reserve space that is 10 percent of the block store. To resize the write-reserve space, contact your Riverbed representative. Riverbed recommends that you pin the LUN in the following circumstances:

 Unoptimized file systems - Granite supports intelligent prefetch optimization on NTFS and VMFS file systems. For unoptimized file systems such as FAT, FAT32, ext3, and others, Riverbed recommends that you pin the LUN and prepopulate the block store.

 Database applications - If the LUN contains database applications that use raw disk file formats or proprietary file systems, Riverbed recommends that you pin the LUN and prepopulate the block store.

 WAN outages are likely or common - Ordinary operation of Granite depends on WAN connectivity between the branch office and the data center. If WAN outages are likely or common, Riverbed recommends that you pin the LUN and prepopulate the block store.

Segregate Data onto Multiple LUNs

Riverbed recommends that you separate storage into three LUNs, as follows:

 Operating system - In case of recovery, the operating system LUN can be quickly restored from the Windows installation disk or ESX datastore, depending on the type of server used in the deployment.

 Production data - The production data LUN is hosted on the Granite Edge and therefore safely backed up at the data center.

 Swap space - Data on the swap space LUN is transient and therefore not required in disaster recovery. Riverbed recommends that you use this LUN as a Granite Edge local LUN.

Ports and Type of Traffic

Allow iSCSI traffic only on the primary and auxiliary interfaces. Riverbed does not recommend that you configure external iSCSI Initiators to use the IP address configured on the in-path interface. Some appliance models can optionally support an additional NIC that provides extra network interfaces; you can also configure these interfaces to provide iSCSI connectivity.


Changing IP Addresses on Granite Edge, ESXi Host, and Servers

When you have a Granite Edge and ESXi running on the same converged platform, you must change IP addresses in a specific order to keep the task simple and fast. You can use this procedure when staging Granite Edges in the data center or when moving them from one site to another. This procedure assumes that the Granite Edges are configured with IP addresses in a staged or production environment. Test and verify all ESXi hosts, servers, and interfaces before making these changes.

To change the IP addresses on Granite Edge, ESXi host, and servers

1. Starting with the Windows server, use your vSphere client to connect to its console, log in, change the IP address to DHCP or to the new destination IP address, and then shut down the Windows server from the console.

2. Use a virtual network computing (VNC) client to connect to the ESXi console, change the IP address to the new destination IP address, and shut down ESXi from the console. If you did not configure VNC during the ESXi installation wizard, you can instead use the vSphere client and change the address from Configuration > Networking > rvbd_vswitch_pri > Properties. Some devices work better with TightVNC than with RealVNC.

3. On the SteelHead EX Management Console, choose Networking > Networking: In-Path Interfaces, and then change the IP address for inpath0_0 to the new destination IP address.

4. Use the included console cable to connect to the console port on the back of the SteelHead EX and log in as the administrator.

5. Enter the following commands to change the IP address to your new destination IP address:

enable
configure terminal
interface primary ip address 1.7.7.7 /24
ip default-gateway 1.7.7.1
write memory

6. Enter the following command to shut down the SteelHead EX:

reload halt

7. Move the Granite Edge to the new location.

8. Start your Windows server at the new location and open the iSCSI Initiator.

 Select the Discovery tab and remove the old portal.

 Click OK.

 Open the tab again and select Discover Portal.

 Add the new Granite Edge primary IP address. This process brings the original data disk back into operation.


Configure Disk Management

You can partition the disk space in the SteelHead EX in different ways based on how you want to use the appliance and which license you purchased. VSP and Granite storage mode is the default disk layout configured on the SteelHead EX during the manufacturing process. This mode divides the disk space evenly between VSP and Granite functionality.

However, if you plan to use the storage delivery capabilities (Granite feature) of the SteelHead EX, Riverbed recommends that you select the Granite storage mode disk layout. In Granite storage mode, most of the disk space is dedicated to the Granite Edge block store cache, while leaving the required amount for VSP and WAN optimization functionality. VSP can then use Granite storage for its datastores instead of local unprotected storage. This mode allows you to centralize your data center storage for both the operating system and the production data drives of the virtual servers running at the remote branch.

The extended VSP stand-alone storage mode and the legacy VSP stand-alone storage mode are designed for SteelHead EX appliances that do not have the Granite feature. Use the extended VSP stand-alone storage mode when you do not want to consolidate the operating system drives of the virtual servers into the data center storage, but instead want to keep them locally on the SteelHead EX.

Rdisk Traffic Routing Options

You can route Rdisk traffic out of either the primary or the in-path interfaces. This section contains the following topics:

 “In-Path Interface” on page 118

 “Primary Interface” on page 118

For more information about Rdisk, see “Network Quality of Service” on page 111. For information about WAN redundancy, see “Configuring WAN Redundancy” on page 81.

In-Path Interface

Riverbed recommends that you select the in-path interface when you deploy the SteelHead EX in Granite-only mode. When you configure the SteelHead EX to use the in-path interface, the Rdisk traffic is intercepted, optimized, and sent directly out of the WAN interface toward the Granite Core deployed at the data center. Use this option during proof-of-concept (POC) installations or if the primary interface is dedicated to management. The drawback of this mode is the lack of redundancy in the event of a WAN interface failure. In this configuration, only the WAN interface needs to be connected. Disable link state propagation.

Primary Interface

Riverbed recommends that you select the primary interface when you deploy the SteelHead EX in SteelHead EX + Granite mode. When you configure the SteelHead EX to use the primary interface, the Rdisk traffic is sent unoptimized out of the primary interface to a switch or a router that in turn redirects the traffic back into the LAN interface of the SteelHead EX to be optimized. The traffic is then sent out of the WAN interface toward the Granite Core deployed at the data center. This configuration offers more redundancy because you can have both in-path interfaces connected to different switches.


Deploying Granite with Third-Party Traffic Optimization

The Granite Edges and Granite Cores communicate with each other and transfer data blocks over the WAN using six different TCP port numbers: 7950, 7951, 7952, 7953, 7954, and 7970. Figure 11-1 shows a deployment in which the remote branch and data center third-party optimization appliances are configured through WCCP. You can optionally configure WCCP redirect lists on the router to redirect traffic belonging to the six Granite TCP ports to the SteelHeads. Configure a fixed-target rule for the six Granite TCP ports to the in-path interface of the data center SteelHead.

Figure 11-1. Granite Behind a Third-Party Deployment Scenario

Windows and ESX Server Storage Layout—Granite-Protected LUNs Versus Local LUNs

This section describes different LUNs and storage layouts. It includes the following topics:

 “Physical Windows Server Storage Layout” on page 121

 “Virtualized Windows Server on SteelHead EX and Granite Storage Layout” on page 122

 “Virtualized Windows Server on ESX Infrastructure with Production Data LUN on ESX Datastore Storage Layout” on page 123

Note: Granite-Protected LUNs are also known as iSCSI LUNs. This section refers to iSCSI LUNs as Granite-Protected LUNs.

Transient and temporary server data is not required in the case of disaster recovery and therefore does not need to be replicated back to the data center. For this reason, Riverbed recommends that you separate transient and temporary data from the production data by implementing a layout that separates the two into multiple LUNs.


In general, Riverbed recommends that you plan to configure one LUN for the operating system, one LUN for the production data, and one LUN for the temporary swap or paging space. Configuring LUNs in this manner greatly enhances data protection and operations recovery in case of a disaster. This extra configuration also facilitates migration to server virtualization if you are using physical servers. For more information about disaster recovery, see “Security and Data Resilience” on page 97.

To achieve these goals, Granite implements two types of LUNs: Granite-Protected (iSCSI) LUNs and local LUNs. You can add LUNs on the Configure > Manage: LUNs page.

Use Granite-Protected LUNs to store production data. They share the space of the block store cache. The data is continuously replicated and kept in sync with the associated LUN back at the data center. The Granite Edge cache keeps only the working set of data blocks for these LUNs; the remaining data is kept at the data center and predictively retrieved at the edge when needed. During WAN outages, edge servers are not guaranteed to operate and function at 100 percent because some of the data that is needed can be at the data center and not locally present in the Granite Edge block store cache.

One particular type of Granite-Protected LUN is the pinned LUN. Pinned LUNs are used to store production data, but they use dedicated space in the Granite Edge. The space required and dedicated in the block store cache is equal to the size of the LUN provisioned at the data center. The pinned LUN enables the edge servers to continue to operate and function during WAN outages because 100 percent of the data is kept in the block store cache. Like regular Granite LUNs, the data is replicated and kept in sync with the associated LUN at the data center. For more information about pinned LUNs, see “When to PIN and Prepopulate the LUN” on page 128.

Use local LUNs to store transient and temporary data. Local LUNs also use dedicated space in the block store cache. The data is never replicated back to the data center because it is not required in the case of disaster recovery.
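Because pinned and local LUNs consume dedicated block store space equal to their provisioned size, while unpinned Granite-Protected LUNs share whatever cache remains, it can help to budget the block store before you assign LUN types. The following Python sketch applies that rule of thumb together with the default 10 percent write reserve; all sizes are example assumptions, and treating the write reserve as carved out alongside the dedicated LUN space is an approximation for planning only.

# Rough block store budget for a Granite Edge.
# Pinned and local LUNs take dedicated space equal to their provisioned size;
# unpinned Granite-Protected LUNs share the remaining cache, which holds only
# their working set. Sizes below are example assumptions (GB).

block_store_gb = 2000.0
write_reserve_gb = 0.10 * block_store_gb       # default write reserve (10 percent)

pinned_lun_gb = [500.0]                        # pinned LUNs: full size is dedicated
local_lun_gb = [100.0]                         # local LUNs (swap and transient data)

dedicated_gb = sum(pinned_lun_gb) + sum(local_lun_gb) + write_reserve_gb
cache_for_unpinned_gb = block_store_gb - dedicated_gb

print(f"Dedicated space (pinned + local + write reserve): {dedicated_gb:.0f} GB")
print(f"Cache left for unpinned LUN working sets: {cache_for_unpinned_gb:.0f} GB")

if cache_for_unpinned_gb <= 0:
    print("Block store is over-committed; unpin a LUN or choose a larger model.")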


Physical Windows Server Storage Layout

When deploying a physical Windows server, Riverbed recommends that you separate its storage into three different LUNs: the operating system and swap space (or page file) can reside in two partitions on the server internal hard drive (or on two separate drives), while production data should reside on a Granite-Protected LUN (Figure 11-2).

Figure 11-2. Physical Server Layout

This layout facilitates future server virtualization and service recovery in the case of hardware failure at the remote branch. The production data is hosted on a Granite-Protected LUN, which is safely stored and backed up at the data center. In case of a disaster, you can stream this data with little notice to a newly deployed Windows server without having to restore the entire dataset from backup.


Virtualized Windows Server on SteelHead EX and Granite Storage Layout

When you deploy a virtual Windows server into the VSP SteelHead EX infrastructure, Riverbed recommends that you separate its storage into three different LUNs (Figure 11-3) as follows:

 You can virtualize the operating system disk (OS LUN) as a VMDK file hosted on a Granite-Protected LUN, allowing for data center backup and instant recovery in the event of SteelHead EX hardware failure.

 You can store swap and vSwap space containing transient data on local LUNs because this data does not need to be recovered after a disaster.

 Production data continues to reside on a Granite-Protected LUN, allowing for data center backup and instant recovery in the event of SteelHead EX hardware failure.

Figure 11-3. Virtual Server Layout 1


Virtualized Windows Server on ESX Infrastructure with Production Data LUN on ESX Datastore Storage Layout

When you deploy a virtual Windows server into an ESX infrastructure, you can also store the production data on an ESX datastore mapped to a Granite-Protected LUN (Figure 11-4). This deployment facilitates service recovery in the event of hardware failure at the remote branch because Granite appliances optimize not only LUNs formatted directly with the NTFS file system but also LUNs that are first virtualized with VMFS and later formatted with NTFS.

Figure 11-4. Virtual Server Layout 2

VMFS Datastores Deployment on Granite LUNs

When you deploy VMFS datastores on Granite-Protected LUNs, for best performance Riverbed recommends that you choose the Thick Provision Lazy Zeroed disk format (the VMware default). Because of the way the Granite Edge uses its block store, this disk format is the most efficient option.

With thin provisioning, you assign a LUN to a device (in this case, a VMFS datastore for an ESXi host) and tell the host how big the LUN is (for example, 10 GB), but the storage array initially allocates only a fraction of that space (for example, 2 GB). This approach is useful if you know that the host needs only 2 GB to begin with. Over days or months, as the host writes more data and needs more space, the storage array automatically grows the LUN until eventually it really is 10 GB in size. With thick provisioning, there is no pretending: you allocate all 10 GB from the beginning, whether the host needs it from day one or not.

Whether you choose thick or thin provisioning, you must initialize (format) the LUN like any other new disk. Formatting is essentially a process of writing a pattern (in this case, zeros) to the disk sectors; you cannot write to a disk before you format it. Normally, you have to wait for the entire disk to be formatted before you can use it, which can take hours for large disks. Lazy Zeroed means that the formatting works away slowly in the background, and as soon as the first few sectors have been formatted the host can start using the disk. The host therefore does not have to wait until the entire disk (LUN) is formatted.


Enable Windows Persistent Bindings for Mounted iSCSI LUNs

Riverbed recommends that you make iSCSI LUNs persistent across Windows server reboots; otherwise, you must manually reconnect them. To configure Windows servers to automatically connect to the iSCSI LUNs after system reboots, select the Add this connection to the list of Favorite Targets check box (Figure 11-5) when you connect to the Granite Edge iSCSI target.

Figure 11-5. Favorite Targets

To make iSCSI LUNs persistent and to ensure that Windows does not consider the iSCSI service fully started until connections are restored to all the Granite volumes on the binding list, remember to add the Granite Edge iSCSI target to the binding list of the iSCSI service. This addition is particularly important if other services depend on data stored on an iSCSI LUN: for example, a Windows file server that uses the iSCSI LUN as a share.


The best way to do this is to select the Volumes and Devices tab from the iSCSI Initiator control panel and click Auto Configure (Figure 11-6). This action binds all available iSCSI targets to the iSCSI startup process. If you want to bind individual targets, click Add. To add individual targets, you must know the target drive letter or mount point.

Figure 11-6. Target Binding

Set Up Memory Reservation for VMs Running on VMware ESXi in the VSP

By default, VMware ESXi dynamically tries to reclaim unused memory from guest virtual machines, while the Windows operating system uses free memory to perform caching and avoid swapping to disk. To significantly improve performance of Windows virtual machines, Riverbed recommends that you configure memory reservation to the highest possible value of the ESXi memory available to the VM. This advice applies whether the VMs are hosted within the VSP of the SteelHead EX or on an external ESXi server in the branch that is using LUNs from Granite. Note that setting the memory reservation to the configured size of the virtual machine results in a per virtual machine vmkernel swap file of zero bytes, which consumes less storage and helps to increase performance by eliminating ESXi host-level swapping. The guest operating system within the virtual machine maintains its own separate swap and page file.


Boot from an Unpinned iSCSI LUN

If you are booting a Windows server or client from an unpinned iSCSI LUN, Riverbed recommends that you install the Riverbed Turbo Boot software on the Windows machine. The Riverbed Turbo Boot software greatly improves boot performance over the WAN because it allows the Granite Core to send the Granite Edge only the files needed for the boot process.

Note: The Granite Turbo Boot plugin is not compatible with the branch recovery agent. For more information about the branch recovery agent, see “How Branch Recovery Works” on page 93 and the Granite Core Management Console User’s Guide.

Running Antivirus Software

There are two antivirus scanning modes:

 On-demand - Scans all the data files on the LUN for viruses at scheduled intervals.

 On-access - Scans the data files dynamically as they are read from or written to disk.

There are two common locations in which to perform the scanning:

 On-host - Antivirus software is installed on the application server.

 Off-host - Antivirus software is installed on dedicated servers that can directly access the application server data.

In typical Granite deployments, in which the LUNs at the data center contain the full data set and the remote branch cache contains the working set, Riverbed recommends that you run on-demand scans at the data center and on-access scans at the remote branch. Running an on-demand full file system scan at the remote branch causes the block store to wrap and evict the working set of data, leading to poor performance. However, if the LUNs are pinned, you can also run an on-demand full file system scan at the remote branch. Whether you scan on-host or off-host, the Granite solution does not dictate one approach over the other, but to minimize the load on the server, Riverbed recommends off-host virus scans.

Running Disk Defragmentation Software

Disk defragmentation software is another category of software that can cause the Granite block store cache to wrap and evict the working set of data. Riverbed recommends that you do not run disk defragmentation software, and that you disable the disk defragmentation that is enabled by default on Windows 7 and later.

Running Backup Software

Backup software is another category of software that can cause the Granite block store cache to wrap and evict the working set of data, especially during full backups. In a Granite deployment, Riverbed recommends that you run differential, incremental, and synthetic full backups, and run full backups at the data center.


Configure Jumbo Frames

If jumbo frames are supported by your network infrastructure, Riverbed recommends that you use jumbo frames between Granite Core and storage arrays. Riverbed has the same recommendation for Granite Edge and any external application servers (not hosted within VSP) that are using LUNs from the Granite Edge. The application server interfaces must support jumbo frames. For details, see “Configuring Granite Edge for Jumbo Frames” on page 57.

Granite Core Best Practices

This section describes best practices for deploying Granite Core. It includes the following topics:

 “Deploy on Gigabit Ethernet Networks” on page 127

 “Use Challenge Handshake Authentication Protocol” on page 127

 “Configure Initiators and Storage Groups or LUN Masking” on page 127

 “Granite Core Hostname and IP Address” on page 128

 “Segregate Storage Traffic from Management Traffic” on page 128

 “When to PIN and Prepopulate the LUN” on page 128

 “Granite Core Configuration Export” on page 129

 “Granite Core in HA Configuration Replacement” on page 129

Deploy on Gigabit Ethernet Networks

The iSCSI protocol enables block-level traffic over IP networks. However, iSCSI is both latency and bandwidth sensitive. To optimize performance and reliability, Riverbed recommends that you deploy Granite Core and the storage array on Gigabit Ethernet networks.

Use Challenge Handshake Authentication Protocol

For additional security, Riverbed recommends that you use the Mutual Challenge Handshake Authentication Protocol (CHAP) between Granite Core and the storage array, and between Granite Edge and the server. One-way CHAP is also supported.

Configure Initiators and Storage Groups or LUN Masking

To prevent unwanted hosts from accessing LUNs mapped to Granite Core, Riverbed recommends that you configure initiator and storage groups between Granite Core and the storage system. This practice is also known as LUN masking or storage access control. When mapping Fibre Channel LUNs to Core VEs, ensure that the ESXi servers in the cluster that host the virtual Granite Cores have access to these LUNs, and configure the ESXi servers in the cluster that do not host the Core VEs so that they do not have access to these LUNs.


Granite Core Hostname and IP Address

If the branch DNS server runs on VSP and its DNS datastore is deployed on a LUN used with Granite, Riverbed recommends that you use the Granite Core IP address instead of the hostname when you specify the Granite Core hostname and IP address. If you must use the hostname, deploy the DNS server on the VSP internal storage, or configure host DNS entries for the Granite Core hostname on the SteelHead.

Segregate Storage Traffic from Management Traffic

To increase overall security, minimize congestion, minimize latency, and simplify the overall configuration of your storage infrastructure, Riverbed recommends that you segregate storage traffic from regular LAN traffic using VLANs (Figure 11-7).

Figure 11-7. Traffic Segregation

When to PIN and Prepopulate the LUN

Granite technology has built-in file system awareness for NTFS and VMFS file systems. There are two likely circumstances when you need to pin and prepopulate the LUN:

 “LUNs Containing File Systems Other Than NTFS and VMFS and LUNs Containing Unstructured Data” on page 128

 “Data Availability at the Branch During a WAN Link Outage” on page 128

LUNs Containing File Systems Other Than NTFS and VMFS and LUNs Containing Unstructured Data

Riverbed recommends that you pin and prepopulate the LUN for unoptimized file systems such as FAT, FAT32, ext3, and so on. You can also pin the LUN for applications, such as databases, that use raw disk formats or proprietary file systems.

Data Availability at the Branch During a WAN Link Outage

When the WAN link between the remote branch office and the data center is down, data no longer travels over the WAN link, so Granite technology and its intelligent prefetch mechanisms no longer function. Riverbed recommends that you pin and prepopulate the LUN if frequent or prolonged WAN outages are expected.


By default, the Granite Edge keeps a write reserve that is 10 percent of the block store size. If prolonged periods of WAN outages are expected, Riverbed recommends that you appropriately increase the write reserve space.
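To judge whether the default reserve is large enough, compare it with the amount of data the branch is likely to write during the longest outage you plan to ride out. The following Python sketch shows the arithmetic; the block store size, write rate, and outage duration are example assumptions.

# Is the write reserve large enough for the longest WAN outage you plan for?
# During an outage, new branch writes accumulate in the write reserve until
# connectivity returns. Figures below are example assumptions.

block_store_gb = 2000.0
write_reserve_fraction = 0.10        # default reserve is 10 percent of the block store
daily_change_gb = 40.0               # assumed branch write rate per day
planned_outage_days = 3.0            # longest outage you want to ride out

reserve_gb = block_store_gb * write_reserve_fraction
needed_gb = daily_change_gb * planned_outage_days

print(f"Write reserve available: {reserve_gb:.0f} GB")
print(f"Writes expected during a {planned_outage_days:.0f}-day outage: {needed_gb:.0f} GB")

if needed_gb > reserve_gb:
    print("Default reserve is too small; ask your Riverbed representative to resize it.")
else:
    print("Default reserve should cover the planned outage.")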

Granite Core Configuration Export

Riverbed recommends that you store and back up the configuration on an external server in case of a system failure. Enter the following CLI commands to export the configuration:

enable
configure terminal
configuration bulk export scp://username:password@server/path/to/config

Riverbed recommends that you repeat this export each time you perform a configuration operation or make other changes to your configuration.

Granite Core in HA Configuration Replacement

If the configuration has been saved on an external server, a failed Granite Core can be seamlessly replaced. Enter the following CLI commands to retrieve the previously saved configuration:

enable
configure terminal
no service enable
configuration bulk import scp://username:password@server/path/to/config
service enable

iSCSI Initiators Timeouts

This section contains the following topics:

 “Microsoft iSCSI Initiator Timeouts” on page 129

 “ESX iSCSI Initiator Timeouts” on page 130

Microsoft iSCSI Initiator Timeouts

By default, the Microsoft iSCSI Initiator LinkDownTime timeout value is set to 15 seconds, and the MaxRequestHoldTime timeout value is also 15 seconds. These timeout values determine how long the initiator holds a request before reporting an iSCSI connection error. You can increase these values to accommodate longer outages, such as a Granite Edge failover event or a power cycle in the case of a single appliance. If MPIO is installed in the Microsoft iSCSI Initiator, the LinkDownTime value is used; if MPIO is not installed, MaxRequestHoldTime is used instead. If you are using Granite Edge in an HA configuration and MPIO is configured in the Microsoft iSCSI Initiator, change the LinkDownTime timeout value to 60 seconds to allow the failover to complete.
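If you want to check the current values before changing them, the driver parameters for the Microsoft iSCSI Initiator are commonly documented under the SCSI adapter class key in the registry. The following read-only Python sketch reports MaxRequestHoldTime and LinkDownTime where they are set; the registry path is an assumption based on common documentation and can vary by Windows version, so verify it for your release before relying on it.

# Read-only check of the Microsoft iSCSI Initiator timeout values.
# The registry path below is the location commonly documented for the
# iSCSI initiator driver parameters; verify it for your Windows version.
# Run on the Windows server where the iSCSI initiator is installed.

import winreg

CLASS_KEY = r"SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}"

def report_iscsi_timeouts():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as class_key:
        index = 0
        while True:
            try:
                instance = winreg.EnumKey(class_key, index)
            except OSError:
                break  # no more driver instances
            index += 1
            params_path = f"{CLASS_KEY}\\{instance}\\Parameters"
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, params_path) as params:
                    for name in ("MaxRequestHoldTime", "LinkDownTime"):
                        try:
                            value, _ = winreg.QueryValueEx(params, name)
                            print(f"{instance}: {name} = {value} seconds")
                        except OSError:
                            pass  # value not set for this instance
            except OSError:
                continue  # instance has no Parameters subkey

if __name__ == "__main__":
    report_iscsi_timeouts()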


ESX iSCSI Initiator Timeouts

By default, the VMware ESX iSCSI Initiator DefaultTimeToWait timeout is set to 2 seconds. This is the minimum time to wait before attempting an explicit or implicit logout, or an active iSCSI task reassignment, after an unexpected connection termination or a connection reset. You can increase this value to accommodate longer outages, such as a Granite Edge failover event or a power cycle in the case of a single appliance. If you are using Granite Edge in an HA configuration, change the DefaultTimeToWait timeout value to 60 seconds to allow the failover to complete. For more information about iSCSI Initiator timeouts, see “Configuring iSCSI Initiator Timeouts” on page 58.

Operating System Patching

This section contains the following topics:

 “Patching at the Branch Office for Virtual Servers Installed on iSCSI LUNs” on page 130

 “Patching at the Data Center for Virtual Servers Installed on iSCSI LUNs” on page 130

Patching at the Branch Office for Virtual Servers Installed on iSCSI LUNs

You can continue to use your existing methodologies and tools to perform patch management on physical or virtual branch servers that boot over the WAN using Granite appliances.

Patching at the Data Center for Virtual Servers Installed on iSCSI LUNs

If you want to perform virtual server patching at the data center and save a round-trip of patch software from the data center to the branch office, use the following procedure.

To perform virtual server patching at the data center

1. At the branch office:

 Power down the virtual machine.

 Take the VMFS datastore offline.

2. At the data center:

 Take the LUN on the Granite Core offline.

 Mount the LUN to a temporary ESX server.

 Power up the virtual machine, and apply patches and file system updates.

 Power down the virtual machine.

 Take the VMFS datastore offline.

 Bring the LUN on the Granite Core online.


3. At the branch office:

 Bring the VMFS datastore online.

 Boot up the virtual machine.

If the LUN was previously pinned at the edge, patching at the data center can potentially invalidate the cache. If this is the case, you might need to prepopulate the LUN.



CHAPTER 12 Granite Appliance Sizing

Every deployment of the Granite product family differs due to variations in specific customer needs and types and sizes of IT infrastructure. The following information is intended to guide you to achieving optimal performance. However, these guidelines are general; for detailed worksheets for proper sizing, contact your Riverbed representative. This chapter includes the following sections:

 “General Sizing Considerations” on page 133

 “Granite Core Sizing Guidelines” on page 134

 “Granite Edge Sizing Guidelines” on page 135

General Sizing Considerations

Accurate sizing typically requires a discussion between Riverbed representatives and your server, storage, and application administrators. General considerations include, but are not limited to, the following:

 Storage capacity used by branch offices - How much capacity is currently used, or expected to be used, by the branch office? The total capacity might include both used and free space.

 Input/output operations per second (IOPS) - What are the number and types of drives being used? Determine this value early so that the Granite-enabled SteelHead can provide the same or a higher level of performance.

 Daily rate of change - How much data is Granite Edge expected to write back to the storage array through the Granite Core appliance? This value can be determined by studying backup logs.

 Branch applications - Which and how many applications are required to continue running during a WAN outage? This answer can impact disk capacity calculations.


Granite Core Sizing Guidelines

The main considerations for sizing your Granite Core deployment are as follows:

 Total data set size - The total space used across LUNs (not the size of LUNs).

 Total number of LUNs - Each LUN adds five optimized connections to the SteelHead. Also, each branch office in which you have deployed Granite Edge represents at least one LUN in the storage array.

 RAM requirements - Riverbed recommends that you have at least 700 MB of RAM per TB of used space in the data set. There is no specific setting on the Granite Core to allocate memory on this basis, but in general this is how much memory the Granite Core uses under normal circumstances if it is available. Each Granite Core model ships with a fixed amount of memory (see the Granite and SteelHead specification sheets for details). If the available memory falls below the recommended value, the performance of the Granite Core, and its ability to efficiently perform prediction and prefetch operations, can be affected. (A worked example of this guideline follows the sizing table below.)

Other potentially decisive factors include:

 Number of files and directories

 Type of file system, such as NTFS or VMFS

 File fragmentation

 Active working set of LUNs

 Number of misses seen from Granite Edge

 Response time of the storage array

The following table summarizes sizing recommendations for Granite Core appliances based on the number of branches and data set sizes.

Model    Number of LUNs   Number of Branches   Data Set Size   RAM

1000U    10               5                    2 TB            VM guidelines
1000L    20               10                   5 TB            VM guidelines
1000M    40               20                   10 TB           VM guidelines
1500L    60               30                   20 TB           VM guidelines
1500M    60               30                   35 TB           VM guidelines
2000L    20               10                   5 TB            24 GB
2000M    40               20                   10 TB           24 GB
2000H    80               40                   20 TB           24 GB
2000VH   160              80                   35 TB           24 GB
3000L    200              100                  50 TB           128 GB
3000M    250              125                  75 TB           128 GB
3000H    300              150                  100 TB          128 GB


The table assumes two LUNs per branch; however, there is no enforced limit on the number of LUNs per branch or the number of branches, as long as the recommended number of LUNs and data set sizes remain within limits.

Note: Granite Core models 1000 and 1500 are virtual appliances. For minimum memory requirements, see the Granite Core Installation and Configuration Guide.
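To sanity-check a Granite Core model against the guideline of at least 700 MB of RAM per TB of used data, you can apply simple arithmetic to your planned data set. The following Python sketch shows the calculation; the data set size and the appliance memory figure are example assumptions, so check the Granite and SteelHead specification sheets for the actual memory of each model.

# Apply the "at least 700 MB of RAM per TB of used data" guideline to a
# planned Granite Core deployment. The data set size is an example figure.

used_data_tb = 30.0                      # total space used across all LUNs
recommended_ram_gb = used_data_tb * 0.7  # 700 MB per TB, expressed in GB

print(f"Data set of {used_data_tb:.0f} TB -> roughly {recommended_ram_gb:.0f} GB of RAM recommended")

# Compare against the fixed memory of the candidate appliance model
# (see the Granite and SteelHead specification sheets for actual figures).
model_ram_gb = 24.0                      # example: a 2000-series appliance
if recommended_ram_gb > model_ram_gb:
    print("Prefetch and prediction efficiency may suffer; consider a larger model.")
else:
    print("Model memory meets the guideline for this data set.")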

Granite Edge Sizing Guidelines

The main considerations for sizing your Granite Edge deployment are as follows:

 Disk size - What is the expected capacity of the Granite Edge block store?

– Your calculations can be affected depending on whether LUNs are pinned, unpinned, or local.

– During WAN outages, when the Granite Edge cannot synchronize write operations back through the Granite Core to the LUNs in the data center, the Granite Edge uses a write reserve area on the block store to store the data. As described in “Pin the LUN and Prepopulate the Block Store” on page 116, this area is 10 percent of the block store capacity.

 Input/output operations per second (IOPS) - If you are replacing existing storage in the branch office, you can calculate this value from the number and types of drives in the devices you want to replace (a first-pass estimate is sketched after this list). Remember that the drives might not have been operating at their full performance capacity, so if an accurate figure is required, consider using performance monitoring tools that might be included with the server OS: for example, perfmon.exe in Windows.

Other potentially decisive factors:

 HA requirements (PSU, disk, network redundancy)

 VSP CPU and memory requirements

 SteelHead EX optimization requirements (bandwidth and connection count)

See the Granite and SteelHead specification sheets for the capacity and capabilities of each model.
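When the Granite Edge replaces existing branch storage, a first-pass IOPS estimate can come from the drives being retired. The per-drive figures in the following Python sketch are common industry rules of thumb, not Riverbed specifications or measured values, and the drive inventory is an example assumption; use perfmon.exe or similar tools when an accurate figure is needed.

# First-pass IOPS estimate for the branch storage being replaced.
# Per-drive figures are rules of thumb, not measured values; use perfmon
# (Windows) or similar tools when an accurate figure is needed.

TYPICAL_IOPS = {
    "7.2k_sata": 80,
    "10k_sas": 140,
    "15k_sas": 180,
}

# Example branch server: assumed drive inventory (type, count).
drives = [
    ("10k_sas", 4),     # four 10K SAS spindles
    ("7.2k_sata", 2),   # two 7.2K SATA spindles
]

total_iops = sum(TYPICAL_IOPS[kind] * count for kind, count in drives)
print(f"Estimated aggregate IOPS of existing drives: {total_iops}")
print("Size the Granite Edge model to provide the same or higher IOPS.")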



APPENDIX A Granite Edge Network Reference Architecture

This appendix provides detailed diagrams for SteelHead EXs that run VSP and Granite Edge. It includes the following topics:

 “Converting In-Path Interfaces to Data Interfaces” on page 137

 “Multiple VLAN Branch with Four-Port Data NIC” on page 139
– “SteelHead EX Setup for Multiple VLAN Branch with Four-Port Data NIC” on page 139
– “Granite Edge Setup Multiple VLAN Branch with Four-Port Data NIC” on page 139
– “Virtual Services Platform Setup Multiple VLAN Branch with Four-Port Data NIC” on page 140

 “Single VLAN Branch with Four-Port Data NIC” on page 141
– “SteelHead EX Setup for Single VLAN Branch with Four-Port Data NIC” on page 141
– “Granite Edge Setup Single VLAN Branch with Four-Port Data NIC” on page 141
– “Virtual Services Platform Setup Single VLAN Branch with Four-Port Data NIC” on page 142

 “Multiple VLAN Branch Without Four-Port Data NIC” on page 143
– “SteelHead EX Setup for Multiple VLAN Branch Without Four-Port Data NIC” on page 143
– “Granite Edge Setup Multiple VLAN Branch Without Four-Port Data NIC” on page 143
– “Virtual Services Platform Setup Multiple VLAN Branch Without Four-Port Data NIC” on page 144

For additional information about four-port NIC data cards, see “Using the Correct Interfaces for Granite Edge Deployment” on page 74.

Converting In-Path Interfaces to Data Interfaces

For Granite Edge deployments in which you want to use additional interfaces for iSCSI traffic, you can convert the in-path interfaces on an additional, installed bypass NIC to data interfaces. To convert to data interfaces, use the hardware nic slot mode data CLI command or the Networking > Networking: Data Interfaces page of the SteelHead EX Management Console. After you enter the command, you must reboot the appliance and bring up the data interfaces. You can convert data interfaces back to in-path interfaces by using the hardware nic slot mode inpath CLI command. This feature is not supported on the SteelHead EX560 and EX760 models.


For details, see the Riverbed Granite Core Command-Line Reference Manual and the Riverbed Command-Line Interface Reference Manual. Some SteelHead EX models have expansion slots and can support up to two additional NICs (slot 1 and slot 2). You can use a single NIC in slot 1 for SteelHead optimization (in-path mode), for VSP traffic (data mode), or for Granite traffic (data mode). If you have two additional NICs, you can use slot 1 for SteelHead optimization (in-path), for VSP traffic (data mode), or for Granite traffic (data mode). However, you can use the NIC in slot 2 only for SteelHead optimization (in-path) or for Granite traffic (data mode); there is no support for VSP traffic on a NIC in slot 2.

Note: SteelHead EX software releases prior to v3.5 only support a single NIC in slot 1. To support two NICs, or a single NIC in slot 2, you must use EX release v3.5 or later.

The following table summarizes the deployment options.

EX Model   Number of PCIe Expansion Slots   Slot 1                     Slot 2 (EX v3.5 or later)

EX560      0                                —                          —
EX760      0                                —                          —
EX1160     2                                in-path, VSP, or storage   in-path or storage
EX1260     2                                in-path, VSP, or storage   in-path or storage
EX1360     2                                in-path, VSP, or storage   in-path or storage

In this table:

 in-path = in-path mode for SteelHead EX WAN optimization traffic

 VSP = data mode access for VSP traffic

 storage = data mode access for Granite traffic


Multiple VLAN Branch with Four-Port Data NIC

The following diagrams apply only to SteelHead EX models 1160/1260.

Figure A-1. SteelHead EX Setup for Multiple VLAN Branch with Four-Port Data NIC

Figure A-2. Granite Edge Setup Multiple VLAN Branch with Four-Port Data NIC


Figure A-3. Virtual Services Platform Setup Multiple VLAN Branch with Four-Port Data NIC


Single VLAN Branch with Four-Port Data NIC

The following network diagrams apply only to SteelHead EX models 1160/1260.

Figure A-4. SteelHead EX Setup for Single VLAN Branch with Four-Port Data NIC

Figure A-5. Granite Edge Setup Single VLAN Branch with Four-Port Data NIC


Figure A-6. Virtual Services Platform Setup Single VLAN Branch with Four-Port Data NIC


Multiple VLAN Branch Without Four-Port Data NIC

Figure A-7. SteelHead EX Setup for Multiple VLAN Branch Without Four-Port Data NIC

Figure A-8. Granite Edge Setup Multiple VLAN Branch Without Four-Port Data NIC


Figure A-9. Virtual Services Platform Setup Multiple VLAN Branch Without Four-Port Data NIC
