Best Practices Guide

Fibre Channel Host Bus Adapters on Microsoft Windows 2012 and VMware ESXi 5.x

2500 Series and 2600 Series

SN0454502-00 A

Information furnished in this manual is believed to be accurate and reliable. However, QLogic Corporation assumes no responsibility for its use, nor for any infringements of patents or other rights of third parties which may result from its use. QLogic Corporation reserves the right to change product specifications at any time without notice. Applications described in this document for any of these products are for illustrative purposes only. QLogic Corporation makes no representation nor warranty that such applications are suitable for the specified use without further testing or modification. QLogic Corporation assumes no responsibility for any errors that may appear in this document.

Document Revision History

Revision A, October 14, 2013

Changes: Initial release of new guide

Table of Contents

Preface
    Intended Audience
    What Is in This Guide
    Related Materials
    Documentation Conventions
    License Agreements
    Technical Support
        Downloading Updates
        Training
        Contact Information
        Knowledge Database
1 Introduction
    What Are Host Bus Adapter Best Practices?
    Supported Host Bus Adapters
2 SAN Planning and Deployment
    Choosing the Right Host Bus Adapter
        PCIe Connectivity
        Features
    High Availability
        Choosing Host Bus Adapters for High Availability
        Connecting the SAN for High Availability
    Performance Considerations
        Mitigating Congestion Between VMs and Host Bus Adapter Ports
        Mitigating Congestion Between Host Server Ports and Target Ports
        Avoiding Obsolete Drivers or Firmware
        Mitigating Host Interrupt-Induced Congestion
    Microsoft Windows 2012 Multipath I/O
        Failover Only DSM
        Round Robin DSM
        Round Robin with Subset DSM
        Least Queue Depth DSM
        Weighted Paths DSM
    VMware ESXi 5.x Fibre Channel Multipathing
        Fixed Path Selection Policy
        Most Recently Used Path Selection Policy
        Round Robin Path Selection Policy
    Considerations for Tape Access Through a Host Bus Adapter
    Checking the Interoperability of SAN Components
3 Host Bus Adapter Installation
    Safely Handling the Host Bus Adapter
    Choosing a PCIe Slot
    Verifying the Host Bus Adapter Installation
        Verifying Adapter Connectivity in a Preboot Environment
        Installing QLogic Firmware, Device Driver, and Management Tools
            Installation in a Microsoft Windows 2012 Environment
            Installation in a VMware ESXi 5.x Environment
    Verifying That the Host Can See the Host Bus Adapter from the Host OS
    Verifying That the Host Can See the LUNs
    Installing the Host Bus Adapter Drivers
    Understanding the LED Scheme for QLogic Host Bus Adapters
4 Host Bus Adapter Performance Tuning
    Host Bus Adapter Performance
        Understanding Application Workloads
            OLTP—Transaction-Based Processes (IOPS)
            OLAP—Throughput-Based Processes (MBps)
            Addressing OLTP and OLAP Workloads
        Host Bus Adapter Parameters That Impact Performance
        ZIO Operation
    Improving Host Bus Adapter Performance in Windows 2012
    Improving Host Bus Adapter Performance in VMware ESXi 5.x
        Understanding SIOC and Datastores
        Determining Host Bus Adapter Datastore Congestion
        Tuning the Underlying Physical Interface
            Understanding Host Bus Adapter Queue Depth
            Tuning Host Bus Adapter Queue Depth for Your Environment
            Changing the ZIO Mode and Timer in ESXi 5.x
    Monitoring Performance
        Gathering Host Server Data
        Gathering Fabric Network Data
        Gathering Disk Array Performance Data
    SAN Performance Recommendations
        Fencing High-Performance Applications
        Minimizing ISL
        Upgrading to 8Gb or 16Gb
        Setting Data Rate to Auto-Negotiate
        Considering Fan-In Ratios
        Avoiding RSCNs
        Balancing I/O Loads
5 Management and Monitoring
    Monitoring Host Bus Adapters with QConvergeConsole (GUI)
        Viewing Server Information
            Server: Information
            Server: Security
            Server: Topology
            Server: Statistics
            Server: Utilities
        Viewing Adapter Information
        Viewing Port Information
            Port: Port Info
            Port: Target
            Port: Diagnostics
            Port: QoS
            Port: Virtual
            Port: Parameters
            Port: Monitoring
            Port: Utilities
            Port: VFC
            Port: Utilization
    Monitoring Host Bus Adapters with QConvergeConsole CLI
    Monitoring Host Bus Adapters with QConvergeConsole Plug-in for VMware vCenter Server
        Viewing Server Information
        Viewing Adapter Information
        Viewing Port Information
            Port: Boot
            Port: Parameters
            Port: Transceiver
            Port: Statistics
            Port: Diagnostics
            Port: VPD
6 Fibre Channel Security
    Security Overview
    Setting the QConvergeConsole Password
        Changing the QConvergeConsole Password for a Single Host
        Changing the QConvergeConsole Password for Multiple Hosts
7 Fibre Channel Boot-from-SAN
    Advantages of Boot-from-SAN
    Installing and Configuring Boot-from-SAN
A QLogic Host Bus Adapter LEDs
Glossary
Index

List of Figures

2-1 Highly Available SAN
4-1 Configuring ZIO in QConvergeConsole
4-2 Configuring ZIO in QConvergeConsole CLI
4-3 Using esxtop to Determine Datastore Congestion
4-4 ESXi I/O Architecture
5-1 QConvergeConsole User Interface
5-2 QConvergeConsole: Server Information Page
5-3 QConvergeConsole: Server Topology Page
5-4 QConvergeConsole: Server Statistics Page
5-5 QConvergeConsole: Port Diagnostics, Link Status Page
5-6 QConvergeConsole Plug-in for VMware vCenter Server User Interface
5-7 QConvergeConsole Plug-in for VMware: Adapter Page
5-8 QConvergeConsole Plug-in for VMware: Port Boot Page
5-9 QConvergeConsole Plug-in for VMware: Port Parameters Page
5-10 QConvergeConsole Plug-in for VMware: Port Transceiver Page
5-11 QConvergeConsole Plug-in for VMware: Port Statistics Page
5-12 QConvergeConsole Plug-in for VMware: Port Diagnostics Page
5-13 QConvergeConsole Plug-in for VMware: Port VPD Page
6-1 QConvergeConsole: Security Page
6-2 QConvergeConsole: Password Update Wizard

List of Tables

1-1 Supported Host Bus Adapters
2-1 PCIe Connectivity
2-2 QLogic Host Bus Adapter Features
2-3 Highly Available SAN Storage Attributes
4-1 Modifying Host Bus Adapter Parameters
4-2 ZIO Operation Modes
4-3 Recommended ZIO Settings
4-4 SIOC Congestion Threshold
4-5 Host Server Data Tools
A-1 QLE25xx Host Bus Adapter LED Scheme
A-2 QLE26xx Host Bus Adapter LED Scheme

Preface

QLogic® 8Gb and 16Gb Fibre Channel Host Bus Adapters are designed and developed to provide industry-leading performance, simplified management, and fully interoperable solutions in the most demanding enterprise SAN environments. This guide presents a compilation of the best practices to apply when using QLogic 8Gb 2500 Series (QLE25xx) or 16Gb 2600 Series (QLE26xx) Fibre Channel Host Bus Adapters in either a Microsoft® Windows® 2012 or a VMware® ESXi® 5.x environment. The guide recommends QLogic techniques and strategies for implementation in Windows 2012 and ESXi 5.x OSs.

Intended Audience

This guide is intended for SAN architects, IT administrators, and storage system professionals who currently use, or are considering using, a QLogic Host Bus Adapter in a Fibre Channel SAN in a virtualized environment.

What Is in This Guide

This preface specifies the intended audience, explains the typographic conventions used in this guide, lists related documents, and provides technical support and contact information. The remainder of this guide is organized into the following chapters and appendices:

• Chapter 1 Introduction provides a definition of best practices for QLogic Host Bus Adapters and lists the adapters that are covered.
• Chapter 2 SAN Planning and Deployment details what to consider before implementing a SAN in your organization, including choosing the right equipment and making the right interconnections.
• Chapter 3 Host Bus Adapter Installation covers the best practices for verifying Host Bus Adapter installation, verifying that there are no conflicts, choosing the correct driver, and selecting server PCIe slot types.
• Chapter 4 Host Bus Adapter Performance Tuning describes how to use the QLogic-provided tools in both Windows 2012 and ESXi 5.x to modify Host Bus Adapter driver and firmware parameters for enhanced adapter performance.
• Chapter 5 Management and Monitoring covers how to use the management tools provided by QLogic for monitoring the health and performance of the Host Bus Adapter and SAN.
• Chapter 6 Fibre Channel Security describes how to implement a secure SAN using the available features of the QLogic Host Bus Adapter.
• Chapter 7 Fibre Channel Boot-from-SAN provides best practices related to the installation and booting of OS images from remote targets through QLogic Host Bus Adapters.
• Appendix A QLogic Host Bus Adapter LEDs defines the blink patterns of the various LEDs on the back of a QLogic Host Bus Adapter.

Following the appendix are a glossary of terms and acronyms used, and an index to help you quickly find the information you need.

Related Materials

Other documentation that you may find useful includes the following:

• User's Guide Fibre Channel Adapter 2600 Series (part number FC0054609-00) provides installation, configuration, and troubleshooting information for the QLE2670 and QLE2672 adapters.
• User's Guide QConvergeConsole 2400, 2500, 2600, 3200, 8100, 8200, 8300, 10000 Series (part number SN0054669-00) covers general information about the QConvergeConsole GUI tool.
• User's Guide QConvergeConsole Plug-ins for VMware vSphere (part number SN0054677-00) provides procedures for using the two QLogic plug-in tools: QConvergeConsole Plug-in for VMware vCenter Server and QConvergeConsole Plug-in for VMware vSphere Web Client.
• User's Guide QConvergeConsole CLI 2400, 2500, 2600, 3200, 8100, 8200, 8300 Series (part number SN0054667-00) provides specific command line use in both interactive and noninteractive modes.

For information about downloading documentation from the QLogic Web site, see "Downloading Updates" on page xii.

Documentation Conventions

This guide uses the following documentation conventions:

• The best practice icon indicates a QLogic-recommended best practice.
• NOTE provides additional information.
• CAUTION without an alert symbol indicates the presence of a hazard that could cause damage to equipment or loss of data.
• Text in blue font indicates a hyperlink (jump) to a figure, table, or section in this guide, and links to Web sites are shown in underlined blue. For example:
  - Table 9-2 lists problems related to the user interface and remote agent.
  - See "Installation Checklist" on page 3-6.
  - For more information, visit www.qlogic.com.
• Text in bold font indicates user interface elements such as menu items, buttons, check boxes, or column headings. For example:
  - Click the Start button, point to Programs, point to Accessories, and then click Command Prompt.
  - Under Notification Options, select the Warning Alarms check box.
• Text in Courier font indicates a file name, directory path, or command line text. For example:
  - To return to the root directory from anywhere in the file structure, type cd /root and press ENTER.
  - Issue the following command: sh ./install.bin
• Key names and key strokes are indicated in UPPERCASE:
  - Press CTRL+P.
  - Press the UP ARROW key.
• Text in italics indicates terms, emphasis, variables, or document titles. For example:
  - For a complete listing of license agreements, refer to the QLogic Software End User License Agreement.
  - What are shortcut keys?
  - To enter the date, type mm/dd/yyyy (where mm is the month, dd is the day, and yyyy is the year).

License Agreements

Refer to the QLogic Software End User License Agreement for a complete listing of all license agreements affecting this product.

Technical Support

Customers should contact their authorized maintenance provider for technical support of their QLogic products. QLogic-direct customers may contact QLogic Technical Support; others will be redirected to their authorized maintenance provider. Visit the QLogic support Web site listed in Contact Information for the latest firmware and software updates.

For details about available service plans, or for information about renewing and extending your service, visit the Service Program Web page at http://www.qlogic.com/Support/Pages/ServicePrograms.aspx.

Downloading Updates

The QLogic Web site provides periodic updates to product firmware, software, and documentation.

To download firmware, software, and documentation:

1. Go to the QLogic Downloads and Documentation page: http://driverdownloads.qlogic.com.
2. Under QLogic Products, type the QLogic model name in the search box.
3. In the search results list, locate and select the firmware, software, or documentation for your product.
4. View the product details Web page to ensure that you have the correct firmware, software, or documentation. For additional information, click the Read Me and Release Notes icons under Support Files.
5. Click Download Now.
6. Save the file to your computer.
7. If you have downloaded firmware, software, drivers, or boot code, follow the installation instructions in the Readme file.

Instead of typing a model name in the search box, you can perform a guided search as follows:

1. Click the product type tab: Adapters, Switches, Routers, or ASICs.
2. Click the corresponding button to search by model or operating system.
3. Click an item in each selection column to define the search, and then click Go.
4. Locate the firmware, software, or document you need, and then click the icon to download or open the item.

Training

QLogic Global Training maintains a Web site at www.qlogictraining.com offering online and instructor-led training for all QLogic products. In addition, sales and technical professionals may obtain Associate and Specialist-level certifications to qualify for additional benefits from QLogic.

Contact Information

QLogic Technical Support for products under warranty is available during local, standard working hours, excluding QLogic Observed Holidays. For customers with extended service, consult your plan for available hours. For Support phone numbers, see the Contact Support link at support.qlogic.com.

Support Headquarters: QLogic Corporation, 4601 Dean Lakes Blvd., Shakopee, MN 55379 USA
QLogic Web Site: www.qlogic.com
Technical Support Web Site: http://support.qlogic.com
Technical Support E-mail: [email protected]
Technical Training E-mail: [email protected]

Knowledge Database

The QLogic knowledge database is an extensive collection of QLogic product information that you can search for specific solutions. QLogic is constantly adding to the collection of information in the database to provide answers to your most urgent questions. Access the database from the QLogic Support Center: http://support.qlogic.com.

1 Introduction

This introductory chapter provides a definition of best practices for QLogic Host Bus Adapters and lists the adapters that are covered.

What Are Host Bus Adapter Best Practices?

Best practices are process-oriented steps that are planned and prioritized to simplify a specific activity. Following best practices helps to ensure the highest quality of service (QoS). Best practices also provide guidelines on how to fully use your Host Bus Adapter in a SAN.

Note that these practices are only recommendations, not requirements. Not following these recommendations does not affect QLogic support of your solution. Not all recommendations apply to every scenario. QLogic customers will benefit from reviewing these recommendations before making any implementation decision.

Every organization has different requirements for a Fibre Channel SAN. The requirements are frequently driven by factors such as the type of business in which the organization engages, the type of data stored in the SAN, and the organization's customer base. Careful evaluation of all of these factors, in conjunction with the best practices contained in this guide, helps you to develop an appropriate plan for deploying and using a SAN. This evaluation should include specific considerations as to how data are stored and managed in a Fibre Channel SAN. Follow best practices to quickly and easily enhance the value of existing storage resources with little or no investment.

Supported Host Bus Adapters

Table 1-1 lists the QLogic 8Gb and 16Gb Fibre Channel Host Bus Adapters covered in this guide. These QLogic Host Bus Adapters may be collectively referred to as QLogic adapters or simply adapters throughout this guide.

Table 1-1. Supported Host Bus Adapters

QLogic Host Bus Adapter | Fibre Channel Rate | Host Bus   | Port Quantity
QLE2560                 | 8Gb                | PCIe Gen 2 | 1
QLE2562                 | 8Gb                | PCIe Gen 2 | 2
QLE2564                 | 8Gb                | PCIe Gen 2 | 4
QLE2670                 | 16Gb               | PCIe Gen 3 | 1
QLE2672                 | 16Gb               | PCIe Gen 3 | 2

2 SAN Planning and Deployment

This chapter provides information on the following aspects of SAN planning and deployment:

• Choosing the Right Host Bus Adapter
• "High Availability" on page 2-4
• "Performance Considerations" on page 2-6
• "Microsoft Windows 2012 Multipath I/O" on page 2-8
• "VMware ESXi 5.x Fibre Channel Multipathing" on page 2-11
• "Considerations for Tape Access Through a Host Bus Adapter" on page 2-12
• "Checking the Interoperability of SAN Components" on page 2-13

Choosing the Right Host Bus Adapter

As a best practice, use an 8Gb or 16Gb Host Bus Adapter, even if the SAN has 4Gb components (switch or storage), for the following reasons:

• QLogic 8Gb Fibre Channel Host Bus Adapter technology is backward compatible with 4Gb and 2Gb infrastructures.
• QLogic 16Gb Fibre Channel Host Bus Adapter technology is backward compatible with 8Gb and 4Gb infrastructures.
• QLogic 8Gb and 16Gb Fibre Channel Host Bus Adapters offer significantly higher performance, provide enhanced product and data reliability, and allow superior scalability.
• As user requirements and data volumes continue to grow, choosing and deploying a QLogic 8Gb or 16Gb Fibre Channel Host Bus Adapter enables better investment protection to meet future needs.

PCIe Connectivity

Table 2-1 lists PCIe connectivity information for QLogic 8Gb and 16Gb Fibre Channel Host Bus Adapters.

Table 2-1. PCIe Connectivity

PCIe Generation | QLE25xx Lanes Used | QLE25xx Connector Physical Size | QLE26xx Lanes Used | QLE26xx Connector Physical Size
PCIe Gen 1      | ×8  | ×8  | ×8  | ×8
PCIe Gen 2      | ×4  | ×8  | ×8  | ×8
PCIe Gen 3      | N/A | N/A | ×4  | ×8

Features

Table 2-2 summarizes the features provided by QLogic 8Gb and 16Gb Fibre Channel Host Bus Adapters.

Table 2-2. QLogic Host Bus Adapter Features

Description | QLE25xx 8Gb Fibre Channel Host Bus Adapter Support | QLE26xx 16Gb Fibre Channel Host Bus Adapter Support
MSI-X | Yes | Yes
Quantity of ports | 1, 2, and 4 | 1 and 2
Speed | 8Gbps, 4Gbps, and 2Gbps | 16Gbps, 8Gbps, and 4Gbps
Protocol support | Fibre Channel protocol 3 (FC-3), SCSI, and Fibre Channel tape | FC-3, SCSI, and Fibre Channel tape
Fibre Channel Class support | Class 2 and Class 3 | Class 2 and Class 3
Fibre Channel topology | Switching fabric, Fibre Channel arbitrated loop (FC-AL) at 8Gb and 4Gb, and point-to-point | Switching fabric, FC-AL at 8Gb and 4Gb, and point-to-point
Intelligent interleaved direct memory access (iiDMA) | Yes | Yes
Out-of-order frame reassembly (OoOFR) | Yes | Yes
Overlapping protection domains (OPD) | Yes | Yes
Fibre Channel security protocol (FC-SP) | N/A | Authentication only
T10 CRC | Yes | Yes
N_Port ID virtualization (NPIV) virtual ports (vPorts) per port | 64 | 255
Quantity of logins per port | 2,000 | 2,000
Exchanges per port | 2,000 | 2,000
IOPS per port | 250,000 | 500,000
MBps | 826 | 1,652
Queue depth | ESXi only | ESXi only

High Availability

This section provides the hardware and software design considerations necessary to achieve high availability connectivity between host initiators and storage device targets.

Choosing Host Bus Adapters for High Availability

QLogic provides the following recommendations to help you decide how many ports per Host Bus Adapter are required for a server. Options include choosing one or more single-, dual-, or quad-port Host Bus Adapters:

• High availability requires more than one physical Host Bus Adapter in a server.
• If multiple Host Bus Adapters are not available, the next best option is to use a single Host Bus Adapter with multiple ports. Use this option when server PCI slot space can accommodate only one Host Bus Adapter.
• Two single-port Host Bus Adapters provide better reliability than one dual-port Host Bus Adapter.
• When using a SAN to back up your array, use a dedicated third Host Bus Adapter so that the storage Host Bus Adapters can perform their tasks through paths that are separate from the high-bandwidth backup application. This practice eliminates performance impacts on regular storage traffic.

Connecting the SAN for High Availability

QLogic recommends that you set up all SANs such that a single point of failure (that is, failure of a single SAN component) does not disrupt application I/O. Use redundant Host Bus Adapters, switches, storage processors, and fabric to achieve a highly available SAN, as depicted in Figure 2-1. Table 2-3 lists the attributes associated with each storage configuration in the Figure 2-1 SAN.

[Figure 2-1 is a diagram of four hosts, each with Host Bus Adapters connected through Fibre Channel switches to storage processors A and B and their LUNs on Storage 1 through Storage 4, showing increasing levels of path redundancy from Storage 1 to Storage 4.]

Figure 2-1. Highly Available SAN

Table 2-3. Highly Available SAN Storage Attributes

Attribute | Storage 1 | Storage 2 | Storage 3 | Storage 4
Possibility of failure | High | Substantial | Moderate | Low
Connectivity | Single path | Single host connection | Single switch | Fully redundant
Single point of failure | Host Bus Adapter, storage port, and cable | Host Bus Adapter, switch, and Host Bus Adapter-to-switch cable | Switch | None

NOTE: For Windows 2012, simply configuring redundant paths between an initiator and a target does not provide high availability, because Windows 2012 cannot determine that a LUN seen through multiple paths is a single LUN.
• The host OS treats the multiple paths as multiple LUNs, which leaves the LUNs vulnerable to corruption.
• To allow for high availability connectivity, host OSs use multipath software to manage multiple paths between the host and the storage.
For ESXi 5.x, the native multipathing plug-in is always loaded, so the OS always presents one LUN to the applications and users, even though there are multiple paths to that LUN.

Performance Considerations

Several common configuration issues in virtualized host servers affect the performance of host storage. Successful planning and deployment of a high-performance, virtualized environment requires consideration of these issues. Major virtualized host issues that affect storage performance include:

• Congestion between the virtual machines (VMs) and the Host Bus Adapter ports on the host
• Congestion between the host server ports and the target ports
• Obsolete drivers or firmware
• Host CPU interrupt-induced congestion

The following sections provide more detailed explanations of the preceding issues.

Mitigating Congestion Between VMs and Host Bus Adapter Ports

QLogic supports the following technologies that both Microsoft and VMware have developed to address congestion between the VMs and the Host Bus Adapter ports:

• In Windows 2012, Microsoft provides virtual Fibre Channel (vFC). To enhance the value of vFC, QLogic provides vFC QoS (see "Port: QoS" on page 5-8).
• In ESXi 5.x, VMware provides storage input/output control (SIOC) VM prioritization (see "Understanding SIOC and Datastores" on page 4-7).

Mitigating Congestion Between Host Server Ports and Target Ports

Congestion between a host server's initiator Host Bus Adapter port and a storage device's target port is, in most instances, the result of a bottleneck caused by the cumulative bandwidth demand of the host server's resident VMs exceeding the bandwidth of the paths connecting the two. One solution to this bottleneck is to increase the connection bandwidth by using the highest bandwidth Host Bus Adapter port available. Currently, the highest bandwidth Host Bus Adapter port is a QLogic 2600 Series 16Gb Host Bus Adapter port. However, with high levels of virtualization, which can dramatically increase traffic between a host server and a shared target, even a 16Gb Host Bus Adapter port may eventually prove insufficient.

Unfortunately, simply connecting multiple ports between a host server and a target presents multiple paths to the target, which causes the host OS to identify the different paths as different targets. To allow multiple paths between hosts and their targets, both Microsoft and VMware support multipath connections (see "Microsoft Windows 2012 Multipath I/O" on page 2-8 and "VMware ESXi 5.x Fibre Channel Multipathing" on page 2-11). In addition to failover, multipathing can provide load balancing across multiple paths, which multiplies the bandwidth between the two points by the quantity of paths connecting them.

Avoiding Obsolete Drivers or Firmware

To implement the latest problem solutions and performance enhancements, ensure that the deployed Host Bus Adapters are running the latest versions of QLogic firmware and device drivers. QLogic regularly updates its drivers and firmware with performance enhancements and bug fixes, and provides sophisticated tools to simplify deployment of mass updates. For more information, see Chapter 5 Management and Monitoring.

Mitigating Host Interrupt-Induced Congestion

High-bandwidth I/O can cause systemic performance degradation because of the high rate of I/O interrupts it generates. QLogic provides zero interrupt operation (ZIO) to mitigate the systemic impact of high-bandwidth I/O interrupts. For more information, see "ZIO Operation" on page 4-4.

Microsoft Windows 2012 Multipath I/O

Multipath I/O (MPIO) combines the multiple paths between a host and attached storage LUNs into what appears to the host OS as a single path. MPIO provides high availability through link failure detection and failover initiation, and provides performance enhancement by load balancing the traffic between the host and its LUN across multiple paths.

Microsoft provides the MPIO framework that allows storage providers to develop multipath solutions that contain the hardware-specific information required to optimize connectivity with their storage arrays. The MPIO modules are called device-specific modules (DSMs). Microsoft provides default DSMs that can work in instances where the storage array is SPC-3 compliant and the vendor does not provide a proprietary DSM.

QLogic and Microsoft Corp. both highly recommend that you use the DSM from your storage device vendor, if the vendor provides a proprietary DSM. Install the vendor-provided DSM using the vendor-provided installation software. If the vendor does not provide DSM installation software, you can install the DSM with the Microsoft DSM installation.

Microsoft's default DSM provides the following configurable options for failover and load balancing:

• Failover Only DSM
• Round Robin DSM
• Round Robin with Subset DSM
• Least Queue Depth DSM
• Weighted Paths DSM

Failover Only DSM

The failover only DSM provides one or more alternative standby paths. All traffic flows down a single active path. If the primary path fails, an alternate path becomes the active path and traffic flows through the alternate path. Because only a single path is active, there is no load balancing; consequently, the bandwidth of the connection is limited to the bandwidth of the single active path. The failover only DSM attributes include the following:

• No load balancing
• Single active path that sends all I/Os
• Additional paths are standby paths

• If the active path fails, a standby path is used.
• If the original failed path recovers and failback is enabled, traffic returns to the original path.

NOTE: QLogic does not recommend using the failover only DSM, because it wastes the bandwidth available across redundant paths between an initiator and target.

Round Robin DSM

The round robin DSM uses all of the available paths between an initiator and a target with a load-balancing algorithm that sends traffic down each path in its turn, around a rotational sequence. Because the load-balancing algorithm uses all available paths, the available bandwidth between the initiator and target is the sum of the bandwidth of the paths included in the rotation. The round robin DSM:

• Uses and balances all available paths for MPIO.
• Is the default DSM that is chosen when:
  - The storage controller follows an active-active model.
  - The management application does not specify a load-balancing policy.

QLogic recommends that you use the round robin method when all paths are active, all paths have the same number of hops, and the I/O transactions are of generally similar lengths.

Round Robin with Subset DSM

The round robin with subset DSM uses only a subset of the available paths between an initiator and a target in a round robin rotation. The paths not in the round robin subset are held in standby, otherwise known as not active. These not-active standby paths are used only if all of the active paths in the rotation are unavailable. If at any time only one of the active paths used in the rotation is available, only that single path is used; the algorithm does not substitute standby paths into the round robin rotation. In the round robin with subset DSM:

• An application can specify:
  - A subset of paths to be used in the round robin.
  - A set of other paths to be standby paths.
• The DSM uses the active paths for as long as at least one of the paths is available.

• The DSM uses a standby path only when all of the active paths have failed.

QLogic recommends that you use the round robin with subset method when at least some of the available paths are to be held as not-active standby paths, but the paths otherwise have the same hop count and the I/O traffic is of similar length.

Least Queue Depth DSM

The least queue depth DSM uses the path whose queue has the fewest currently outstanding I/O requests. This method addresses the difference in time taken for longer and shorter I/O transactions to complete. For example, a short I/O transaction of 512 bytes takes much less time to complete than a large I/O transaction of 1MB. In the least queue depth DSM:

• If the rotation is strictly round robin, a situation can occur where the next path in the rotation is still transferring a previous large transaction while a more recently used path could have completed more transactions because it transferred one or more short transactions.
• In that case, the round robin scheme skews the loading of the paths because of the asymmetry of the transaction lengths.
• The least queue depth DSM compensates for transaction length deltas and more efficiently pushes traffic onto paths as they become available, not just when the paths come up in the rotation.

QLogic recommends that you use the least queue depth scheme if your traffic profile spans transactions of widely differing lengths.

Weighted Paths DSM

The weighted paths DSM assigns to each path a weight that indicates the relative priority of that path. The larger the number (weight), the lower the priority. The DSM chooses the least-weighted path from among the available paths.

QLogic recommends that you use the weighted paths scheme if your environment consists of paths with differing path lengths because the policy accounts for paths with different hop counts.
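Where the in-box Microsoft DSM is used, the MPIO feature and its default load-balance policy can also be managed from Windows PowerShell. The following sketch is an illustration only; it assumes the MPIO feature and its PowerShell module on Windows Server 2012, and the policy value should be chosen per the DSM guidance above (for example, RR for round robin or LQD for least queue depth).

# Enable the MPIO feature (a reboot may be required)
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# List Fibre Channel devices that are eligible to be claimed by MPIO
Get-MPIOAvailableHW

# Set the system-wide default load-balance policy for the Microsoft DSM
# (FOO = failover only, RR = round robin, LQD = least queue depth)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Show the disks that MPIO has claimed and their paths
mpclaim.exe -s -d

Storage vendors that supply a proprietary DSM typically provide equivalent configuration tools; prefer those where they exist, as recommended above.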

VMware ESXi 5.x Fibre Channel Multipathing

VMware ESXi 5.x default support for Fibre Channel multipathing combines the following:

• Native Multipathing Plug-In (NMP): a generic VMware multipathing module
• Path Selection Plug-In (PSP): also called Path Selection Policy, a sub plug-in of the NMP module that handles path selection for a specific device

VMware also supports the installation of vendor-provided Multipath Plug-Ins (MPPs) that can operate concurrently with the NMP and PSP.

If a storage vendor provides an MPP or a PSP for their storage device, VMware and QLogic recommend that the storage device user install and use the vendor-provided MPP or PSP.

The NMP supports the following VMware PSPs:

• Fixed Path Selection Policy (not recommended)
• Most Recently Used Path Selection Policy (not recommended)
• Round Robin Path Selection Policy (recommended)

Fixed Path Selection Policy

The Fixed Path Selection Policy PSP has a single active path; all other paths are on standby. The active path changes only if that path becomes unavailable. The fixed path selection policy:

• Uses the preferred path, if a preferred path is configured.
• Uses the first working path discovered, if a preferred path is not configured.
• Does not use failback.
• Is the default policy for most active-active storage devices.
• Provides failover only and does not use the bandwidth of standby paths. If the bandwidth demand exceeds the capacity of the single path, performance suffers.

QLogic does not recommend the fixed path selection method because it holds any additional paths in standby and does not use the additional bandwidth of the standby paths.

Most Recently Used Path Selection Policy

The Most Recently Used (MRU) Path Selection Policy PSP also has a single active path; all other paths are on standby. The active path changes only if that path becomes unavailable.

The MRU path selection policy:

• Has the host select the MRU path as the active path.
• Does not use failback.
• Is the default policy for most active-passive storage devices.
• Provides failover only and does not use the bandwidth of standby paths. If the bandwidth demand exceeds the capacity of the single path, performance suffers.

QLogic does not recommend the MRU path selection method because it holds any additional paths in standby and does not use the additional bandwidth of the standby paths.

Round Robin Path Selection Policy

The Round Robin Path Selection Policy PSP has multiple active paths. The round robin path selection policy:

• Rotates traffic around the active paths.
• Uses load balancing across active paths.
• Supports active-active and active-passive arrays.

QLogic recommends that you use round robin selection as the policy for storage devices that can support it. Because this policy uses load balancing, the aggregated VM bandwidth demand is spread across the active available paths, which multiplies the available bandwidth capacity by the quantity of active paths (bandwidth × quantity of paths). A command-line sketch of applying this policy follows.
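The round robin PSP can be applied from the ESXi command line as well as from the vSphere Client. The following esxcli sketch is illustrative only: the SATP name shown is a placeholder, the naa device identifier must be replaced with a real one, and you should confirm the correct SATP and any array-specific round robin settings with your storage vendor before changing policies.

# Show each device's current SATP and PSP assignment
esxcli storage nmp device list

# Make round robin the default PSP for a given SATP (placeholder SATP shown)
esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR

# Or change the PSP for a single device (replace the placeholder naa identifier)
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR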

Considerations for Tape Access Through a Host Bus Adapter

For tape access, QLogic recommends that you use a separate Host Bus Adapter and switch zone in the fabric. This recommendation is based on the following:

• Separate zoning keeps streaming tape traffic segregated from standard disk traffic in the SAN.
• Zoning out the tape Host Bus Adapter and the tape port on the switch decreases the impact of tape reset and rewind issues on disk traffic.

Checking the Interoperability of SAN Components

Today's IT environments incorporate products from many different vendors and sources, with the possibility of millions of combinations. To take advantage of this wide breadth of IT products, you must determine the best product for the task at hand, whether servers, storage, operating system, or other components.

QLogic recommends that you consider the following interoperability questions:

• Which products have been rigorously tested with all of the others and are known to work properly?
• What configuration information and best practices are available to ensure that these products actually work together?
• Where can you go for support of a mixed-vendor installation?

3 Host Bus Adapter Installation

This chapter lists best practices that QLogic recommends to ensure that your QLogic Host Bus Adapter is installed correctly, that the right drivers are loaded, and that each of these steps is completed without error. Sections include the following:

• Safely Handling the Host Bus Adapter
• "Choosing a PCIe Slot" on page 3-2
• "Verifying the Host Bus Adapter Installation" on page 3-3
• "Verifying That the Host Can See the Host Bus Adapter from the Host OS" on page 3-4
• "Verifying That the Host Can See the LUNs" on page 3-5
• "Installing the Host Bus Adapter Drivers" on page 3-5
• "Understanding the LED Scheme for QLogic Host Bus Adapters" on page 3-6

Safely Handling the Host Bus Adapter

To minimize the possibility of ESD-related damage, QLogic strongly recommends using both a workstation anti-static mat and an ESD wrist strap, and that you observe the following precautions:

• Leave the Host Bus Adapter in its anti-static bag until you are ready to install it in the system.
• Always use a properly fitted and grounded wrist strap or other suitable ESD protection when handling the Host Bus Adapter, and observe proper ESD grounding techniques.
• Hold the Host Bus Adapter by the edge of the printed circuit board (PCB) or mounting bracket. Never hold the adapter by the connectors.
• Place the Host Bus Adapter on a properly grounded anti-static work surface pad when the Host Bus Adapter is out of its protective anti-static bag.

Choosing a PCIe Slot

The choice of which PCIe slot to install a QLogic Host Bus Adapter into depends on:

• The QLogic Host Bus Adapter (QLE25xx or QLE26xx) being installed.
• The PCIe bus version present in the host server.
• The slot connector's physical size and electrical connections.

PCIe transfers data over multiple high-speed serial links, called lanes, that operate in parallel. PCIe slot connectors are available in different physical sizes depending on how many lanes the connector can support. Lane considerations include:

• Physical connector sizes range from one to sixteen lanes.
• A physical PCIe connector is not required to be electrically connected to the maximum number of lanes it can support.
• For example, an ×8 physical connector that can accommodate eight lanes may actually be electrically connected as an ×4 slot with only four lanes.

QLogic Host Bus Adapters are all physically sized for ×8 physical connectors. Both the QLE25xx and QLE26xx Host Bus Adapters can support either eight or four lanes, depending on the generation of PCIe bus into which they are plugged.

CAUTION: Never assume that because a Host Bus Adapter physically fits into a connector, the connector can fully support the Host Bus Adapter. Before installing the adapter, QLogic strongly advises that you always check which adapters the PCIe connector electrically supports.

• The most convenient place to look is the silkscreen markings on the motherboard PCB; the markings show the electrical connections to the PCIe connectors.
• If your server is the rare exception that does not show the PCIe channel connection in a PCB silkscreen, review its associated documentation.

To achieve maximum I/O performance and the lowest power utilization, QLogic recommends that you choose a PCIe slot with the following electrical characteristics:

• QLE25xx Host Bus Adapter: PCIe Gen 2 ×4 or greater
• QLE26xx Host Bus Adapter: PCIe Gen 3 ×4 or greater


Verifying the Host Bus Adapter Installation

QLogic recommends that you verify every Host Bus Adapter installation to guarantee that the Host Bus Adapter and drivers have been installed successfully and are working properly.

Verifying Adapter Connectivity in a Preboot Environment

Use the preboot interface to determine whether the Host Bus Adapter is operational following installation. QLogic adapters support Fast!UTIL, UEFI, and FCode preboot environment interfaces. The preboot environment interface type depends on the vendor server in use.

QLogic recommends that you use the UEFI interface in a server that supports both UEFI and BIOS.

Installing QLogic Firmware, Device Driver, and Management Tools

To ensure the optimal performance of the QLogic Host Bus Adapter, use the latest firmware, device driver, and tools for your OS.

Installation in a Microsoft Windows 2012 Environment

QLogic provides the Windows SuperInstaller to install the drivers and tools required to access and manage a QLogic Host Bus Adapter. Use the QLogic Windows SuperInstaller to install:

• One or more device drivers for the adapter
• The QConvergeConsole CLI management tool
• Local agents that can interface locally or remotely with the QConvergeConsole GUI management tool

QLogic recommends that you use the SuperInstaller in Windows environments because it simplifies the software installation process. The QLogic SuperInstaller installs on a local host all of the software necessary for remote management of all adapters present on the host.

About QConvergeConsole (GUI)

For additional management capability in a Windows environment, QLogic also recommends installing QConvergeConsole. This tool provides a graphical representation of the Host Bus Adapter management interface.

QConvergeConsole is not included in the SuperInstaller; it requires a separate download and installation. QConvergeConsole can access and control the adapters on both a local host and a remote system. You can install the QConvergeConsole GUI in one system and manage the Host Bus Adapters on many other systems through an Internet connection.

Installation in a VMware ESXi 5.x Environment

Use the VMware ESXCLI interface to install the drivers and management tools in an ESXi 5.x environment. QLogic provides the QConvergeConsole Plug-in for VMware vCenter Server to manage QLogic Host Bus Adapters in ESXi 5.x environments. The plug-in is installed into the vCenter Server® and operates with CIM providers that are installed into each ESXi 5.x host that has a QLogic Host Bus Adapter installed. This arrangement allows one vCenter Server to manage multiple QLogic Host Bus Adapters installed across the data center.

For instructions on using ESXCLI to install QLogic device drivers and the QConvergeConsole Plug-in for VMware vCenter Server for ESXi 5.x, refer to the drivers' Read Me documents and the User's Guide Fibre Channel Adapter 2600 Series (part number FC0054609-00).

In ESXi 5.x environments, QLogic strongly recommends that you install the QConvergeConsole Plug-in for VMware vCenter Server in your vCenter Server and CIM providers in each ESXi 5.x host that uses a QLogic Host Bus Adapter.

Verifying That the Host Can See the Host Bus Adapter from the Host OS

To verify that the host OS can see the QLogic Host Bus Adapter, do one of the following, depending on your OS:

• In Windows 2012, ensure that the QLogic Host Bus Adapter is visible in the Windows Device Manager under SCSI and RAID Controllers.
• In VMware ESXi 5.x:
  1. Log into the vSphere Client that is connected to the vCenter Server.
  2. Navigate to the Configuration page for the host.
  3. Click the Configuration tab.
  4. On the Configuration page, select Storage Adapters.
  5. Under Storage Adapters, click the QLogic 2500 or 2600 Series adapter.
  6. Verify that the QLogic Host Bus Adapter appears in the list of adapters.
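As a quick cross-check outside the GUI tools, the adapters can also be listed from the command line. The commands below are a sketch only; they assume the in-box WMI storage classes on Windows 2012 and the standard esxcli namespace on ESXi 5.x.

# Windows 2012 (PowerShell): list SCSI/Fibre Channel controllers and their status
Get-WmiObject -Class Win32_SCSIController | Select-Object Name, Status

# VMware ESXi 5.x (ESXi Shell or SSH): list storage adapters; QLogic ports appear as vmhbaN entries
esxcli storage core adapter list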


Verifying That the Host Can See the LUNs

To verify that the host OS can see all of the LUNs that are presented to the QLogic Host Bus Adapter ports, do one of the following, depending on your OS:

• In Windows 2012, do one of the following:
  - Navigate to Computer Management, then to Disk Management, and verify that the LUNs can be seen as disks.
  - Use the QConvergeConsole management tool to view the LUNs.
• In VMware ESXi 5.x:
  1. Log into vCenter® and navigate to the Configuration page for the host.
  2. Select Storage Adapters.
  3. Select one or more QLogic adapter ports, and then verify that the LUNs can be seen.
  4. (Optional) If new LUNs may have been added since the host was booted, perform a rescan as follows:
     a. In the upper right corner of the Configuration page, right-click the adapter port.
     b. On the shortcut menu, click either Scan or Rescan All.
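The same LUN-visibility checks can be scripted. The following sketch assumes standard in-box tooling (the Windows Storage module and the esxcli storage namespace) and is intended only as an illustration.

# Windows 2012 (PowerShell): list disks, including SAN LUNs, with size and status
Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus

# VMware ESXi 5.x: rescan all adapters, then list the SCSI devices (LUNs) that the host sees
esxcli storage core adapter rescan --all
esxcli storage core device list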

Installing the Host Bus Adapter Drivers

For best performance, ensure that the latest Host Bus Adapter drivers are installed on all Host Bus Adapters in your environment. Download the latest driver versions from the QLogic Downloads and Documentation page: http://driverdownloads.qlogic.com

• The QLogic QLE25xx 8Gb Fibre Channel device driver follows the unified driver model, where the firmware is bundled along with the driver package. The bundle model prevents any possible mismatch between the driver and firmware versions, which in turn reduces the number of software components that the SAN administrator must manage.
• The QLogic QLE26xx 16Gb Fibre Channel device driver requires a separate firmware installation.

QLogic recommends that you download and install the current versions of both the QLE26xx device driver and the firmware (Flash) package at the same time.

Understanding the LED Scheme for QLogic Host Bus Adapters

To identify the QLogic Host Bus Adapter LEDs, and to understand how the LED color and blink patterns indicate Host Bus Adapter status and connected components, see Appendix A QLogic Host Bus Adapter LEDs. After you have successfully installed the QLogic Host Bus Adapter, use this information as a starting point for your SAN link troubleshooting exercises.

4 Host Bus Adapter Performance Tuning

This chapter provides tips for improving the performance of QLogic Fibre Channel Host Bus Adapters. It also describes the different workloads generated by enterprise applications, their impact on performance, and how performance can be addressed through configuration, management tools, and parameter modifications. The following sections provide details:

• Host Bus Adapter Performance
• "Improving Host Bus Adapter Performance in Windows 2012" on page 4-5
• "Improving Host Bus Adapter Performance in VMware ESXi 5.x" on page 4-7
• "Monitoring Performance" on page 4-16
• "SAN Performance Recommendations" on page 4-19

Host Bus Adapter Performance

The following sections focus on tuning Host Bus Adapter performance in Windows 2012 and VMware ESXi 5.x environments.

Understanding Application Workloads

In general, there are two types of data workload (processing):

• Transaction-based (online transaction processing: OLTP)
• Throughput-based (online analytical processing: OLAP)

These two workloads are very different, and are affected by different factors. Knowing and understanding the demands of your environment is an important part of successfully configuring your SAN for performance.

Transaction-based workloads: Transactions are individual I/Os that each transfer a block of data. The data block size can vary from a minimum of 512 bytes to several megabytes. Transaction-based workloads tend to send high numbers of transactions with relatively small or moderate block sizes. OLTP is an example of a transaction-based workload.

Throughput-based workloads: Throughput is the number of bytes that can be transferred to a storage device per unit of time. The maximum throughput of a Host Bus Adapter port is a fixed amount. Throughput-based workloads tend to transfer low numbers of transactions with large blocks of data. OLAP is an example of a throughput-based workload.

OLTP—Transaction-Based Processes (IOPS)

OLTP applications use small data blocks randomly distributed across the storage. Because of the small data block characteristic of OLTP applications, high-performance OLTP applications generate large numbers of I/O transactions. These transactions:

• Demand a proportional amount of space in the Host Bus Adapter queues.
• Burden the host server with a proportional number of I/O interrupts.
• Use large amounts of the server's CPU capacity.
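A quick calculation shows how the two workload metrics relate; the block sizes below are illustrative assumptions, and the per-port figures (826 MBps and 250,000 IOPS for an 8Gb port) come from Table 2-2.

Throughput (MBps) ≈ IOPS × average block size
Transaction-based example: 250,000 IOPS × 4KB ≈ 1,000 MBps, more than one 8Gb port's 826 MBps; at a 4KB block size the port saturates at roughly 200,000 IOPS.
Throughput-based example: 1,600 IOPS × 512KB ≈ 800 MBps, so a relatively small number of large transfers fills the same port.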

OLAP—Throughput-Based Processes (MBps)

Throughput-based workloads are prevalent with applications or processes that require large amounts of data to be transferred, and generally use large, sequential blocks to reduce disk latency. Throughput-based workloads:

• Require less Host Bus Adapter queuing.
• Impose less interrupt burden.
• Have a relatively low host CPU overhead.

Generally, OLAP applications are most affected by the bandwidth between the host initiator and the target.

Addressing OLTP and OLAP Workloads

While no single enterprise application can be classified as having a solely OLTP- or OLAP-based workload, a large percentage of an application's workload is either one or the other. However, in virtualized environments with highly mobile VMs, the reality is that host servers must accommodate a dynamically shifting mix of both types of workloads. OLTP-type applications tend to be more demanding of host server resources and, consequently, most techniques used to optimize host-to-storage performance address issues that are most prevalent in OLTP-type workloads:

• Address the OLTP queuing demand by adjusting the Host Bus Adapter queue depth.
• Mitigate the OLTP interrupt burden by reducing the ratio of interrupts to transactions.

• Consider the OLTP CPU bandwidth demand when provisioning VMs where OLTP applications operate.
• Accommodate OLAP-type workloads by ensuring adequate host-to-storage bandwidth.

Host Bus Adapter Parameters That Impact Performance

Table 4-1 lists the available techniques and programmable Host Bus Adapter parameters. The table provides a brief description of the parameters, and shows their ranges and default values. Modifying these parameters may impact Host Bus Adapter performance.

Table 4-1. Modifying Host Bus Adapter Parameters

Technique: Setting frame size
  Description: Specifies the size of a Fibre Channel payload per I/O.
  Range: 512–2048
  Default: 2048
  OS support: Windows 2012 and VMware ESXi 5.x

Technique: Configuring Fibre Channel data rate
  Description: Specifies the Host Bus Adapter data rate. When set to Auto, the adapter auto-negotiates the data rate with the connecting SAN device.
  Range: 1 = Auto, 2 = 4Gb, 3 = 8Gb, 4 = 16Gb
  Default: Auto
  OS support: Windows 2012 and VMware ESXi 5.x

Technique: Setting maximum queue depth
  Description: Specifies the maximum number of I/O commands allowed to execute or queue on a Host Bus Adapter port (per-LUN queue depth for each port).
  Range: 1–65535
  Default: 32 (VMware ESX 4.x); 64 (VMware ESXi 5.x)
  OS support: VMware ESX 4.x and VMware ESXi 5.x

Technique: Setting ZIO mode
  Description: Determines the behavior of the zero interrupt operation (ZIO) adaptive algorithm.
  Range: 0 = Disable ZIO; 5 = Queue entries are placed on the corresponding queue with minimal interrupts; 6 = Enable ZIO mode 6. (For more information, see "ZIO Operation" on page 4-4.)
  Default: 5 (Windows 2012); 6 (VMware ESXi 5.x)
  OS support: Windows 2012 and VMware ESXi 5.x

Technique: Setting ZIO timer
  Description: Sets a time period, in 100ms increments, for the interrupt, after which outstanding interrupts are executed even if the mode requirements are not met.
  Range: 0–255. When this value is 0, the driver defers the interrupt delay timer selection to the firmware; the firmware default for the interrupt delay timer is 2h.
  Default: 0 (Windows 2012); 1 (VMware ESXi 5.x)
  OS support: Windows 2012 and VMware ESXi 5.x
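In ESXi 5.x, the maximum queue depth in Table 4-1 is exposed as a driver module parameter. The following sketch is an assumption-laden illustration: the module name differs by driver (for example, qla2xxx for the vmklinux driver or qlnativefc for the native driver), so verify yours with esxcli system module list, and note that the host must be rebooted for the change to take effect.

# Set the per-LUN queue depth for the QLogic Fibre Channel driver (module name is an assumption; verify first)
esxcli system module parameters set -m qla2xxx -p "ql2xmaxqdepth=64"

# Confirm the value that will be applied after the next reboot
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth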

ZIO Operation

ZIO is a QLogic-provided SCSI interrupt mitigation technique that uses an adaptive algorithm to post a single interrupt for multiple SCSI I/O completions, instead of separately posting all the SCSI I/O completion interrupts. This technique reduces the total number of SCSI interrupts without affecting performance and reduces the systemic performance impact of SCSI I/O interrupts. The ZIO user interface provides the following settings:

• The Operation Mode setting determines the behavior of the adaptive algorithm (see Table 4-2).
• The Interrupt Delay Timer setting sets a time period after which outstanding interrupts are executed, even if the mode requirements are not met. This guarantees that outstanding interrupts never wait an indeterminate period of time.

Table 4-2. ZIO Operation Modes

Mode | Algorithm Operation
0    | Disable ZIO.
1–4  | Reserved.
5    | Enable ZIO mode 5. Response queue entries are placed on the corresponding queue with minimal interrupts. This is the default setting in Windows 2012.
6    | Enable ZIO mode 6. This mode is similar to mode 5, except that an interrupt is generated when the firmware has no active exchanges (even if the interrupt delay timer has not expired). This is the default setting in VMware ESXi 5.x.
7–F  | Reserved.

Completed SCSI commands are placed on the response queue without interrupting the host until one of the following events occurs:

• The interrupt delay timer expires.
• There is no outstanding active exchange, indicating that there is no active SCSI I/O (optional, ZIO mode 6 only).

Table 4-3 shows the recommended ZIO Operation Mode and Interrupt Delay Timer settings by OS for the QLogic Host Bus Adapters.

Table 4-3. Recommended ZIO Settings

Windows 2012, QLE25xx: Operation Mode 5, Interrupt Delay Timer 0

Windows 2012, QLE26xx: Operation Mode 6, Interrupt Delay Timer 2

ESXi 5.x, QLE25xx: Operation Mode 6, Interrupt Delay Timer 1

ESXi 5.x, QLE26xx: Operation Mode 6, Interrupt Delay Timer 1

Improving Host Bus Adapter Performance in Windows 2012

QLogic recommends that you configure the ZIO settings to reduce the server-wide performance impact that can occur with large numbers of SCSI I/O interrupts.

This section shows you how to change the ZIO operation mode and interrupt delay timer settings using either QConvergeConsole GUI or CLI.

To change the ZIO mode and timer using QConvergeConsole (GUI):

1. In the QConvergeConsole system tree in the left pane, expand the adapter node, and then select the port node.
2. In the right pane, click the Parameters tab.


3. On the port Parameters page, click the Advanced HBA Parameters tab.
4. On the Advanced HBA Parameters page under Configure Port Advanced Parameters, complete the following for ZIO (see Table 4-3 for QLogic recommended values):
   • Operation Mode
   • Interrupt Delay Timer (100ms)

Figure 4-1 shows an example of changing the ZIO options in QConvergeConsole.

Figure 4-1. Configuring ZIO in QConvergeConsole

To change the ZIO mode and timer using QConvergeConsole CLI:

1. On the QConvergeConsole CLI Main Menu, type the number corresponding to each of the following menu options, in this order:
   Adapter Configuration
   Fibre Channel Adapter
   HBA Parameters
   Configure HBA Parameters

2. On the Configure Parameters Menu, select the options corresponding to the following for ZIO (see Table 4-3 for QLogic recommended values):
   • Operation Mode
   • Interrupt Delay Timer (100ms)


Figure 4-2 shows an example of changing the ZIO options from the QConvergeConsole CLI Configure Parameters Menu.

Figure 4-2. Configuring ZIO in QConvergeConsole CLI

Improving Host Bus Adapter Performance in VMware ESXi 5.x

This section provides the following best practices for improving Host Bus Adapter performance in a VMware ESXi 5.x environment:

• “Understanding SIOC and Datastores” on page 4-7
• “Determining Host Bus Adapter Datastore Congestion” on page 4-9
• “Tuning the Underlying Physical Interface” on page 4-10
• “Changing the ZIO Mode and Timer in ESXi 5.x” on page 4-15

Understanding SIOC and Datastores

As of ESXi 4.1, VMware has provided storage input/output control (SIOC). (Prior to ESXi 4.1, VMware used adaptive queuing for congestion control; adaptive queuing had major deficiencies related to host throttling.)

QLogic highly recommends that you enable SIOC on ESXi 5.x.


SIOC manages I/O access to a datastore by controlling the VM I/O queues across all ESXi servers that share the datastore. SIOC allows you to set the amount of access shares that a VM is allotted to a shared datastore as either low, normal, or high. This allocation imposes a fair distribution of a datastore’s capacity:

• SIOC grants a percentage of a datastore’s I/O slots to each VM that shares the datastore, relative to each VM’s share allotment.
• The percentage depends on the total quantity of VMs sharing the datastore and their individual allotments.

SIOC also allows you to set a per-datastore congestion threshold value at which to begin enforcing the prescribed VM share allotments across the data center. Below the congestion threshold, VMs are allowed to be as “noisy” as they want, but after the threshold is reached, each VM’s individual allotment is strictly enforced. This restricts “noisy neighbors” from imposing on the neighborhood.
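As a hedged worked example (the share values 500, 1,000, and 2,000 for low, normal, and high are common vSphere defaults and are not stated in this guide): if one VM set to high and two VMs set to normal share a datastore, the total is 2,000 + 1,000 + 1,000 = 4,000 shares, so once the congestion threshold is crossed the high VM is entitled to 2,000 ÷ 4,000 = 50 percent of the datastore’s I/O slots, and each normal VM to 25 percent.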

Table 4-4 lists the VMware-recommended settings for SIOC.

Table 4-4. SIOC Congestion Threshold

Solid-state disk (SSD): 10–15ms

Fibre Channel: 20–30ms

Serial attached SCSI (SAS): 20–30ms

Serial advanced technology attachment (SATA): 30–50ms

Auto-tiered storage: Vendor-recommended value

VMware recommends the following best practices for SIOC:

• Do not mix-and-match SIOC-enabled and SIOC-disabled hosts (on native or older ESXi hosts).
• Avoid different settings for datastores that share the same underlying resources.
• When altering the values in the ranges shown in Table 4-4, note that higher congestion threshold values tend to favor throughput-based workloads, while lower settings favor transaction-based workloads (see “Understanding Application Workloads” on page 4-1).


In ESXi 5.x, when SIOC is enabled on a datastore, VMware provides graphical tracking of the value on which SIOC bases its congestion detection. The Storage I/O Control Normalized Latency value, which reasonably represents the loading condition of the datastore, is compared to the Congestion Threshold, which represents the loading point where the datastore is considered congested, where:

• The calculation of Storage I/O Control Normalized Latency is based on latency experienced across VMs using a shared datastore.
• The latencies are normalized to compensate for latency skew introduced by transaction sizes.

Because Storage I/O Control Normalized Latency is the value by which VMware measures a datastore’s loading conditions, you should consider it the key measurement of the ongoing performance of a datastore as its data center evolves and the demand on the datastore changes. SIOC ensures fair access to a shared datastore, but does not ensure good performance for any datastore. Datastore performance depends more on the physical performance of the device where the datastore’s LUN resides. Physical performance is based on the following:

• Quantity of VMs attached and their distribution on ESXi hosts
• Paths configured between the ESXi hosts and the datastore
• Individual I/O demands of the VMs

To summarize, SIOC makes the most of the available I/O capacity by distributing it fairly; if that capacity is insufficient, it still must be increased.

QLogic recommends that you continually evaluate the performance aspects of datastores so that, as environments evolve over time, the cumulative bandwidth demands of additionally attached VMs can be accommodated. Storage I/O Control Normalized Latency provides the ideal vantage point from which to conduct this ongoing evaluation.

Determining Host Bus Adapter Datastore Congestion

Use the VMware ESXi esxtop tool to view the current Host Bus Adapter queue utilization while I/O is active on the adapter ports. From the ESXi 5.x console, issue the esxtop -h command to display the esxtop usage help, which provides detailed information on its usage. Figure 4-3 shows the output of the storage device section of esxtop while I/O is active, where:

• DAVG/cmd indicates the latency between this ESXi host and a datastore. DAVG/cmd is sensitive to the I/O loading conditions at the datastore; thus, it is an indicator of congestion at a datastore.


• KAVG/cmd indicates the hypervisor’s contribution to the latency.
• GAVG/cmd indicates the latency experienced at the guest OS.
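If you prefer to capture these counters over time rather than watch them interactively, you can run esxtop in batch mode and review the output later; the interval and sample count below are illustrative values rather than QLogic recommendations:

~ # esxtop -b -d 10 -n 60 > /tmp/esxtop-capture.csv

This records 60 samples at 10-second intervals (roughly 10 minutes), including the DAVG/cmd, KAVG/cmd, and GAVG/cmd values for each storage device, in a CSV file that tools such as Windows Performance Monitor or a spreadsheet can chart.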

Figure 4-3. Using esxtop to Determine Datastore Congestion

Tuning the Underlying Physical Interface

SIOC ensures a controlled distribution of storage bandwidth in a congested environment, and Adaptive Queuing provides a QFULL recovery scheme.

QLogic recommends that you rely on these VMware-provided mechanisms to address datastore congestion.

However, another option is to configure the underlying QLogic Host Bus Adapter maximum queue depth module parameter. This parameter applies across all QLogic physical Host Bus Adapter ports configured in the ESXi host’s physical server.

Understanding Host Bus Adapter Queue Depth

Host Bus Adapter queue depth is a Host Bus Adapter driver parameter that refers to the quantity of Host Bus Adapter SCSI command buffers that can be allocated by an adapter port. Queue depth is the maximum quantity of commands that can be outstanding on a per-LUN basis (per port). When the QLogic driver loads, it registers the value of the queue depth with the SCSI mid-level, which notifies the SCSI layer above the Host Bus Adapter driver of the quantity of commands the adapter driver is willing to accept and process. The default maximum SCSI queue depth for ESXi 5.x is 64, and the allowable values for the adapter queue depth range from 1–65535.


If no LUNs are shared across multiple VMs, then you only need to set the Host Bus Adapter queue depth. However, when a specified LUN can be used for more than one VM, you must also consider the ESXi value Disk.SchedNumReqOutstanding. Setting the adapter queue depth to a value larger than Disk.SchedNumReqOutstanding has no effect because the queue depth available to SCSI is equal to whichever value is the lower of these two settings. SCSI never places more than this quantity of commands in the driver’s queue. If you increase the Host Bus Adapter queue depth, then you also must accordingly increase the Disk.SchedNumReqOutstanding value. Set the Host Bus Adapter queue depth to the same value for all initiators that access the same datastore.

Tuning Host Bus Adapter Queue Depth for Your Environment

When deciding the right value of the Host Bus Adapter queue depth for your environment, consider the effect of the adapter queue depth on other SAN components. This section describes the relationship between the Host Bus Adapter queue depth and the target device queue depth, and discusses whether increasing the queue depth can increase performance without adversely affecting other SAN components.

I/O Loading Fundamentals

Loading is presented from the target device port-loading perspective, with host and Host Bus Adapter factors that affect the storage array’s optimal utilization. The concepts and variables presented are standard SCSI terminology. When determining loading in any mass storage SAN setup, you must understand the factors used; these factors are at the SCSI layer level in both the initiator (host) and the target (device). The following factors provide an I/O flow control mechanism between the initiator and the target:

• The host flow control variable is the queue depth per LUN.
• The device flow control variable is the target port queue depth.


Figure 4-4 shows the placement of these variables in the I/O architecture.

Figure 4-4. ESXi I/O Architecture (the figure shows the per-LUN SCSI queues (q) on each ESXi host, the 8Gb or 16Gb Fibre Channel paths (P) between the hosts and the target port, the target port queue (T), and the LUNs (L) behind the datastore)

The following main variables assess the loading factor on a device port:

P = Quantity of host paths connected to that array device port.
q = Queue depth per LUN on the hosts (for the host port); that is, the maximum quantity of I/Os outstanding per LUN from the host at any specified time. Queue depth is a QLogic Fibre Channel and SCSI driver parameter.
L = Quantity of LUNs configured on the array device port as seen by a specified host path.


T = Maximum queue depth per array device port. T signifies the maximum quantity of I/Os outstanding (that can be handled) on that port at a specified time. The value for T is typically 2,048.
Q = Queue depth per Host Bus Adapter port, meaning the maximum I/Os that an adapter can have outstanding at an instant. This variable is required only if q (queue depth per LUN) is not available.

These variables relate to each other through the following equation:

T ≥ P × q × L

where the target port queue depth must be greater than or equal to the product of host paths, Host Bus Adapter queue depth per LUN, and the quantity of LUNs configured. For heterogeneous hosts connected to the same port on the device, use the following equation:

T ≥ Host OS 1 (P × q × L) + Host OS 2 (P × q × L) + … + Host OS n (P × q × L)

The target port queue depth value (2,048 for many arrays) used in the preceding equations is on a per-port basis; that is, 2,048 outstanding (simultaneous) I/Os at a time. Therefore, one controller with four ports could handle 2,048 × 4 = 8,192 outstanding I/Os at a time without any port overloading, and two of the same controllers could handle 16,384 outstanding I/Os. Check the specifications of your array to determine the exact target port queue depth value.

Configurations that do not conform to the preceding equations result in either under-utilizing the target device port queue (when P × q × L is well below T) or flooding it (when P × q × L exceeds T). A QFULL condition is the result of flooding the target device queue. QFULL is an I/O throttling command that is sent by the storage array SCSI layer to a Host Bus Adapter port. The QFULL command notifies the port that its I/O processing limit has been reached and it cannot accept any more I/Os until it completes its current set. If you suspect flooding, you can enable the Extended Error Logging flag in the Host Bus Adapter parameters to view extended logging and see if a QFULL command is being received by the adapter.

Setting the Host Bus Adapter Queue Depth

The QLogic Host Bus Adapter queue depth is defined by the variable ql2xmaxqdepth. Note that any increase in ql2xmaxqdepth should be guided by the equation described in “I/O Loading Fundamentals” on page 4-11. The default value for ql2xmaxqdepth for ESXi 5.x is 64.
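A brief worked example (the path, LUN, and host counts are illustrative, not taken from this guide): for an array port with T = 2,048 shared by four ESXi hosts, each with P = 2 paths to the port and L = 8 LUNs visible per path, the per-LUN queue depth must satisfy 4 × (2 × q × 8) ≤ 2,048, so q ≤ 32. Leaving q at the ESXi default of 64 would nominally oversubscribe this port by a factor of two and invite QFULL conditions.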


To change the queue depth of a QLogic Adapter in VMware ESXi 5.x:

1. Log on to the VMware ESXi 5.x console as root.
2. Issue the appropriate ESXCLI command for your OS version.
   For ESXi 5.0 and 5.1:
   esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=0x40
   For ESXi 5.5:
   esxcli system module parameters set -m qlnativefc -p ql2xmaxqdepth=0x40
   (The value 0x40 is hexadecimal for 64, the default.)

3. (Optional) Verify the QLogic Host Bus Adapter queue depth change by one of the following methods:
   • View the entries for the QLogic Fibre Channel adapter driver in the proc file system (procfs). (This method is not available on ESXi 5.5.)
   • Issue the appropriate command for your OS version.
     For ESXi 5.0 and 5.1:
     esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth
     For ESXi 5.5:
     esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth
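Because the effective per-LUN queue depth is the lower of ql2xmaxqdepth and Disk.SchedNumReqOutstanding, it is worth confirming that ESXi setting at the same time. The commands below are a sketch based on standard VMware tools rather than on this guide; on ESXi 5.5 the setting is applied per device, and the device identifier shown is a placeholder:

For ESXi 5.0 and 5.1 (global advanced setting):
~ # esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
~ # esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding

For ESXi 5.5 (per-device setting):
~ # esxcli storage core device set -d naa.<device_id> --sched-num-req-outstanding=64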

Key Considerations for Tuning the Adapter Queue Depth

Consider the following when tuning the QLogic Host Bus Adapter queue depth parameter, ql2xmaxqdepth:

• Set ql2xmaxqdepth to the same value for all initiators that access the same datastore.
• For QLogic drivers written for ESXi 5.x, ql2xmaxqdepth is set to 64, which is the default ESXi SCSI queue depth.
• Increasing ql2xmaxqdepth in the QLogic Fibre Channel Host Bus Adapter beyond the Disk.SchedNumReqOutstanding value has no effect because the ESXi SCSI layer never requests more than the lower of the two values.


• If the values of Disk.SchedNumReqOutstanding and ql2xmaxqdepth are increased in a server, the result is likely to be an increased quantity of I/O operations being placed in the Host Bus Adapter queue for servicing, which consequently increases the CPU utilization of the host.
• Setting the value of ql2xmaxqdepth from its default to a higher value may increase the congestion at the datastore.

Changing the ZIO Mode and Timer in ESXi 5.x

QLogic recommends that you configure the ZIO settings to reduce the server-wide performance impact that can occur with large numbers of SCSI I/O interrupts.

These procedures show you how to change the ZIO operation mode and interrupt delay timer settings using ESXCLI.

NOTE For ESXi 5.5, the qla2xxx driver has been replaced with the qlnativefc driver.

To determine the current ZIO mode and timer using ESXCLI:

1. Issue the appropriate command for your OS version.
   For ESXi 5.0 and 5.1:
   cat /proc/scsi/qla2xxx/<adapter instance>
   For ESXi 5.5:
   # /usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -i QLNATIVEFC/qlogic -g -k X

Where X is an instance number for each Host Bus Adapter port enumerated in the driver (key value).

2. Search for the strings ZIO mode and Timer. For example, for ESXi 5.0 or 5.1:
   ~ # cat /proc/scsi/qla2xxx/6 | grep ZIO
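A comparable check for ESXi 5.5 is sketched below; the key instance number (1) is a placeholder for whichever instances the qlnativefc driver enumerates on your host:
   ~ # /usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -i QLNATIVEFC/qlogic -g -k 1 | grep -i ZIO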

To change the ZIO mode and timer using ESXCLI:

1. Set the ZIO mode by issuing the appropriate command for your OS version.
   For ESXi 5.0 and 5.1:
   esxcli system module parameters set -m qla2xxx -p ql2xoperationmode=<mode>


   ESXi 5.0 or 5.1 example:
   ~ # esxcli system module parameters set -m qla2xxx -p ql2xoperationmode=6
   For ESXi 5.5:
   esxcli system module parameters set -m qlnativefc -p ql2xoperationmode=<mode>
   ESXi 5.5 example:
   ~ # esxcli system module parameters set -m qlnativefc -p ql2xoperationmode=6
   Where the mode value either disables ZIO (0) or enables it; when ZIO is enabled on ESXi 5.x, it always operates in mode 6 (as in the preceding examples).

2. Set the ZIO interrupt delay timer by issuing the appropriate command for your operating system.
   For ESXi 5.0 and 5.1:
   esxcli system module parameters set -m qla2xxx -p ql2xintrdelaytimer=<timer>
   For ESXi 5.5:
   esxcli system module parameters set -m qlnativefc -p ql2xintrdelaytimer=<timer>
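Both module parameters can also be set in a single invocation, and the changes do not take effect until the driver module is reloaded (in practice, a host reboot). The values below are the Table 4-3 recommendations for ESXi 5.x; note that the -p string replaces the module's entire parameter list, so include every parameter you want to keep (for example, ql2xmaxqdepth) in the same quoted string:

For ESXi 5.0 and 5.1:
~ # esxcli system module parameters set -m qla2xxx -p "ql2xoperationmode=6 ql2xintrdelaytimer=1"

For ESXi 5.5:
~ # esxcli system module parameters set -m qlnativefc -p "ql2xoperationmode=6 ql2xintrdelaytimer=1"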

Monitoring Performance

This section describes the tools and monitors that are available to help you analyze what is happening within your SAN environment. To determine where a performance problem exists or where a potential problem may occur, it is important to gather data from all of the components of the storage solution. A single piece of information can mislead you into misdiagnosing the cause of poor system performance, when another system component is actually causing the problem.

System administrators perceive transaction performance to be poor when the following conditions occur:

• Random read/write operations are exceeding 20ms (without write cache).
• Random write operations are exceeding 2ms with the cache enabled.
• I/Os are queuing up in the operating system I/O stack (due to a bottleneck).


System administrators perceive throughput performance to be poor when the disk capability is not reached, which can stem from the following situations:

• With read operations, read-ahead is being limited, preventing availability of higher amounts of immediate data.
• I/Os are queuing up in the operating system I/O stack (due to a bottleneck).

Most operating systems, Fibre Channel adapters, Fibre Channel switches, and disk arrays provide a host of tools (add-on and built-in) to help you monitor I/O performance.

QLogic recommends breaking down the measurements from these tools into three parts:

• “Gathering Host Server Data” on page 4-17
• “Gathering Fabric Network Data” on page 4-18
• “Gathering Disk Array Performance Data” on page 4-18

Gathering Host Server Data

Most OSs have several built-in tools to measure I/O performance and the effect of this performance on other host server components (CPU consumption, memory utilization, and more). Table 4-5 summarizes the commonly used tools.

Table 4-5. Host Server Data Tools

QConvergeConsole (GUI)
  OS: Windows, Linux®, Solaris®
  Description: QLogic Host Bus Adapter performance monitor
  Key Statistics: Host Bus Adapter port IOPS, MBps, error statistics
  Usage: See User’s Guide QConvergeConsole 2400, 2500, 2600, 3200, 8100, 8200, 8300, 10000 Series (part number SN0054669-00)

QConvergeConsole CLI
  OS: Windows, Linux, Solaris
  Description: QLogic Host Bus Adapter performance monitor
  Key Statistics: Host Bus Adapter port IOPS, MBps, error statistics
  Usage: See User’s Guide QConvergeConsole CLI 2400, 2500, 2600, 3200, 8100, 8200, 8300 Series (part number SN0054667-00)

Performance Monitor (perfmon)
  OS: Windows
  Description: Windows built-in real-time performance monitoring tool; ability to set alerts for various thresholds
  Key Statistics: CPU, disk, memory, network
  Usage: Windows Control Panel > Administrative Tools > Performance Monitor; or by issuing perfmon at the command line

esxtop
  OS: VMware
  Description: Displays ESXi server resource utilization statistics in real time
  Key Statistics: CPU, disk, memory, network
  Usage: Access from /usr/bin/esxtop

QConvergeConsole Plug-in for VMware vCenter Server
  OS: VMware
  Description: Displays performance charts per VM and host server
  Key Statistics: Virtual machine CPU, disk, memory, network statistics
  Usage: See User’s Guide QConvergeConsole Plug-ins for VMware vSphere (part number SN0054677-00)

Gathering Fabric Network Data

One of the easiest ways to measure and monitor I/O performance in a SAN is to look at I/O statistics at the heart of the SAN: the Fibre Channel switch.

QLogic recommends looking at the Fibre Channel switch statistics as the first step in any performance monitoring exercise. These statistics provide a view of SAN performance from both the initiator and the target.

Gathering Disk Array Performance Data

Many storage arrays have built-in capabilities to measure and monitor performance. These tools provide a wealth of information about how the storage array processes I/O, which disks are taking the most I/O hits, the cache hit percentage, the split between read and write operations across the array, and so on.

QLogic recommends that you install and use the vendor-provided tools to supplement performance monitoring in your SAN.


SAN Performance Recommendations

Host Bus Adapter performance is secondary to the physical configuration of the SAN, which depends on physical factors such as:

• Switch speed
• Target port speed
• Initiator port speed
• Quantity and speed of the paths between the initiators and the targets across the fabric

Managing SAN performance in a dynamic, virtualized environment means meeting the demands of current workloads, as well as ensuring the ability to scale and handle larger workloads and peak demands. To ensure adequate performance, administrators need to monitor and measure system performance. Chapter 5 Management and Monitoring describes the tools available to an administrator to measure and analyze performance, and discusses how to use the tools effectively. Follow the recommendations in this section to enable optimal balance between performance, management, and functionality.

QLogic recommends that you conduct performance tuning in a test SAN environment, and that you monitor I/O performance before implementing the changes in a live SAN environment. Performance monitoring and tuning is a continual process. QLogic makes the following general recommendations for improving SAN performance.

Fencing High-Performance Applications

QLogic recommends that you separate a high-performance, demanding application from other applications by assigning it a dedicated SAN fabric. If you do not separate high-performing applications, they can overwhelm other applications competing for the same storage resource over the same port.

Minimizing ISL

QLogic recommends that you locate the Host Bus Adapter and the storage ports that it will access on the same switch. Otherwise, try to minimize ISLs and decrease the number of hops. The more hops the data take, the longer they take to reach their destination.


Upgrading to 8Gb or 16Gb

QLogic recommends that you replace all 4Gb Fibre Channel Adapters with QLogic 8Gb or 16Gb Fibre Channel Adapters, because 8Gb and 16Gb adapters have many advantages over 4Gb adapters. Besides performance advantages, they offer new features, high availability, and better troubleshooting capabilities.

Setting Data Rate to Auto-Negotiate

QLogic recommends that you set all Fibre Channel ports (Host Bus Adapters, switches, and storage ports) to auto-negotiate the Fibre Channel speed; any other setting may result in a slow connection. All QLogic Host Bus Adapters come factory set to auto-negotiate.

Considering Fan-In Ratios

Consider the ratio of storage ports bound to a single QLogic Host Bus Adapter port; this ratio is called the Fan-In. Ideally, the Fan-In ratio is 1:1 (one storage port bound to a single Host Bus Adapter port). If the application accessing the storage ports is not bandwidth intensive, the Fan-In ratio can be relaxed to 3:1 or higher, depending on the application and the time-of-day load.
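As a rough bandwidth sanity check (the port speeds here are illustrative): a 3:1 Fan-In of three 4Gb storage ports onto one 8Gb Host Bus Adapter port offers a nominal 12Gb of array bandwidth against 8Gb of adapter bandwidth, which is normally acceptable only when the applications behind those storage ports rarely drive them near line rate at the same time.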

QLogic recommends that you review your storage array’s vendor-recommended Fan-In and Fan-Out ratios for your operating systems and arrays.

Avoiding RSCNs

Assign a unique Fibre Channel switch zone between the server initiator and the storage controller processor ports. That is, each worldwide name (WWN), meaning each initiator Host Bus Adapter, requires one unique zone regardless of the number of ports. This zoning strategy isolates an initiator from registered state change notification (RSCN) disruptions, which can hamper performance.

Balancing I/O Loads

You can maximize the performance of an application by spreading the I/O load across multiple paths, arrays, and device adapters in the storage unit. When trying to balance the load within the storage unit, the determining factor is the placement of application data. The most critical traffic activities to balance, in relative order of importance, are as follows:

1. Activity to the RAID disk groups. Use as many RAID disk groups as possible for critical applications. Many performance bottlenecks occur because a few disks are overloaded. Spreading an application across multiple RAID disk groups ensures that as many disk drives as possible are available.


2. Activity to the device adapters. When selecting RAID disk groups within a cluster for a critical application, spread them across separate device adapters.
3. Activity to the Fibre Channel ports. Load balance I/O between paths to your SAN with multipath tools such as MPIO in Windows 2012 and NMP in vSphere.

Also, if it is available from your storage device vendor, QLogic recommends the purchase and deployment of SAN management tools such as EMC® PowerPath®/VE.


5 Management and Monitoring

This chapter provides best practices for monitoring the health and performance of your QLogic Host Bus Adapters. QLogic provides a rich and versatile suite of management products under the umbrella product name QLogic QConvergeConsole. QConvergeConsole can capture performance statistics, status conditions, topology, and configuration settings, and can display and diagnose error conditions associated with QLogic Host Bus Adapters and their surrounding environment. The following QLogic tools provide monitoring capabilities:

• QConvergeConsole (GUI) on Windows (see “Monitoring Host Bus Adapters with QConvergeConsole (GUI)” on page 5-1)
• QConvergeConsole CLI on Windows (see “Monitoring Host Bus Adapters with QConvergeConsole CLI” on page 5-9)
• QConvergeConsole Plug-in for VMware vCenter Server on ESXi 5.x (see “Monitoring Host Bus Adapters with QConvergeConsole Plug-in for VMware vCenter Server” on page 5-10)

In a Windows environment, QLogic strongly recommends that you use the management and monitoring capabilities available by installing both the QConvergeConsole (GUI) tool (using its own installer) and the QConvergeConsole CLI tool (using the QLogic Windows SuperInstaller).

Monitoring Host Bus Adapters with QConvergeConsole (GUI)

For general information about the QLogic QConvergeConsole® tool, refer to the User’s Guide QConvergeConsole 2400, 2500, 2600, 3200, 8100, 8200, 8300, 10000 Series (part number SN0054669-00), available for download on the QLogic Downloads and Documentation page (see “Downloading Updates” on page xii). For detailed information about using this tool and its interface, view the QConvergeConsole help system, available while using QConvergeConsole.


QConvergeConsole is organized in a hierarchical arrangement of information specific to the host server, Host Bus Adapter, port, storage device (disk), and target LUN. Figure 5-1 identifies the components of the GUI.

Figure 5-1. QConvergeConsole User Interface

The following sections provide more details about using QConvergeConsole to view information for Fibre Channel Host Bus Adapters:

• “Viewing Server Information” on page 5-3
• “Viewing Adapter Information” on page 5-6
• “Viewing Port Information” on page 5-6


Viewing Server Information

In QConvergeConsole, view information about the host server by selecting the server node in the left pane, and then clicking the appropriate tab in the right pane (Information, Security, Topology, Statistics, and Utilities) to open the corresponding page. The following sections provide details of these pages.

Server: Information

Use the server Information page to view details about the server’s configuration, including the OS type and version, the quantity and type of QLogic adapters installed, and more. Figure 5-2 shows an example.

Figure 5-2. QConvergeConsole: Server Information Page

Server: Security

Use the Security page to set a new login user name and password to prevent unauthorized persons from modifying configuration settings. For more information about server security, see Chapter 6 Fibre Channel Security.

QLogic recommends that you change the default login and password at your earliest opportunity.


Server: Topology

Use the server Topology page to see a graphical view of the targets and LUNs attached to the Host Bus Adapter ports. Figure 5-3 shows an example.

QLogic recommends that you view this page to determine if the topology of the SAN connected to a Host Bus Adapter port is as expected.

Figure 5-3. QConvergeConsole: Server Topology Page


Server: Statistics

Use the server Statistics page to view filtered statistics for one or all QLogic adapters installed in the server. The filtering criteria are user configurable, so the generated logs contain only the data that are needed. Figure 5-4 shows an example.

QLogic recommends that you use these statistics to determine the health and performance of your installed Host Bus Adapters.

Figure 5-4. QConvergeConsole: Server Statistics Page


Server: Utilities

Use the server Utilities page to view the existing installed QLogic agent versions. This page also provides a convenient method of updating one or all installed agents.

Viewing Adapter Information

In QConvergeConsole, view information about the Host Bus Adapter by selecting the adapter node in the left pane, and then clicking the appropriate tab in the right pane (HBA Info, Utilities, Settings, Advanced Utilities, and Personality) to open the corresponding page.

CAUTION The Personality page enables you to completely change the adapter type from a Host Bus Adapter to a Converged Network Adapter. QLogic recommends that you do not change the Host Bus Adapter type.

Viewing Port Information

In QConvergeConsole, view information about, control, and monitor the Host Bus Adapter ports by selecting the port node in the left pane, and then clicking the appropriate tab in the right pane (Port Info, Target, Diagnostics, QoS, Virtual, Parameters, VPD, Monitoring, Utilities, VFC, and Utilization) to open the corresponding page. The following sections provide details of these pages.

Port: Port Info

Use the Port Info page to view a comprehensive listing of the port, Flash, and PCIe attributes relevant to the selected port.

Port: Target

The port Target page provides two sub-tabs (Target List and Target Persistent Binding) that provide read-only data for either all targets or the persistently bound targets.

Port: Diagnostics

Use the port Diagnostics page to access some of the most useful features of QConvergeConsole. The Diagnostics page contains these sub-tabs: Link Status, General Diagnostics, Transceiver Details, and Trace. Click a tab to view the corresponding page.

QLogic recommends that you use these features to evaluate the health of the port connection to the associated SAN.


The following sections provide details of the port diagnostic pages.

Port Diagnostics: Link Status

Use the Link Status page to view statistics that indicate the health of the link or links between the selected host server’s Host Bus Adapter initiator port and the target ports to which it is connected. Monitor the statistics on this page to determine if an initiator-to-target connection is experiencing transient or static conditions such as link failures, synchronization loss, signal loss, or invalid CRC. Figure 5-5 shows an example.

Figure 5-5. QConvergeConsole: Port Diagnostics, Link Status Page

Port: General Diagnostics

Use the General Diagnostics page to access two port testing methods:

• The loopback test reveals the health of the Host Bus Adapter and transceiver relationship. This test requires that you plug a physical loopback cable into the transceiver to allow the adapter’s transmitted traffic to be looped back into the adapter’s receive port.
• The read/write buffer test reveals the end-to-end health of the Host Bus Adapter to an attached LUN. This test writes a selectable data pattern to the target LUN and reads the pattern back to determine the integrity of the data delivery. Use this test to verify the initiator-to-target nexus. The read/write buffer test is safe to invoke against a SAN-booted OS LUN because the software recognizes that the target LUN contains the OS image that is running on the host server and does not allow the test to corrupt the running OS.

Port: Transceiver Details

Use the Transceiver Details page to view both general and detailed information on the type, speed, and vendor of the port’s attached transceiver. It also shows the current temperature and electrical conditions measured at the transceiver. Use this information to monitor the heat and power conditions experienced at the Host Bus Adapter physical port.


Port: Trace

Use the Trace page to access the QLogic Fibre Channel Event (FCE) tracing feature. FCE tracing records specific frames that have been transmitted or received by the Fibre Channel port during its execution of commands. FCE tracing also records predetermined events that occur during Fibre Channel port operation. This feature provides a limited view of link traffic through the Fibre Channel port without using a storage protocol analyzer. This capability is useful as a debugging tool for situations where a physical connection to the storage protocol link is not possible.

Port: QoS

Use the QoS page to select the quality of service (QoS) type, priority, or bandwidth, and the relative setting of the Host Bus Adapter port priority or bandwidth share. Use these settings to adjust the relative performance of N_Port ID virtualization (NPIV) virtual Fibre Channel (vFC) ports.

Port: Virtual

Use the Virtual page to establish NPIV ports associated with the selected physical port.

Port: Parameters

The Parameters page provides sub-tabs (HBA Parameters, Advanced HBA Parameters, and Boot Device Selection) for changing these port settings. Click a tab to open the corresponding page. The following sections provide details of these pages.

Port: HBA Parameters and Advanced HBA Parameters

Use the HBA Parameters and Advanced HBA Parameters pages to view and modify the current settings of various Host Bus Adapter parameters.

QLogic recommends that you leave these parameters at the default settings, unless you have a very specific reason to alter them. If you change the Host Bus Adapter parameter settings and later suspect that these settings are adversely affecting adapter operation, QLogic recommends changing back to the default settings by clicking Restore Defaults at the bottom of the HBA Parameters page.

Port: Boot Device Selection

Use the Boot Device Selection page to view or configure a selectable boot device, either as a primary boot device or as an alternative boot device. Use this interface to direct a server to boot from a LUN containing a predeployed OS image.


Port: Monitoring

Use the Monitoring page to view tabular or graphical representations of pertinent performance and error statistics. The specific counters are BPS (bytes per second), Device Errors, HBA Port Errors, I/O Count, IOPS, and Reset (number of port resets encountered).

QLogic recommends that you view these counters to determine the performance and health of the connection between the host initiator port and the target device LUN.

Port: Utilities

Use the Utilities page to update the Host Bus Adapter firmware and parameters from a file, or to update the driver parameters from a file. The firmware update feature can only be invoked from Host Bus Adapter port 1.

Port: VFC

Use the VFC page to view virtual machine information, if it exists. Use this page to determine which vFC WWPN correlates to which VM holding the vFC.

Port: Utilization

Use the Utilization page to measure the percentage of the physical port’s bandwidth used by the selected port.

Monitoring Host Bus Adapters with QConvergeConsole CLI

The QLogic QConvergeConsole CLI is a command line interface for managing and monitoring the Host Bus Adapters in a local host. You can install the CLI from the QLogic SuperInstaller. QConvergeConsole CLI parallels the adapter-level capabilities of the QConvergeConsole GUI tool. User preference should dictate whether to use one tool or the other.

QLogic recommends that you install both management tools.

For details about using this tool, refer to the User’s Guide QConvergeConsole CLI 2400, 2500, 2600, 3200, 8100, 8200, 8300 Series (part number SN0054667-00), available for download on the QLogic Downloads and Documentation page (see “Downloading Updates” on page xii).


Monitoring Host Bus Adapters with QConvergeConsole Plug-in for VMware vCenter Server

In an ESXi 5.x environment, QLogic provides the QConvergeConsole Plug-in for VMware vCenter Server. The QLogic plug-in for VMware provides capabilities similar to the QConvergeConsole tool. For details about using this tool, refer to the User’s Guide QConvergeConsole Plug-ins for VMware vSphere (part number SN0054677-00), available for download on the QLogic Downloads and Documentation page (see “Downloading Updates” on page xii). The QConvergeConsole page in vCenter displays a hierarchy of the server, installed adapters, ports, and targets attached to the installed device ports. Figure 5-6 identifies the components of the VMware plug-in GUI.

Figure 5-6. QConvergeConsole Plug-in for VMware vCenter Server User Interface (showing the QConvergeConsole tab, system tree pane, and content pane of the QConvergeConsole interface in vCenter)

The following sections provide more details about using QConvergeConsole Plug-in for VMware vCenter Server to view information for Fibre Channel Host Bus Adapters:

• “Viewing Server Information” on page 5-11
• “Viewing Adapter Information” on page 5-11
• “Viewing Port Information” on page 5-12


Viewing Server Information

In QConvergeConsole Plug-in for VMware vCenter Server, click the server node in the left pane to view SAN information for the selected server. The QConvergeConsole VMware plug-in page displays either:

• A storage map that provides a convenient means to validate the topology of the SAN and the paths configured between the initiator ports and the target device ports (see Figure 5-6).

QLogic recommends that you always validate that the topology matches the design schematic.

• Configurable, server-wide performance parameters, including:
  - A check box for enabling extended logging.

QLogic recommends that you enable extended logging while troubleshooting, but disable it during normal operation.

  - A check box to turn off ZIO and an option to set the ZIO delay.

QLogic recommends that you leave ZIO turned on: by limiting the number of interrupts generated by the Host Bus Adapter, ZIO mitigates the overall system performance impact of servicing SAN I/O.

  - A setting for the MAX LUN queue depth, which is similar to ql2xmaxqdepth discussed in “Tuning the Underlying Physical Interface” on page 4-10. The default setting is 64, the maximum value. Increasing this value has no effect unless you also increase the ESXi 5.x Disk.SchedNumReqOutstanding parameter.

Viewing Adapter Information

In QConvergeConsole Plug-in for VMware vCenter Server, click the adapter node in the left pane to view information about the Host Bus Adapter in the right pane. Figure 5-7 shows an example.


Figure 5-7. QConvergeConsole Plug-in for VMware: Adapter Page

This page provides sections with General information for the adapter, Commands for updating the Host Bus Adapter Flash image and firmware, and the Personality Type Configuration option (available only for QLogic QLE26xx Adapters) to change the entire personality of the adapter from a Host Bus Adapter to a Converged Network Adapter.

CAUTION Although you can use the Personality Type Configuration option to completely change the QLE26xx Adapter type from a Host Bus Adapter to a Converged Network Adapter, QLogic recommends that you do not change the Host Bus Adapter type.

Viewing Port Information

In QConvergeConsole Plug-in for VMware vCenter Server, view information about, control, and monitor the Host Bus Adapter ports by selecting the port node in the left pane, and then clicking the appropriate tab in the right pane (Boot, Parameters, Transceiver, Statistics, Diagnostics, and VPD) to open the corresponding page. The following sections provide details of these pages.


Port: Boot

Use the Boot page to view or configure a selectable boot device, either as a primary boot device or as an alternative boot device. You can direct a server to boot from a LUN containing a predeployed OS image. Figure 5-8 shows an example.

Figure 5-8. QConvergeConsole Plug-in for VMware: Port Boot Page

Port: Parameters

Use the Parameters page to view and modify the settings of various Host Bus Adapter port parameters. Figure 5-9 shows an example.

QLogic recommends that you leave these parameters at the default settings, unless you have a very specific reason to alter them.

Figure 5-9. QConvergeConsole Plug-in for VMware: Port Parameters Page


Port: Transceiver

Use the Transceiver page to view both general and detailed information on the type, speed, and vendor of the port’s attached transceiver, as well as the current temperature and electrical conditions measured at the transceiver. Use this information to monitor the heat and power conditions on the Host Bus Adapter physical port. Figure 5-10 shows an example.

Figure 5-10. QConvergeConsole Plug-in for VMware: Port Transceiver Page

Port: Statistics

Use the Statistics page to view pertinent performance and error statistics, including:

• Number of I/Os
• Number of Interrupts
• Link Failure
• Loss of Sync
• Controller Errors
• Invalid Transmission Words
• Throughput in Megabytes
• Number of LIP Resets

QLogic recommends that you view these counters to determine the performance and health of the connection between the host's initiator port and the target device's LUN.


Figure 5-11 shows an example of the Statistics page.

Figure 5-11. QConvergeConsole Plug-in for VMware: Port Statistics Page

Port: Diagnostics

Use the Diagnostics page to access two port testing methods:

• The loopback test reveals the health of the Host Bus Adapter and transceiver relationship. This test requires that you plug a physical loopback cable into the transceiver to allow the adapter’s transmitted traffic to be looped back into the adapter’s receive port. Figure 5-12 shows an example of the loopback test pane.
• The read/write buffer test reveals the end-to-end health of the Host Bus Adapter to an attached LUN. This test writes a selectable data pattern to the target LUN and reads the pattern back to determine the integrity of the data delivery. Use this test to verify the initiator-to-target nexus. The read/write buffer test is safe to invoke against a SAN-booted OS LUN because the software recognizes that the target LUN contains the OS image that is running on the host server and does not allow the test to corrupt the running OS.

Figure 5-12. QConvergeConsole Plug-in for VMware: Port Diagnostics Page


Port: VPD

Use the Port Vital Product Data (VPD) page to view detailed, port-level VPD data. Figure 5-13 shows an example.

Figure 5-13. QConvergeConsole Plug-in for VMware: Port VPD Page

6 Fibre Channel Security

This chapter defines best-practice recommendations for enhancing the security of Fibre Channel SANs. The focus is on QLogic Fibre Channel Host Bus Adapters, associated software, and how security relates to generic SAN best practices.

Security Overview

As more business-critical data are being consolidated into SANs, protecting the data has become the key best practice that every SAN administrator must implement. Effective SAN protection requires a solid understanding of the following:

• What actions increase security?
• What actions create potential loopholes?
• What impact do these actions have on the rest of the environment?

As with any other best practice, the “best” approach to implementing security depends on the business and the current needs of the organization, along with a risk assessment of the data being protected.

NOTE Fibre Channel security is only as strong as the weakest component in the SAN. Typically, hosts that are connected to a SAN constitute the weakest link. Therefore, protecting hosts from intruders should have priority over protecting other components. Today's hosts potentially host hundreds of different applications, OSs, and driver components, many of them with their own security considerations.

QLogic strongly recommends that you diligently apply the latest security patches for all host components.


Setting the QConvergeConsole Password

The QLogic QConvergeConsole (GUI) tool runs on a Windows or Linux server. Use QConvergeConsole to install, configure, troubleshoot, and analyze QLogic Host Bus Adapters that are installed either in the same physical server or on other servers that are visible to the server where QConvergeConsole is installed. Many QConvergeConsole features are password protected. The default password for these features is “config,” and is known by almost anyone who manages a SAN or has access to the Internet and a search engine. Leaving the QConvergeConsole password at its default value means that anyone with access to the host server could easily change key Host Bus Adapter parameters and potentially breach the security of the SAN.

QLogic recommends that you set both host and application access passwords for QConvergeConsole, and that you reset them often.

Changing the QConvergeConsole Password for a Single Host

To change the QConvergeConsole password for a local host:

1. In the QConvergeConsole left pane, select the host for which you want to set the application access password.
2. Click the Security tab to open the Security page.
3. Click All.
4. In the Host Access section, type a Login and Password in the boxes to verify that you have administrator or root privileges for the selected host. (These are the system login and password you use to access the machine.)
5. In the Application Access section, type the QLogic default password for Current Password, a New Password of your choosing, and the same new password to Verify New Password.
6. To change the login and passwords, click Apply.


Figure 6-1 shows an example of the QConvergeConsole Security page.

Figure 6-1. QConvergeConsole: Security Page

Changing the QConvergeConsole Password for Multiple Hosts

To set the passwords of several hosts at once, QLogic recommends that you use the Password Update Wizard.


To change the QConvergeConsole password for multiple hosts:

1. In QConvergeConsole, select the Wizards menu, and then click Password Update Wizard.
2. Complete each wizard window (Figure 6-2 shows an example of the Host Selection window) by following the on-screen instructions, and then clicking Next to proceed to the next step. For assistance in completing any of the steps in the Password Update Wizard, click the Help button to view detailed instructions and explanations.

Figure 6-2. QConvergeConsole: Password Update Wizard

7 Fibre Channel Boot-from-SAN

In the past, the most common boot method was to boot from a direct-attached disk, called a local boot. Unlike local boot, the boot-from-SAN with Fibre Channel method places the boot device on the SAN. This boot device is a LUN that resides on a Fibre Channel storage array device. The server communicates with the storage array on the SAN through a Host Bus Adapter.

Advantages of Boot-from-SAN

Boot-from-SAN has the following advantages:

• Provides high fault tolerance, when properly configured
• Allows centralized management of OS images for rapid deployment scenarios and disaster recovery options
• With an 8Gb or 16Gb Fibre Channel connection (which can be additionally enhanced with multipath), provides a higher bandwidth connection between a host server and its OS storage target than is possible on a typical server with a direct-attached disk

The adapter boot code (BIOS or UEFI) contains the instructions that enable the server to find the boot disk on the SAN. Because the boot device resides on the SAN, it simplifies server management. Separating the boot image from each server allows administrators to leverage the advanced capabilities of storage arrays to achieve high availability, improved data integrity, rapid provisioning, and more efficient storage management. Replacing a failed server is as easy as moving the Host Bus Adapter to a new server, pointing it to the SAN boot device, and booting up the new host.

QLogic highly recommends using boot-from-SAN as the boot method.


Installing and Configuring Boot-from-SAN

Follow the procedures in this section to install, configure, and test boot-from-SAN.

To install boot-from-SAN:

1. Specify the size of the boot LUN. When configuring a boot-from-SAN environment, it is critical that you consider the size of the boot LUN. The boot LUN must be large enough to hold the OS with all of the required features, plus any applications that must run on the host.
2. Specify a single path rather than multipath. The initial installation of the OS in a boot-from-SAN environment requires that you install the OS on the SAN boot device from the host server through a Host Bus Adapter port.

CAUTION Configure the path between the host server and the SAN boot device as a single path because the OS installation software does not support multipath and sees each path to the boot LUN as a different LUN. Both Microsoft and VMware warn users to avoid this condition because it is likely to cause the installation to fail.

QLogic recommends that during the OS installation phase, you configure a single path between the host server and its SAN boot device.

3. Install the Host Bus Adapter driver. The OS or OS revision that you are installing may not include an inbox driver for the Host Bus Adapter that connects the host server to the SAN boot device. If not, the OS installation software cannot find the SAN boot device.

QLogic recommends that you install a copy of the OS-appropriate Host Bus Adapter driver to your server during OS installation.

To configure boot-from-SAN:

1. Configure a multipath connection between the host server and its SAN boot device.

QLogic highly recommends multipath configuration to provide fault tolerance.


2. (Optional) Configure load balancing to improve performance. The performance of the host server's access to its OS installation affects the performance of every application that runs on that host server.

QLogic highly recommends configuring load balancing to enhance the overall performance of the host server.

3. Configure alternate boot LUNs.

QLogic recommends that you provide additional fault tolerance by configuring one or more alternate boot LUNs. The alternate boot LUN allows the host server to boot if connectivity is lost to the primary LUN.

To test and validate boot-from-SAN:

1. Validate the multipath connection between the host server, its SAN boot device, and the alternate boot LUN. To test the connection, boot the OS on the host server, and then disable the primary path to the SAN boot device.
2. Disconnect the physical cable or alter the zoning at the Fibre Channel switch.
   • If the multipath configuration is working, the host OS continues to operate normally.
   • If the multipath connection is not working, disabling the primary path breaks the host server connection to the OS installation, causing the host OS to fail.
3. Test the alternate boot LUN. Disable access to the primary LUN by either changing the zoning or masking the target device, and then boot the host server without access to the primary LUN. If the alternate boot device is properly configured and is accessible, the host server automatically finds the alternate boot device and boots the OS as usual.
4. Reconnect the physical cables (if disconnected in Step 2).


A QLogic Host Bus Adapter LEDs

This appendix defines the QLogic Host Bus Adapter LED schemes. Table A-1 shows the QLE25xx LEDs.

Table A-1. QLE25xx Host Bus Adapter LED Scheme

Amber LED (2Gbps): Power off: Off. Power on (before firmware initialization): On. Power on (after firmware initialization): Flashing. Firmware fault: Flashing in sequence. 2Gbps link up and active: On and flashing. 4Gbps link up and active: Off. 8Gbps link up and active: Off. Beaconing: Flashing.

Green LED (4Gbps): Power off: Off. Power on (before firmware initialization): On. Power on (after firmware initialization): Flashing. Firmware fault: Flashing in sequence. 2Gbps link up and active: Off. 4Gbps link up and active: On and flashing. 8Gbps link up and active: Off. Beaconing: Off.

Yellow LED (8Gbps): Power off: Off. Power on (before firmware initialization): On. Power on (after firmware initialization): Flashing. Firmware fault: Flashing in sequence. 2Gbps link up and active: Off. 4Gbps link up and active: Off. 8Gbps link up and active: On and flashing. Beaconing: Flashing.


Table A-2 shows the QLE26xx LEDs.

Table A-2. QLE26xx Host Bus Adapter LED Scheme

Adapter State                                      Amber LED (4Gbps)      Green LED (8Gbps)      Amber LED (16Gbps)
Power Off                                          Off                    Off                    Off
Power On Out of Reset (Without Flash or Preload)   Flashing slowly        Flashing slowly        Flashing slowly
Power On (Before Firmware Initialization)          On                     On                     On
Power On (After Firmware Initialization)           Flashing               Flashing               Flashing
Firmware Fault                                     Flashing in sequence   Flashing in sequence   Flashing in sequence
4Gbps Link Up and Active                           On and flashing        Off                    Off
8Gbps Link Up and Active                           Off                    On and flashing        Off
16Gbps Link Up and Active                          Off                    Off                    On and flashing
Beaconing                                          Flashing               Off                    Flashing

Glossary

adapter
The board that interfaces between the host system and the target devices. Adapter is synonymous with Host Bus Adapter, host adapter, and board.

bandwidth
A measure of the volume of data that can be transmitted at a specific transmission rate. A 1Gbps or 2Gbps Fibre Channel port can transmit or receive at nominal rates of 1 or 2Gbps, depending on the device to which it is connected. This corresponds to actual bandwidth values of 106MB and 212MB, respectively.

basic input/output system
See BIOS.

BIOS
Basic input/output system (typically in Flash PROM). The program (or tool) that serves as an interface between the hardware and the operating system, and allows booting from the adapter at startup.

boot, booting
Loading code from a disk or other device into a computer's memory, typically in stages that begin with the BIOS.

boot code
The program that initializes a system or an adapter. Boot code is the first program to run when a system or a device within a system, such as an adapter, is powered on. FCode, BIOS, and extensible firmware interface (EFI) are all forms of boot code for specific hardware and operating system environments. Boot code for QLogic Fibre Channel Adapters is required if the computer system is booting from a storage device (disk drive) attached to the adapter. The primary function of the boot code is communication with the external boot device before the operating system is up and running. Boot code can also perform secondary functions, including managing the setup for the adapter and initializing and testing the adapter's ISP.

boot device
The device, usually the hard disk, that contains the operating system that the BIOS uses to boot from when the computer is started.

boot-from-SAN
The ability for each server on a network to boot its operating system from a Fibre Channel RAID unit located on the SAN, rather than from a local disk or direct attached storage (DAS). Booting from SAN simplifies SAN management because you can replace a server and boot it from the Fibre Channel RAID unit.


CIM
Common information model. Provides a common definition of management information for systems, networks, applications, and services, and allows for vendor extensions. CIM's common definitions enable vendors to exchange semantically rich management information between systems throughout the network.

Class 2
A connectionless Fibre Channel communication service where the receiver explicitly acknowledges frames and notifies of delivery failure, including end-to-end flow control.

Class 3
A connectionless Fibre Channel communication service where frames are not explicitly acknowledged and delivery is on a "best effort" basis.

common information model
See CIM.

CRC
Cyclic redundancy check. A scheme to check data that have been transmitted or stored and to detect errors. A CRC cannot correct errors.

cyclic redundancy check
See CRC.

data center
A secure location for hosting servers.

datastore
A virtual representation of combinations of underlying physical storage resources in the data center. A datastore is the storage location (for example, a physical disk, a RAID, or a SAN) for virtual machine files.

device-specific module
See DSM.

DSM
Device-specific module. A hardware-specific driver that has passed the Microsoft MPIO test and submission process.

ESXCLI
A VMware-provided command-line set for managing many aspects of an ESXi host. You can run ESXCLI commands as vCLI commands or in the ESXi Shell in troubleshooting situations, and also from the PowerCLI shell by using the Get-EsxCli cmdlet. The set of ESXCLI commands available on a host depends on the host configuration. Issue the esxcli --server --help command before you run a command on a host to verify that the command is defined on the host you are targeting.

fabric
A fabric consists of cross-connected Fibre Channel devices and switches.

failover
Automatic substitution of another system component when one fails.

Fast!UTIL
QLogic Fibre Channel adapter BIOS tool.

FC-SP
Fibre Channel security protocol. An ANSI standard that defines the various protocols used for implementing security in a Fibre Channel fabric.

firmware
Low-level software typically loaded into read-only memory and used to boot and operate an intelligent device.


Fibre Channel
High-speed serial interface technology that supports other higher layer protocols such as SCSI and IP, and is primarily used in SANs.

Fibre Channel security protocol
See FC-SP.

Flash
Non-volatile memory where the boot code is saved. At times, Flash and boot code are used interchangeably.

high availability
A system or device that operates continuously for a long length of time.

Host Bus Adapter
A QLogic adapter that connects a host system (the computer) to other network and storage devices.

I/O
Input/output. Collection of interfaces that different functional units of a system use to communicate with each other.

iiDMA
Intelligent interleaved direct memory access. A QLogic feature that ensures maximum link efficiency.

initiator
The device that initiates a data exchange with a target device.

input/output
See I/O.

input/output operations per second
See IOPS.

intelligent interleaved direct memory access
See iiDMA.

inter-switch link
See ISL.

Internet small computer system interface
See iSCSI.

IOPS
Input/output operations per second. A common performance-based measurement calculated by benchmarking applications. IOPS is primarily used with servers to find the best storage configuration.

iSCSI
Internet small computer system interface. Protocol that encapsulates data into IP packets to send over Ethernet connections. An alternative to FCIP.

ISL
Inter-switch link. A connection between a port on one switch and a port on another switch.

LED
Light-emitting diode. Status indicator on a switch, router, adapter, or other device.

light-emitting diode
See LED.

load balancing
Adjusting components to spread demands evenly across a system's physical resources, and thereby optimizing performance. Load balancing can be done manually or programmatically.

logical unit number
See LUN.


loopback
Diagnostic tool that routes transmit data through a loopback connector back to the same adapter.

LUN
Logical unit number. Representation of a logical address on a peripheral device or array of devices.

MBps
Megabytes per second. A measure in megabytes of data transfer rates.

megabytes per second
See MBps.

message signaled interrupts
See MSI, MSI-X.

MPIO
Multipath I/O. A Microsoft framework designed to mitigate the effects of a Host Bus Adapter failure by providing an alternate data path between storage devices and a Windows operating system. MPIO enables up to 32 alternate paths to add redundancy and load balancing for Windows storage environments.

MSI, MSI-X
Message signaled interrupts. Two PCI-defined extensions to support message signaled interrupts (MSI), in PCI 2.2 and later and PCI Express. MSIs are an alternative way of generating an interrupt through special messages that allow the emulation of a pin assertion or deassertion. MSI-X (defined in PCI 3.0) allows a device to allocate any number of interrupts between 1 and 2048 and gives each interrupt separate data and address registers. Optional features in MSI (64-bit addressing and interrupt masking) are mandatory with MSI-X.

multipath I/O
See MPIO.

N_Port ID virtualization
See NPIV.

NPIV
N_Port ID virtualization. The ability for a single physical Fibre Channel end point (N_Port) to support multiple, uniquely addressable, logical end points. With NPIV, a host Fibre Channel adapter is shared in such a way that each virtual adapter is assigned to a virtual server and is separately identifiable within the fabric. Connectivity and access privileges within the fabric are controlled by the identification of each virtual adapter and hence, the virtual server using each virtual adapter.

OLAP
Online analytical processing. A technology that is used to organize large business databases and support business intelligence. OLAP databases are divided into one or more cubes, and each cube is organized and designed by a cube administrator to fit the way that you retrieve and analyze data.


OLTP
Online transaction processing. A class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a "transaction" in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions. OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automatic teller machine (ATM) for a bank is an example of a commercial transaction processing application.

online analytical processing
See OLAP.

online transaction processing
See OLTP.

OoOFR
Out-of-order frame reassembly. A QLogic feature that reassembles the frames within an exchange in the correct order, even if they were received out of order. Used in a meshed switch topology where frames can traverse through different ISLs to arrive at the target. Otherwise, according to the Fibre Channel specification, the entire exchange of multiple frames would have to be retransmitted.

OPD
Overlapping protection domains. QLogic's data protection implementation in the Fibre Channel controller.

out-of-order frame reassembly
See OoOFR.

overlapping protection domains
See OPD.

PCI Express, PCIe
A third-generation I/O standard that allows enhanced Ethernet network performance beyond that of the older peripheral component interconnect (PCI) and PCI extended (PCI-X) desktop and server slots.

PCI-X
Peripheral component interface extended. Extension of PCI to the software command setup that doubles the speed of data transfer and increases fault tolerance.

peripheral component interface extended
See PCI Express, PCIe.

port
Access point in a device where a link attaches. The most common port types are:
 N_Port: A Fibre Channel port that supports point-to-point topology.
 NL_Port: A Fibre Channel port that supports loop topology.
 F_Port: A port in a fabric where an N_Port can attach.
 FL_Port: A port in a fabric where an NL_Port can attach.

QoS
Quality of service. Refers to the methods used to prevent bottlenecks and ensure business continuity when transmitting data over virtual ports by setting priorities and allocating bandwidth.

quality of service
See QoS.


RAID
Redundant array of independent disks. RAID technology groups separate disks into one logical storage unit (LUN) to improve reliability and/or performance.

redundant array of independent disks
See RAID.

registered state change notification
See RSCN.

RSCN
Registered state change notification. A Fibre Channel fabric service that notifies registered devices when a change, such as a device being added to or removed from the fabric, occurs.

SAN
Storage area network. Multiple storage units (disk drives) and servers connected by networking topology.

SAS
Serial attached SCSI. Technology that enables SCSI interfaces to grow beyond Ultra320 while retaining backward compatibility.

SATA
Serial advanced technology attachment. Standard for connecting hard drives with serial signaling technology.

SCSI
Small computer systems interface. A high-speed interface used to connect devices—such as hard drives, CD drives, printers, and scanners—to a computer. The SCSI can connect many devices using a single controller. Each device is accessed by an individual identification number on the SCSI controller bus.

serial advanced technology attachment
See SATA.

serial attached SCSI
See SAS.

SIOC
Storage input/output control. Used to control the I/O usage of a virtual machine and to gradually enforce the predefined I/O share levels. VMware supports SIOC on Fibre Channel and iSCSI connected storage in ESX/ESXi 4.1 and 5.0. With ESXi 5.0, support for NFS with SIOC was also added.

solid-state disk
See SSD.

SPC-3
SCSI Primary Commands - 3. Contains the third-generation definition of the basic commands for all SCSI devices.

SSD
Solid-state disk. A data storage device that uses memory chips to store data, instead of the spinning platters found in conventional hard disk drives.

storage area network
See SAN.

storage input/output control
See SIOC.


UEFI
Unified extensible firmware interface. A specification detailing an interface that helps hand off control of the system for the preboot environment (that is, after the system is powered on, but before the operating system starts) to an operating system, such as Windows or Linux. UEFI provides a clean interface between operating systems and platform firmware at boot time, and supports an architecture-independent mechanism for initializing add-in cards.

unified extensible firmware interface
See UEFI.

vFC
Virtual Fibre Channel. A Hyper-V feature that provides the guest operating system with unmediated access to a SAN by using a standard WWN associated with a virtual machine. vFC allows Hyper-V users to use Fibre Channel SANs to virtualize workloads that require direct access to SAN LUNs.

virtual Fibre Channel
See vFC.

vital product data
See VPD.

VPD
Vital product data. Information provided by the manufacturer about the current working adapter. Information varies by manufacturer, or may not be provided at all.

worldwide name
See WWN.

worldwide port name
See WWPN.

WWN
Worldwide name. Unique 64-bit address assigned to a device by the device manufacturer.

WWPN
Worldwide port name. Unique 64-bit address assigned to each port on a device. One WWNN may contain multiple WWPN addresses.

ZID, ZIO
Zero interrupt delay, zero interrupt operation. QLogic feature that collates I/O interrupts to reduce the number of interrupts to a specific CPU and allow multiple SCSI commands to be completed during a single system interrupt.

zone (zoning)
A set of Fibre Channel device ports configured to communicate across the fabric. Through switches, traffic within a zone can be physically isolated from traffic outside the zone.



Corporate Headquarters
QLogic Corporation
26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656
949.389.6000
www.qlogic.com

International Offices
UK | Ireland | Germany | France | India | Japan | China | Hong Kong | Singapore | Taiwan

© 2013 QLogic Corporation. Specifications are subject to change without notice. All rights reserved worldwide. QLogic and the QLogic logo are registered trademarks of QLogic Corporation. EMC and PowerPath are registered trademarks of EMC Corporation. Linux is a registered trademark of Linus Torvalds. Microsoft and Windows are registered trademarks of Microsoft Corporation. Solaris is a registered trademark of Sun Microsystems, Inc. VMware, ESX, vCenter, vCenter Server, and vSphere are trademarks or registered trademarks of VMware, Inc. All other brand and product names are trademarks or registered trademarks of their respective owners. Information supplied by QLogic Corporation is believed to be accurate and reliable. QLogic Corporation assumes no responsibility for any errors in this brochure. QLogic Corporation reserves the right, without notice, to make changes in product design or specifications.