
Maximizing SQL Server Virtualization Performance

Contents
Maximizing Host CPU and Memory
Guest VM Configuration Guidelines
Using SSDs
Revving Up VMs with the SQL Server 2014 In-Memory OLTP Engine
Virtualization on the NEC Express5800
Predictable Network Performance with ProgrammableFlow
Summary

By Michael Otey

Without a doubt, performance is the database professional's number one concern when it comes to virtualizing SQL Server. While virtualizing SQL Server is nothing new, even today there are some who still think that SQL Server is too resource-intensive to virtualize. That's definitely not the case. However, there are several tips and best practices that you need to follow to achieve optimum performance and availability for your virtual SQL Server instances. In this whitepaper, you'll learn about the best practices, techniques, and server platform for virtualizing SQL Server to obtain the maximum virtualized database performance.

Sponsored by

In the first part of this whitepaper, you'll learn about some of the best practices for configuring your virtualization host's central processing unit (CPU), memory, and storage. Next, you'll learn about the best practices for configuring a guest virtual machine (VM) to run SQL Server. You'll see best practices for configuring virtual CPUs and using dynamic memory. You'll also learn about using virtual hard disks (VHDs), configuring SQL Server VM storage, and using solid state disks (SSDs) with your SQL Server VMs. Then you'll see how you can maximize SQL Server 2014 online transaction processing (OLTP) application performance by taking advantage of the new In-Memory OLTP feature.

The second part of this whitepaper will cover some of the practical implementation details required to get the best performance for your SQL Server VMs. Although the specific configuration steps are vital, it's equally important to select the right virtualization platform to provide the scalability and reliability that your organization needs to meet its service level agreements (SLAs). In this section, you'll learn about using the NEC Express5800/A2000 Series Server as a virtualization platform. Here you'll see how its Capacity OPTimization (COPT) feature and high random-access memory (RAM) capacity enable it to support dense virtualization workloads. Then you'll see how NEC's ProgrammableFlow Networking Suite and PF1000 virtual switch integrate with Microsoft Hyper-V and Microsoft System Center Virtual Machine Manager (SCVMM) to provide predictable network bandwidth for your business-critical applications.


Maximizing Host CPU and Memory

Making sure the host is correctly configured is one of the most fundamental aspects of optimizing your virtualization environment. If your host lacks the processing power, RAM, or network bandwidth to run your VMs, you'll never achieve the performance that you need for your tier 1 applications. First, the host has to be sized adequately to run the workloads of all of the VMs that will be simultaneously active. To plan for the proper host capacity, you should use Performance Monitor to create a performance baseline for the workload you intend to virtualize by measuring the peak and average CPU and memory utilization. This workload can be running on a VM, or it can be a physical installation that you plan to migrate to a VM. Aggregating these values for all the different servers that you want to run on your virtualization host will tell you the base processing power and RAM that's needed.

As a general rule for the best performance in your tier 1 VMs, you should plan for a 1:1 ratio of virtual CPUs to physical cores in the system. While nothing prevents you from overcommitting the CPUs for either Hyper-V or VMware vSphere, matching your physical cores to your virtual CPUs will ensure that you always have computing power for that workload. When you're planning the number of virtual CPUs to use in the guest, be sure to remember that the maximum number of virtual CPUs supported can vary depending on the guest OS. Both Windows Server 2012 R2 Hyper-V and vSphere 5.5 provide support for hosts with up to 320 cores and VMs with up to 64 virtual CPUs.
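The aggregation step above can be sketched in a few lines of code. The server names and baseline figures below are hypothetical placeholders, not measured values; the point is simply to show how per-server Performance Monitor peaks roll up into a minimum host size under the 1:1 vCPU-to-core rule.

```python
# Sketch: roll per-server performance baselines up into a minimum host size.
# All server names and figures below are illustrative, not measured.

def size_host(baselines, host_reserved_gb=1):
    """Return (cores, ram_gb) needed to run all workloads concurrently.

    baselines: list of dicts with 'peak_cores' and 'peak_ram_gb' keys,
    taken from Performance Monitor measurements of each workload.
    """
    cores = sum(b["peak_cores"] for b in baselines)  # 1:1 vCPU:core rule
    ram_gb = sum(b["peak_ram_gb"] for b in baselines) + host_reserved_gb
    return cores, ram_gb

# Hypothetical baselines for three servers being consolidated:
baselines = [
    {"name": "sql01", "peak_cores": 8, "peak_ram_gb": 64},
    {"name": "sql02", "peak_cores": 4, "peak_ram_gb": 32},
    {"name": "web01", "peak_cores": 2, "peak_ram_gb": 8},
]

cores, ram_gb = size_host(baselines)
print(f"Host needs >= {cores} physical cores and {ram_gb} GB RAM")
# -> Host needs >= 14 physical cores and 105 GB RAM
```

The extra gigabyte added here reflects the host-reservation guideline discussed later in this section; size it up for hosts running many VMs.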

Next, while you’re planning your host’s computing resources, you should make sure that the host supports Second Level Address Translation (SLAT) and Non-Uni- form Memory Access (NUMA). Most modern servers from tier 1 vendors provide these features, but they might not be present if you’re considering using an older hardware platform for virtualization. Both are very important for VM scalability. SLAT has different names, depending on the CPU manufacturer. Intel’s version is

The host memory is the next most important consideration after the host’s CPU support.

called Extended Page Tables (EPT) and AMD calls it Rapid Virtualization Indexing (RVI). SLAT allows the processor to directly handle the translation of guest virtual addresses to host physical addresses without the need for the hypervisor to keep track of a shadow page table, thereby reducing the load on the hypervisor for every guest VM. NUMA support allows NUMA-aware applications like SQL Server to opti- mize threads in high-speed memory that’s owned (should this be owned?) by a local processor. The latest version of Windows Server 2012 R2 Hyper-V and vSphere 5.5 both provide NUMA support for guest VMs.

The host memory is the next most important consideration after the host's CPU support. First, make sure that you don't allocate all the available host physical RAM to the VMs. Plan to keep about 1GB of memory reserved for the host to manage the running VMs. To prepare for future scalability requirements, it's a best practice to select a host system that supports hot-add RAM. RAM is typically the limiting factor in how many VMs you can run simultaneously, and hot-add RAM enables you to upgrade the host without incurring any downtime. Windows Server 2012/R2 supports hot-add RAM, but be aware that hot-add RAM is not supported on every server hardware platform. You should be sure to look for this capability when evaluating virtualization server platforms.

Making sure that there's adequate network bandwidth for your production workloads is the next critical step in the virtualization host's configuration. Trying to funnel all the network traffic for your VMs through too few host network interface cards (NICs) is a common virtualization configuration mistake. You can use Performance Monitor to get an idea of your aggregated network bandwidth requirements, just like you did to estimate the host's CPU and memory requirements. In addition, you should plan for one dedicated NIC for management purposes as well as one dedicated NIC for live migration or vMotion. This will help to separate the network traffic required by these management tasks from your production workloads.
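One way to turn those measurements into a NIC count is sketched below. The per-VM peak bandwidth figures, the NIC speed, and the 50 percent headroom target are all assumptions for illustration; substitute your own Performance Monitor numbers.

```python
# Sketch: estimate how many host NICs a consolidation needs, counting the
# dedicated management and live-migration/vMotion NICs recommended above.
# Bandwidth figures are hypothetical; measure real traffic first.
import math

def nics_needed(vm_peak_mbps, nic_speed_mbps=10_000, headroom=0.5):
    """NICs for VM traffic, sized so each NIC runs at no more than
    'headroom' utilization, plus one management NIC and one for
    live migration / vMotion."""
    usable = nic_speed_mbps * headroom
    workload_nics = math.ceil(sum(vm_peak_mbps) / usable)
    return workload_nics + 2  # + management + live migration

# Hypothetical peak per-VM bandwidth in Mbps on a 10GbE host:
print(nics_needed([1200, 800, 2000, 600]))  # -> 3
```

The headroom factor is a judgment call; a busier host or slower NICs push the count up quickly.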

Finally, you should plan for the host's OS to be installed on a separate storage location from the guest VHDs or virtual machine disks (VMDKs). More details about guest VM storage are presented in the following section. In addition, if you're running anti-virus (AV) software on the host, be sure to exclude the VMs from AV scanning. AV scans will impact the performance of the VM, which is something that you want to avoid for your tier 1 applications. Any AV scanning should occur within the VM guest.


Guest VM Configuration Guidelines

One of the most important guest configuration guidelines is to be sure to provide enough memory for the guest. This is especially true if the guest is running a database application like SQL Server or Microsoft SharePoint. As a general rule of thumb, the more memory you can give SQL Server VMs the better, up to a point. The actual requirements depend on the application and workload. One best practice is to take advantage of the hypervisor's ability to support dynamic memory. Both Hyper-V and vSphere can take advantage of dynamic memory. Microsoft fully supports running SQL Server with dynamic memory to increase server consolidation ratios and increase database performance. One best performance practice with dynamic memory is to avoid setting a maximum ceiling and to let the VM expand its memory if it experiences memory pressure.


When you create a guest VM, you have three basic choices for VHD types. Microsoft and VMware each have slightly different names for these different VHD formats, but they're essentially the same: fixed virtual disks, dynamic disks, and differencing disks. Fixed virtual disks provide the best performance, but they also require the most disk storage. Fixed virtual disks provide almost the same performance as native Direct Attached Storage (DAS).

Dynamic disks are slightly slower and require much less storage than fixed virtual disks. However, the hypervisor will expand dynamic disks when they need more stor- age, and the execution of the VM is paused during this process. You would typically use fixed virtual disks to avoid this situation for business-critical SQL Server instances.

Differencing disks are the slowest type of VHD, but they also require the least disk space. Differencing disks are best suited for lab and help desk scenarios and not for running production applications.

Next, when you’re configuring the VM itself for SQL Server, one of the most impor- tant best practices is to create multiple VHDs and use them to split out the SQL Server production database and log files as well tempdb. If you don’t change the defaults, the SQL Server installation puts everything on the drive with the SQL Server binaries. In the case of a VM, this means that the guest OS, the database data files, the database log files, tempdb, and the other system databases would

5 Maximizing SQL Server Virtualization Performance

all be on the same VHD. That configuration can work for some small installations, but it certainly won’t give you the best database performance. Putting the data and log files on separate VHDs that use different drives will definitely provide far better performance. In addition, like in a physical installation, you should place the VHD containing the log files on fast-writing drives that use RAID 1 or RAID 10. Another best storage configuration practice is to put tempdb on its own drive using a VHD that’s separate from the data and log files. Tempdb can be a very active database with lots of write activity, so like the log files, a best practice is to use RAID 1 or RAID 10 if possible for the drives on which the tempdb database is placed.
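A quick script can sanity-check this layout across your SQL Server VMs. The file paths below are hypothetical, and the check only looks at drive letters (a shared physical spindle behind two drive letters would still pass), so treat it as a first-pass audit rather than a definitive one.

```python
# Sketch: warn when a SQL Server VM's data, log, and tempdb files share a
# drive, contrary to the separate-VHD guidance above. Paths are made up.
import ntpath  # parses Windows-style paths on any platform

def placement_warnings(files):
    """files: dict of role -> file path. Warn when two roles share a drive."""
    drive = {role: ntpath.splitdrive(path)[0].upper()
             for role, path in files.items()}
    warnings = []
    roles = list(drive)
    for i, a in enumerate(roles):
        for b in roles[i + 1:]:
            if drive[a] == drive[b]:
                warnings.append(f"{a} and {b} share drive {drive[a]}")
    return warnings

layout = {
    "data":   r"E:\SQLData\sales.mdf",
    "log":    r"F:\SQLLogs\sales.ldf",
    "tempdb": r"F:\TempDB\tempdb.mdf",  # bad: shares F: with the log VHD
}
print(placement_warnings(layout))  # -> ['log and tempdb share drive F:']
```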


Another important factor for performance that's easy to overlook is the installation of Integration Services (on Hyper-V) or VMware Tools (on vSphere) on the guest. These VM add-ins provide optimized device drivers for the VM. For instance, when you install Integration Services on a Hyper-V VM, you get the high-performance synthetic network device driver. If you don't install Integration Services, your Hyper-V VMs will use the Legacy Network Adapter. The Legacy Network Adapter is an emulated device, and its activity is handled by a worker thread in the Hyper-V host's parent partition. This will result in slower network performance for that VM as well as all of the other VMs on the host.

Using SSDs

The continued advancements in computing power and large memory support have resulted in the input/output (I/O) subsystem becoming a bottleneck for some VM installations. Traditional hard disk drives (HDDs) have gotten larger, but they really haven't gotten faster. SSDs use high-performance flash memory for storage, and they can provide significantly higher throughput than standard rotational HDDs. A Serial Attached SCSI (SAS) HDD spinning at 15,000 revolutions per minute (rpm) can deliver about 150MB to 200MB of sequential throughput per second. In contrast, an SSD on a 6Gbps controller can provide about 550MB of sequential throughput per second.
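The practical impact of those throughput figures is easy to work out. The 500GB database size below is a hypothetical example; the MB/s numbers are the ones quoted above.

```python
# Sketch: back-of-the-envelope sequential scan times at the throughput
# figures quoted above (150-200 MB/s for a 15K rpm SAS HDD, ~550 MB/s
# for an SSD on a 6Gbps controller). Database size is hypothetical.

def scan_minutes(size_gb, mb_per_sec):
    """Minutes to read size_gb sequentially at mb_per_sec."""
    return size_gb * 1024 / mb_per_sec / 60

# Time to scan a hypothetical 500GB database end to end:
hdd = scan_minutes(500, 200)   # ~42.7 minutes
ssd = scan_minutes(500, 550)   # ~15.5 minutes
print(f"HDD: {hdd:.1f} min, SSD: {ssd:.1f} min")
```

Real workloads are rarely purely sequential, and random I/O favors SSDs even more heavily than this comparison suggests.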


When you’re considering using SSDs with SQL Server VMs, you have several differ- ent implementation options: • Moving data files onto SSDs. Data files typically experience more reads than writes and can be a good choice for SSDs if the SSDs are large enough to con- tain the data files. • Moving indexes onto SSDs. Most index access is read-heavy, making them ideal candidates for SSD drives, which excel at random read access. • Moving log files onto SSDs. Log files experience a high degree of writes and therefore might not be as good a candidate as data files or indexes for moving onto SSDs. If you do move the log files onto SSDs, plan on using drive mirroring and RAID to protect against drive failure. • Moving tempdb onto SSDs. Tempdb typically experiences a high volume of write activity. Moving tempdb onto SSDs can provide improved performance, but you need to be sure to monitor the drive status and have a replacement strategy. Like with log files, if you move tempdb onto SSDs, plan on using drive mirroring and RAID to protect against drive failure.


Although SSDs provide better performance than HDDs, there are a couple of caveats to using SSDs. First, it's important to realize that they aren't a silver bullet for your performance issues. SSDs won't fix a lack of memory or processing power. Likewise, they won't fix poorly written queries. Next, the SSD lifecycle is significantly shorter than that of a rotational HDD. The more write operations an SSD has, the shorter its life expectancy will be. Furthermore, the write performance of an SSD will degrade over time. High-I/O implementations like SQL Server will also shorten the lifecycle of an SSD. In addition, the fuller the SSD is, the faster it will degrade. This essentially means that if you plan to use SSDs for your SQL Server VMs, you need to plan to keep about 50 percent of each drive's space unallocated and you should plan on a two-to-three-year replacement cycle.
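The 50 percent guideline has a direct capacity-planning consequence, sketched below. The 800GB drive size is a hypothetical example.

```python
# Sketch: usable capacity when ~50 percent of each SSD is left
# unallocated for endurance, per the guideline above. Sizes hypothetical.

def usable_gb(raw_gb, overprovision=0.50):
    """Capacity you should actually allocate on a raw_gb SSD."""
    return raw_gb * (1 - overprovision)

# A pair of mirrored 800GB SSDs (RAID 1) yields one 800GB volume,
# of which only about half should be allocated:
print(usable_gb(800))  # -> 400.0
```

In other words, budget roughly double the raw SSD capacity (quadruple, with RAID 1 mirroring) of the data you intend to place on flash.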


The life expectancy of SSDs also varies greatly according to the type of SSD drive. There are two basic types: single-level cell (SLC) and multi-level cell (MLC). SLCs are enterprise grade. Although they're more costly, they deliver better performance and a longer lifespan than MLCs. MLCs are typically found in consumer-grade devices and have lower performance and shorter lifespans than SLCs.

Finally, if you implement SSDs, don’t attempt to defragment them. They don’t store or retrieve data like HDDs. Defragmentation will only increase the wear on the drive.

Revving Up VMs with the SQL Server 2014 In-Memory OLTP Engine

The upcoming SQL Server 2014 release will provide the all-new In-Memory OLTP engine, which promises to significantly boost application performance. Microsoft has shown application performance improvements ranging from 7x to 20x using the new In-Memory OLTP engine. Equally significant is the fact that this new engine can work just as well in a VM as it can in a physical system. SQL Server 2014's new In-Memory OLTP support works by moving selected tables and stored procedures into memory. Plus, the new In-Memory OLTP engine provides an all-new lock-free, optimistic concurrency design that maximizes the throughput of the engine.


Memory access speeds are much faster than disk access speeds. However, to really take advantage of the In-Memory OLTP engine, you need to be running on a platform that can support the large memory capacities required to move the selected tables and stored procedures into RAM. The latest versions of Windows Server 2012 R2 Hyper-V and vSphere 5.5 both support hosts with up to 320 cores and 4TB of RAM. In addition, both offer support for VMs with up to 1TB of virtual memory. These large memory sizes, coupled with a physical host that supports this much RAM, enable SQL Server VMs to take full advantage of the new In-Memory OLTP performance feature.
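When sizing VM memory for In-Memory OLTP, a rough estimate of the RAM the migrated tables will consume is a useful starting point. The table size, row size, and the 2x overhead multiplier below are assumptions for illustration; in-memory tables need headroom beyond raw row data for row versions and indexes, and Microsoft's own sizing guidance should be consulted for real planning.

```python
# Sketch: rough RAM estimate for tables migrated to In-Memory OLTP.
# The 2x multiplier for row versions and indexes is an assumption,
# not Microsoft's formula; the table figures are hypothetical.

def inmem_estimate_gb(rows, avg_row_bytes, overhead_multiplier=2.0):
    """Approximate GB of RAM a memory-optimized table will need."""
    return rows * avg_row_bytes * overhead_multiplier / 1024**3

# Hypothetical hot OLTP table: 50 million rows, ~200 bytes per row:
est = inmem_estimate_gb(50_000_000, 200)
print(f"~{est:.1f} GB of RAM")  # -> ~18.6 GB
```

Summing such estimates across all candidate tables tells you whether the workload fits within the VM's memory ceiling, and how much dynamic-memory headroom to leave.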


Virtualization on the NEC Express5800

Selecting the proper hardware platform is essential for providing maximum performance and scalability to your SQL Server VMs. NEC's new Express5800/A2000 Series Server (or CX) brings mainframe-class performance and reliability to your enterprise virtualization implementations. The CX series is NEC's highest performing line of systems, and its sixth generation of Intel-based enterprise server systems. The CX series uses the latest high-performance Intel Xeon processor E7 v2 Product Family. The new Intel Xeon E7 v2 processors can be configured with up to 15 cores per processor, and they support twice the amount of memory compared to the previous generation of CPUs. In its maximum configuration, the CX supports up to four processors, where each CPU has 15 cores, for a total of 60 cores. The high number of cores enables the CX to dedicate physical CPU resources to each vCPU running in the SQL Server VMs, thereby maximizing performance. The CX is also ideal for memory-intensive applications, offering support for up to 4TB of RAM. You can see the NEC Express5800 in Figure 1.

Figure 1: NEC Express5800/A2000 Series Server (CX)

Beyond pure scalability, the NEC CX supports a unique core optimization capability called COPT (Capacity OPTimization). COPT is essentially a dynamic CPU core activation control similar to UNIX's capacity-on-demand capability. COPT provides improved reliability and scalability by enabling you to dynamically add available unused CPU cores. COPT allows you to "pay as you grow" by dynamically adding cores for increased scalability using a core activation key. The NEC Express5800/A2040b COPT model can seamlessly scale up from 1 to 60 cores. This core optimization capability is completely independent of the operating system. It works with Linux and vSphere in addition to Windows Server 2012 and Windows Server 2008 R2 SP1. In the case of Linux and Windows, the cores can be added without requiring a server reboot. You can see an overview of NEC's COPT feature in Figure 2.


Figure 2: An Overview of NEC’s COPT Feature

In Figure 2 you can see how COPT can be used to dynamically scale performance. On the left side of the graph, the system configuration starts off with two CPUs, each with two cores enabled. As demands on the system increase, cores can be added by simply enabling more cores via a software activation key. In the middle section, two additional license keys have been used to add one additional core per CPU. On the far right, you can see where you can subsequently add more cores to accommodate future growth. COPT allows you to dynamically add cores up to the system's maximum of 60 cores across 4 CPUs. COPT is a powerful and unique feature. It can provide protection from CPU failures, and it enables increased scalability without requiring any physical hardware maintenance or intervention.
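The pay-as-you-grow progression in Figure 2 can be modeled arithmetically. The per-CPU starting count and key counts below are illustrative (Figure 2 itself shows a two-CPU starting configuration); only the 60-core / 4-CPU chassis maximum comes from the text.

```python
# Sketch of the COPT "pay as you grow" model: each activation key
# enables one more core per CPU, up to the chassis maximum of 60
# cores across 4 CPUs. Key counts below are illustrative.

MAX_CORES, CPUS = 60, 4

def enabled_cores(initial_per_cpu, keys_per_cpu):
    """Total enabled cores after applying keys_per_cpu activation keys
    to each of the CPUS processors."""
    per_cpu = initial_per_cpu + keys_per_cpu
    return min(per_cpu * CPUS, MAX_CORES)

print(enabled_cores(2, 0))   # -> 8   (starting configuration)
print(enabled_cores(2, 1))   # -> 12  (one extra core per CPU)
print(enabled_cores(2, 13))  # -> 60  (chassis maximum)
```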

In addition to the unique COPT feature, the CX supports a number of advanced Reliability, Availability, and Serviceability (RAS) features. These features are critical when selecting a server platform because they help avoid any single point of failure. This is particularly important when virtualizing a tier 1 application like SQL Server, where availability is critical. Memory modules and I/O cards can be added on the fly without shutting the system down. Memory is constantly monitored for errors, and support for Double Device Data Correction (DDDC) allows DRAMs with memory errors to be dynamically removed from the system's memory map. Enhanced MCA recovery enables uncorrectable errors to be detected and recovered from while only the affected application is shut down. You can see an overview of the CX's main RAS features in Figure 3.

Features                                   | Next Gen A2040b       | Next Gen A202b
(Sub-model)                                | A2040b COPT           |
-------------------------------------------+-----------------------+---------------------
Flexibility                                |                       |
  CPU (Core) Capacity on demand [COPT]     | Yes (W/L)             | No
  (reboot NOT required)                    |                       |
  Memory module addition on the fly        | Yes (W/L)             | Yes (W/L)
  I/O card hot plug                        | Yes (W/L)             | Yes (W/L)
Single Node Availability                   |                       |
  Dynamic Core De-allocation               | Yes (L), but sparing  | Yes (L) and Sparing
                                           | not supported         |
  Dynamic Memory Page De-allocation/PFA    | Yes (W/L/V)           | Yes (W/L/V)
  (Predictive Failure Analysis) for ECC    |                       |
  Memory chip data correction              | DDDC                  | DDDC
  Recovery for CPU/Memory failure          | Yes (W/L/V)           | Yes (W/L/V)
  [MCA Recovery]                           |                       |
  Failure log correction and report        | Yes                   | Yes
  HW resource (Core IO/Service             | Yes                   | Yes
  Processor/Clock) Sparing                 |                       |

Support for the above features sometimes depends on OS readiness. W = Windows, L = Standard Linux (RHEL or Oracle UEK) + NEC's RAS Driver, V = VMware

Figure 3: NEC Express5800/A2000 Series RAS Features

Predictable Network Performance with ProgrammableFlow

Providing the raw processing power to support your virtual workloads is the first step toward achieving enterprise-level virtualization performance. However, you still need to be able to deliver that power to your end users. Your network infrastructure is the vital conduit for connecting your virtualized applications to the end users that need them. It's important to realize that the network can be a bottleneck, especially in highly virtualized environments. Software-defined networking (SDN) technologies like NEC's ProgrammableFlow Networking Suite can enable you to more quickly deploy applications as well as control the utilization of your network resources. The end result is an improved ability to meet your SLAs and deliver predictable application performance to your end users.

Designed to support high-density virtualization platforms like the Express5800/A2000 Series Server, NEC's ProgrammableFlow SDN technology ensures that all of your VMs can meet their SLAs by enabling you to create a logical or virtual network that's abstracted from the underlying physical network infrastructure. You can associate your virtual networks with specific applications, eliminating the need to manually create Virtual Local Area Networks (VLANs) when you deploy your applications. These associations also enable you to manage the network bandwidth for your applications using defined policies. NEC's ProgrammableFlow is completely integrated with Microsoft System Center and Windows Server 2012 R2 Hyper-V network virtualization, enabling you to manage your VMs and your virtual networks using SCVMM 2012. When you create virtual networks using SCVMM, NEC's ProgrammableFlow SDN capabilities will handle all the required underlying network configuration. NEC's ProgrammableFlow Networking Suite uses the OpenFlow protocol to automatically provision and manage both the physical switches and Hyper-V's Extensible Switch (also known as the Virtual Switch).

Summary

The days of considering SQL Server to be a workload that can't be virtualized are definitely in the past. Today's high-performance computing platforms like the NEC Express5800/A2000 Series Server provide a level of performance and scalability that's ideal for running virtually all production SQL Server workloads. In addition, the latest generation of hypervisors like Windows Server 2012 R2 Hyper-V and vSphere 5.5 enable you to take full advantage of the host's compute and memory capabilities, allowing you to run the most resource-intensive enterprise workloads. To ensure maximum performance and scalability, you need to start with a hardware platform that provides the essential computing power plus the high memory capacity required to support multiple concurrent workloads. With support for up to 60 cores and 4TB of RAM, the NEC Express5800/A2000 series delivers the performance and scalability required to run the most resource-intensive workloads. Beyond pure scalability, its RAS and COPT features provide mainframe-class reliability for your SQL Server VMs. By following the essential virtualization host and VM guest configuration practices, selecting the right server platform like the NEC Express5800/A2000 Series Server, and taking advantage of SDN, you can ensure that you'll get the maximum performance for your SQL Server VMs.
