December 2016 | Vol. 9 No. 6

VirtualizationReview.com

AND THE WINNERS ARE …

Our picks for the best, most-useful products of the past year.

PLUS > TOP 10 VIRTUALIZATION STORIES OF 2016

> YOUR IoT PLANNING GUIDE

> vSPHERE 6.5: BIG CHANGES

ADVERTORIAL

WHY A DRAAS SOLUTION IS YOUR DRAAS-TICALLY BETTER DEFENSE AGAINST RANSOMWARE

If your production database or mission-critical application gets infected, how long would it take you to recover?

When dealing with today's ransomware threats, time is your worst enemy. The faster you can detect the encryption, the more time you have to take actions to restore your production database or mission-critical application. Simple backup procedures will let you restore your production database, but it will take significantly more time than a modern disaster recovery as a service (DRaaS) solution.

Compare the process of restoring a production database from a cloud backup vs. a modern DRaaS solution:

Cloud backup: Is the DB part of a cluster? Repeat each of these steps for each machine connected to the cluster.

1. Ransomware infection
2. Power down machine
3. Determine date/time of infection
4. Rebuild server (OS and software installation)
5. Reconfigure DB services
6. Restore clean DB files from backup
7. Inject old DB (from restored backup) to rebuilt DB
8. Establish connectivity (DB & input svcs)

CLOUD BACKUP TOTAL DOWNTIME (BEST CASE): 4-5 hours

DRaaS:

1. Ransomware infection
2. Power down machine
3. Determine date/time of infection
4. Log into DRaaS dashboard
5. Boot VM (from last clean backup)

DRAAS TOTAL DOWNTIME: 1-2 minutes

3 Ways a DRaaS Solution Is a DRaaS-tically Better Defense Against Ransomware

1. Dramatically Faster RTOs: DRaaS solutions equip you with the ability to quickly fail over production systems by spinning up VMs or images in the cloud (or a local appliance) in minutes. Restoring your files from a clean backup will take 4-5 hours, and that's if the stars align. That's a big difference -- minutes vs. hours -- and that difference can be catastrophic depending on the business and the transactions feeding your production databases.

2. Quickly Pinpointing the Time of Infection: With a cloud backup, it takes a while to determine if your application has been corrupted. Admins must download the application files from the cloud (based on your most recent backup), rebuild, and then compile the database or application. If the application runs, then you know you have restored a clean copy; otherwise, you need to go back to your next most recent backup and recompile. This can take hours. With DRaaS, admins can boot a production server and immediately verify whether the application is infection-free. If it successfully boots, then you have a clean image. This takes the guessing game out of "Is this a clean backup?"

3. Built-In Orchestration: Restoring applications and production systems from a backup requires some planning and coordination. Leading DRaaS solutions include built-in failover orchestration that lets you create predetermined failover plans for a group of replicated VMs, which can be booted simultaneously or in a specific order.
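To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The recovery durations are the ones quoted above; the cost-per-hour figure is a hypothetical placeholder, not a number from the advertorial.

```python
# Back-of-the-envelope comparison of the two recovery paths described above.
# The durations come from the comparison; the cost-per-hour of downtime is a
# placeholder -- substitute your own figure.

COST_PER_HOUR = 50_000  # hypothetical cost of downtime per hour, in dollars

recovery_paths = {
    "cloud backup restore (best case)": 4.5,       # hours, midpoint of 4-5 hours
    "DRaaS failover":                   1.5 / 60,  # hours, midpoint of 1-2 minutes
}

for name, hours in recovery_paths.items():
    print(f"{name:35s} {hours:6.2f} h  ~${hours * COST_PER_HOUR:,.0f} lost")
```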

Learn More: Infrascale provides the most powerful disaster recovery and cloud backup solutions in the world. Our mission is to eradicate downtime and data loss by equipping EVERY organization with the ability to recover mission-critical data and applications within minutes. www.infrascale.com | +1 877.896.3611

EDITOR'S NOTE


KEITH WARD

What I Learned About Virtualization in 2016

Typically, year-end magazine issues make predictions for the coming year. I thought it might be a nice change of pace to take a look back on what I learned in the past year, rather than focusing on what I think might happen in the next one. So here are the main virtualization eye-openers for me from 2016.

The Internet of Things (IoT) has legs. In journalism, a story with "legs" is something that's more than a one-day story; it lasts for a while (sometimes much longer), with potential to grow much larger. For me, the IoT is that story. It was clear, of course, that embedded technology was growing more popular, and has been for years. But I didn't think it would become as big as it was, as fast as it has. I still don't trust some of the applications—self-driving cars, for instance, terrify me as a concept—but there's no denying that its initial uses have grown into something much larger and more comprehensive. The downside of this networked phenomenon is that it's also more ripe for abuse, meaning that IoT security will become a much bigger story, as well.

VMware figured out its cloud strategy. VMware had already staked out some strong, non-hypervisor-related paths, including software-defined networking and mobile device management. But it wasn't enough to support the company going forward, as everyone knew. That missing piece? It was obviously the cloud, but VMware had been thrashing about in that space for years, trying to find something solid to latch onto. Then, in 2016, it started partnering with public cloud companies—most notably Amazon Web Services—in an offering that would keep vSphere and other VMware-specific technologies front and center, while still giving customers an onramp to a public cloud that they could trust (and no, vCloud Air wasn't, and isn't, that platform). Now, VMware's offering gives customers a familiar environment while providing cloud peace of mind. Truly a win-win-win: for customers, VMware and the public cloud.

Here's to a great 2017! VR

December 2016 | VIRTUALIZATION REVIEW | VOL. 9, NO. 6

COVER STORY
8 2016 Virtualization Review Editor's Choice Awards
A guide to the products our contributing editors find irresistible.

FEATURES
24 Top 10 Virtualization Stories of 2016
VMware gets a new owner, Citrix gets a new leader and the future gets even cloudier. It was a busy news year.
30 The Internet of Things Is Coming. Plan Accordingly.
IoT is here, but isn't yet well developed. Before jumping in, consider carefully what's available now and what your goals are for the future.

COLUMNS
38 Dan's Take: DAN KUSNETZKY
Containers: SOA in Disguise
39 The Cranky Admin: TREVOR POTT
Key Factors in Choosing On-Premises IT vs. Public Cloud
40 Take 5: TOM FENTON
Thoughts on the Container Industry



news • trends • analysis

vSphere 6.5: An Overview
VMware's flagship hypervisor just got a major update. Here's what you need to know.
By Tom Fenton

vSphere 6.5 was announced at VMworld 2016 US, and was officially released on Nov. 15, 2016. This was an evolutionary release of vSphere, but it did include some features that raise the bar for enterprise-grade hypervisors, including:
• vCenter High Availability
• HTML5-based vSphere client
• New security features
• vSphere Integrated Containers
• vSphere Predictive DRS
• vSAN 6.5
• Improvements to VVols
• vSphere Fault Tolerance
• Reduction in vSphere replication RPO
Following is a closer look at these changes.

vCenter High Availability (VCHA)
VCHA made the top of my list of the best new features in vSphere 6.5 due to its simplicity and usefulness. VCHA allows an existing vCenter Server Appliance to be protected by creating a passive one that will take over if the active one goes offline. VCHA is a very powerful tool, but it's also easy to install and configure using the built-in wizard. It took me less than 20 minutes to install, configure and protect my existing vCenter Server. VCHA does have some requirements, and will be most useful to smaller deployments, but overall it provides solid bang for the buck. Only a single vCenter Server license is needed when you protect your vCenter Server Appliance (VCSA) with VCHA. Figure 1 shows the topography of a VCHA deployment.

Figure 1. The vCenter High Availability topography (active node, passive node and witness node, with synchronous DB replication and asynchronous file replication over a private network, Platform Services Controllers behind a load balancer, and a virtual IP address in front of the active node).

HTML5-Based GUI
Years ago, VMware Inc. decreed that the C# client, also known as the native client or the Windows client, would be phased out. With the release of vSphere 6.5, they finally mean it, because the native client does not ship with this vSphere release. It does, however, ship with three Web browser-based clients: the older Flash-based vSphere Web client, the new HTML5-based vSphere client and, in order to connect to an ESXi host directly from a Web browser, the VMware host client. The new vSphere client doesn't have all the same features as the older vSphere Web client, but it does have many of the functions needed for the day-to-day operation of a vSphere environment. VMware claims the following benefits of the new vSphere client:
• Consistent UI built on the new VMware Clarity UI standards that will be adopted across the VMware portfolio.
• Cross-browser and cross-platform application compatibility, being built on HTML5.
• No browser plug-ins to install and manage.
• Integration into vCenter Server 6.5 and full support.
• Full support for Enhanced Linked Mode.
• Extremely positive feedback on performance from users of the Fling.
While using the new vSphere client, I found that it was quicker, the layout was cleaner, and the home screen was arranged in such a way as to bring the most useful items conveniently up front and easily accessible. I also found that I could use it from a Chrome browser running on my Mac and Linux desktops. However, don't plan on it immediately replacing the vSphere Web client, as it's not yet feature-complete and is missing some important functionality aspects.

CONTENT SPONSORED BY VEEAM

SPECIAL PULLOUT SECTION

Backing up Your Data Is as Easy as 3-2-1

If you believe data loss can’t happen to you, you’re in for a rude awakening. Here’s what you need to know. By Trevor Pott

t’s one of my favorite IT aphorisms: “If your data doesn’t exist in at least two places, then it does not exist.” Computers are fragile; I modern data storage even more so. Backups are critical, and that absolutely must include off- site backups. There are, of course, those individuals who will maintain the illusion of “it can’t happen to me,” but those who believe they’re cosmically immune are complicated topics. Practical concerns meet economics, and these to data loss are gambling against inevitability. in turn interact with the steady evolution of technology to present Hard drives die, SSDs reach the end of their complicated and uncertain options. write life, and tape is notorious for quirky Fortunately, the IT industry has had some time to develop good behavior that can lead to a perfectly good tape rules of thumb. not being read by a different drive than the one that wrote the data in the first place (even if the 3-2-1 two drives in question are the same model). The One of the more coherent pieces of advice regarding backups is importance of keeping a copy of your data on referred to in the industry as “3-2-1.” Originally a concept developed more than one physical device is pretty clear: A by the vendor Veeam Software, the advice proved solid enough to be single physical device is vulnerable. Multiple referenced and at least unofficially adopted by the industry. physical devices are somewhat less so. The 3-2-1 philosophy is simple: In order for your data to be safe it By now, most of us have experienced a data- should exist in three locations, on two different types of media, with loss event. At the very least, it’s highly likely one of those copies being off-site. To my knowledge, there is one (and we all know someone who has. Some data is only one) exception to this rule. more important than other data, yet it’s safe to If an organization uses a disaster-proof storage medium, (such as say that—personal or professional—data loss is the fireproof and waterproof ioSafe,) and the organization doesn’t something most of us want to avoid. mind that regaining access to its data may take up to a month after a Assuming that the reality and importance of disaster event, then the “one of those copies being off-site” portion of these facts is accepted, the question of what to the equation can be omitted. That’s a pretty narrow exception space,

SHUTTERSTOCK do about it looms. Backups and disaster recovery however, and 3-2-1 should otherwise be practiced by all organizations.
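To make the rule concrete, here is a minimal sketch in Python of a 3-2-1 check against a backup inventory. The inventory format is hypothetical; the point is simply the three conditions: at least three copies, at least two media types, at least one copy off-site.

```python
# Minimal 3-2-1 check: three copies, two media types, one copy off-site.
# The inventory format is hypothetical; adapt it to however you track backups.

from dataclasses import dataclass

@dataclass
class Copy:
    location: str      # e.g. "office NAS", "colo", "services provider"
    media: str         # e.g. "disk", "tape", "cloud object storage"
    offsite: bool

def satisfies_321(copies: list[Copy]) -> bool:
    enough_copies = len(copies) >= 3
    enough_media = len({c.media for c in copies}) >= 2
    has_offsite = any(c.offsite for c in copies)
    return enough_copies and enough_media and has_offsite

inventory = [
    Copy("production server", "disk", offsite=False),
    Copy("office NAS",        "disk", offsite=False),
    Copy("services provider", "cloud object storage", offsite=True),
]

print("3-2-1 satisfied:", satisfies_321(inventory))  # True for this inventory
```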



If your data doesn’t exist in at least two places, then it does not exist.

Not All Data Is the Same
It's easy to advise companies to back up their data. If you're a backup vendor or a cloud vendor selling off-site storage, there's an added incentive to advise companies to back up all their data, and to back it up as frequently as possible. The real world is a lot more complicated, however, and it's important to pay attention to what needs to be backed up.

Some things, like financial data, everyone needs to back up. You may be in a space where, if the business burns down, you just take the insurance money and retire. The tax man, however, will want to see those financial records, and your business burning down won't be an acceptable excuse.

Similarly, there are workloads—and possibly their associated data—that make sense to back up locally, but for which there is no rationale for off-site backups. Plenty of businesses have workloads related to on-site-only concerns, such as managing sensors or running heavy machinery. While it might make sense to have an initial known good copy of those workloads somewhere off-site to make rebuilding easier, regularly updating those workloads will in many cases be pointless. What good is calibration data when the building has flooded or burned down? At best, you're going to have to recalibrate the unit; at worst, you're going to be buying a whole new machine.

Does backing up the order data for all inbound orders to your manufacturing plant make sense if the plant is offline or destroyed? If you have multiple manufacturing plants that can take up the load, then by all means copy that customer data elsewhere! If you're a smaller shop, however, with only the one manufacturing plant, then having copies of customer order data isn't going to mean much if you have no way of fulfilling those orders.

That not all data is equally important is itself important. Very few organizations can afford to aggressively back up the totality of the data they generate each month off-site. Most organizations rely on jumped-up consumer-class broadband connectivity that's expensive, slow and often has data caps in place. Having to regularly back up only critical data off-site can make proper backup regimens more palatable to smaller organizations, and can help focus systems administrators on creating a sane disaster recovery process.

Locality, Locality, Locality
While what data gets backed up off-site and how frequently are worth considering, where the data ends up is equally important. This is the realm of data locality: ensuring the right data is in the right place, or sometimes simply ensuring it's not in the wrong place.

When data locality is discussed, data sovereignty concerns are top of mind. Europe, for example, has recently had to tear down its Safe Harbour agreement with the United States. The replacement, Privacy Shield, is widely expected to be challenged and torn down after the one-year grace period has expired.

Data sovereignty is only the tip of the data locality iceberg. If off-site backups are part of a disaster recovery solution, having all relevant data where there's compute capacity to engage workloads in a disaster scenario is important. It doesn't help to have copies of the workloads sitting on servers ready to be fired up if the data they're to act upon is on another continent. Similarly, DNS settings have to be considered so that incoming new information can feed into the disaster recovery workloads instead of trying to get to the offline originals.

The physical-copy-plus-courier approach is just dumb.
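Before choosing an off-site method, it helps to estimate what "regularly back up only critical data off-site" means over the kind of consumer-class broadband described above. A minimal sketch, with entirely hypothetical numbers:

```python
# Rough estimate of off-site backup transfer time and data-cap impact.
# All numbers are hypothetical examples; plug in your own.

def transfer_hours(data_gb: float, uplink_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to push data_gb off-site at uplink_mbps, assuming ~80% usable throughput."""
    bits = data_gb * 8 * 1000**3                     # decimal gigabytes to bits
    seconds = bits / (uplink_mbps * 1e6 * efficiency)
    return seconds / 3600

full_set_gb = 4000        # everything generated this month
critical_gb = 300         # just the critical data
uplink_mbps = 25          # "jumped-up consumer-class broadband" uplink
monthly_cap_gb = 1000

for label, gb in [("full data set", full_set_gb), ("critical data only", critical_gb)]:
    print(f"{label:18s}: {transfer_hours(gb, uplink_mbps):7.1f} hours, "
          f"{'exceeds' if gb > monthly_cap_gb else 'fits under'} a {monthly_cap_gb}GB cap")
```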


Off-Site Backup Methods
Off-site backups are accomplished in one of two ways. A physical copy is made and is then removed from the building, or a copy of the data is sent out over a network (usually the Internet) to another location. This other location can be a dedicated backup facility, a box of disks in a colocation facility, one of the major public cloud providers' clouds, or a services provider's network.

The physical-copy-plus-courier approach is just dumb. Humans are fallible, and all known physical storage media are either fragile, picky about temperatures and humidity, or both. Almost every horror story I've heard about backups in my career has boiled down to "someone forgot to rotate the tapes" or "the courier lost the hard drive." Do not do this.

That leaves us with network transmission as the means to accomplish off-site backups. As always, dedicated backup facilities with leased, private lines connecting to headquarters are the gold standard, but they're also outrageously expensive, and thus only for the privileged few.

A box of drives at a colocation facility might seem appealing, but there is still an up-front capital expenditure to acquire this equipment. Additionally, someone from your IT team will have to tend to the care and feeding of these units, whose sole purpose is to exist "just in case." This sort of thing is most efficiently handled in aggregate, making services providers or public cloud providers the optimal solution. Costs can be kept down with economies of scale, along with the fact that a cloud or backup provider doesn't have to maintain 1-to-1 capacity to light up all clients' workloads.

Services Provider Bonus
With all the press and advertising public cloud providers get, it's easy to forget the humble services provider. The managed services providers and value added resellers of yore have become the cloud services providers of today. While smaller than an Amazon or a , smaller services providers offer value the big players will find hard to match.

Location is the services provider's biggest advantage. There are services providers everywhere, and that means organizations can select one in the same legal jurisdiction, bypassing any questions of data sovereignty. Similarly, a nearby small services provider is generally a lot more willing to copy your multiple terabytes of backups to a hard drive and drive it down the road to you than the big players will be.

Hidden Benefits
There are potential hidden economic benefits, as well. Geographically proximate services providers often have connectivity with the major local Internet Services Providers (ISPs). ISPs tend to charge less for "on net" traffic, offering the possibility for lower bandwidth costs to a services provider than to a public cloud provider.

Working closely with your services provider can make devising a backup and disaster recovery solution that's tailored to the unique needs of your organization easier. This can drive down the costs of backups and a well-planned disaster recovery solution significantly.

Regardless of the solutions chosen, back up your data with the 3-2-1 rule in mind. Keep three copies of your data, on two different mediums, and have one of those copies live off-site. Remember: If your data doesn't exist in at least two places, then it simply does not exist.

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.



vSphere 6.5 (cont.)

A list of the features that are yet to be supported in the vSphere client can be found at bit.ly/2gPdHCP. Here's a summary of the clients:
• C# or native client. The old Windows client; now deprecated. This was an application running on Windows.
• vSphere Web client. The old Flash-based Web client. It will be deprecated over time and can be accessed by going to https://<vcenter>/vsphere-client.
• vSphere client. The new HTML5 Web client, which will be the vSphere client moving forward. It can be found at https://<vcenter>/ui.
• VMware host client. The HTML5 client that connects directly to an ESXi host. It's at https://<esxi>/ui.

Virtual Machine Encryption
One of the features that VMware customers have been asking for since the early days is the ability to encrypt virtual machines (VMs); this release includes the ability to do this. VM encryption happens at the hypervisor level and works with all supported guest OSes, as well as with all of VMware's datastore types. To encrypt a VM, set a policy on the VM via the vSphere client (Figure 2).

Figure 2. Virtual machine encryption policy.

vSphere Integrated Containers
VMware knows that containers are hot and that its customers will be using them; and, of course, the company wants customers to use containers in their existing vSphere environment. This capability is known as vSphere Integrated Containers (VIC). VIC extends vSphere capabilities to run container workloads in a vSphere environment. VIC allows containers to be managed using the vSphere client, and is composed of three components:
• Engine, which provides the core container runtime.
• Harbor, an enterprise registry for container images.
• Admiral, a portal for container management by development teams.

vSphere Predictive DRS
A feature getting a lot of buzz is vSphere Predictive Distributed Resource Scheduler (DRS). Being tightly integrated with vRealize Operations (vROps), DRS will take care of resource contention issues before they happen. It accomplishes this by using the historical data kept in vROps to move loads before resource contention takes place. Over time, it becomes more and more accurate at predicting when it needs to move loads around.

Fault Tolerance
Fault Tolerance (FT), which was greatly improved in vSphere 6.0, allows a shadow VM to take over with zero downtime in the event that the primary VM goes offline. With vSphere 6.5, VMware FT is more tightly integrated with DRS, which allows it to make better placement decisions in terms of where the shadow VM will be placed. FT in 6.5 has improved network latency, which makes it suitable for applications that are latency-sensitive.

Encrypted vMotion
When I was a sales engineer at VMware, one of the questions I was asked frequently was if a vMotion was encrypted. I always answered that no, it wasn't, and in most cases it would be overkill to do so, as most vMotions take place over networks dedicated to intra-datacenter communication. But with the advent of long-distance vMotion, it's no longer guaranteed that a vMotion will take place inside the datacenter.

Encrypted vMotion is set on a per-VM basis. Figure 3 shows the three modes for vMotion encryption: Disabled, Opportunistic and Required. Disabled mode will not encrypt a vMotion, Opportunistic mode will encrypt it if both systems support it, and Required mode won't allow a vMotion if encryption is unsupported by either system. One of the reasons a system may not support vMotion encryption is if it's running a previous version of vSphere.

Figure 3. Specifying vMotion encryption for a virtual machine.
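A minimal sketch of the per-VM mode semantics just described. This models the decision logic only; it is not VMware code or an API call.

```python
# Models the three per-VM vMotion encryption modes described above:
# Disabled never encrypts, Opportunistic encrypts only when both hosts support
# it, and Required refuses the vMotion when either host lacks support.
# Illustrative logic only, not a VMware API.

def plan_vmotion(mode: str, source_supports: bool, dest_supports: bool) -> str:
    both = source_supports and dest_supports
    if mode == "Disabled":
        return "migrate unencrypted"
    if mode == "Opportunistic":
        return "migrate encrypted" if both else "migrate unencrypted"
    if mode == "Required":
        return "migrate encrypted" if both else "refuse vMotion"
    raise ValueError(f"unknown mode: {mode}")

# Example: the destination host runs an older vSphere release without support.
for mode in ("Disabled", "Opportunistic", "Required"):
    print(f"{mode:14s} -> {plan_vmotion(mode, source_supports=True, dest_supports=False)}")
```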


vSphere 6.5 (cont.)

vSphere Replication Five-Minute RPO
vSphere replication now supports a five-minute recovery point objective (RPO) for VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVol and vSAN 6.5. If you haven't yet used vSphere replication, it's worth looking into as a way to protect mission-critical VMs.

vSAN 6.5
vSAN 6.5 has two new features and a change in licensing that are worth mentioning: vSAN iSCSI targets and Node Direct Connect. vSAN now has the ability to create and present iSCSI targets. VMware has stated that vSAN iSCSI targets aren't meant to replace your existing iSCSI array, but this capability is intended for smaller use cases. Node Direct Connect is a remote office feature that allows two ESXi servers running vSAN to be connected by using a simple crossover cable instead of a switch. This will lower the cost of implementing vSAN and simplify the installation process. All-Flash hardware is now supported in all vSAN editions, instead of just the higher-priced versions. This is a great boon to non-enterprise licensed customers, and a nod to how much Flash-based storage is taking over the datacenter. (As a side note, VMware has now officially accepted vSAN as an approved acronym for vSphere virtual SAN.)

VVols
The big news with VVols in vSphere 6.5 is that array-based replication is now supported. Many customers have been asking for this ability, and now they have it. It does differ from legacy array-based replication in that it doesn't require that the datastore be specified, because it's on a VM-by-VM basis. Alternatively, for convenience, a group of VMs can be bunched together in a "Replication Group" and managed as a single object.

Configuration Maximums
Every release of vSphere brings an increase in the maximum amount of resources that vSphere can use. Some of these maximums, such as the amount of RAM and logical CPUs per host, are geared to future-proof vSphere for forthcoming hardware, while other maximums, such as the video memory per VM, are designed to address current issues. In Table 1, I went through and pulled out some of the maximums I found most striking and compared them side-by-side with vSphere 6.0. The official VMware list of configuration maximums can be found at bit.ly/2fXQuue.

A Better Hypervisor
VMware continues to invest in its hypervisor and make it more useful and critical to the operation of the datacenter. In addition, vSphere 6.5 will make the datacenter more secure (with vMotion and VM encryption), protected (with vSphere replication and FT), performant (with Predictive DRS and VVols) and future-ready (with VIC and increased configuration maximums). VR

Tom Fenton works in VMware's education department as a senior course developer. He has a wealth of hands-on IT experience gained over the past 20 years in a variety of technologies, with the past 10 years focused on virtualization and storage. He's on Twitter: @vDoppler.

Table 1. Comparing configuration maximums in vSphere 6.0 and vSphere 6.5.
Feature                               vSphere 6.5    vSphere 6.0
RAM per VM                            6,128GB        4,080GB
Video memory per VM                   2GB            512MB
Logical CPUs per host                 576            480
RAM per host                          12TB           6TB
LUNs per server                       512            256
Hosts per vCenter Server              2,000          1,000
Powered-on VMs per vCenter Server     25,000         10,000
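For a quick sense of how far the ceilings in Table 1 moved between releases, here is a small sketch that computes the growth factor for each maximum. The values are transcribed from the table; each pair is expressed in a single unit, so the ratios are unit-independent.

```python
# Growth factors for the configuration maximums in Table 1 (vSphere 6.0 -> 6.5).
# Video memory is expressed in MB for both releases; other rows use the table's unit.

maximums = {  # feature: (vSphere 6.5, vSphere 6.0)
    "RAM per VM (GB)":                  (6128, 4080),
    "Video memory per VM (MB)":         (2048, 512),
    "Logical CPUs per host":            (576, 480),
    "RAM per host (TB)":                (12, 6),
    "LUNs per server":                  (512, 256),
    "Hosts per vCenter Server":         (2000, 1000),
    "Powered-on VMs per vCenter":       (25000, 10000),
}

for feature, (v65, v60) in maximums.items():
    print(f"{feature:30s} {v65:>8} vs {v60:>8}  ({v65 / v60:.1f}x)")
```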

Turbonomic Autonomic Platform
Your business and customers depend on increasingly complex applications. Turbonomic enables you to deliver better applications faster regardless of cloud, infrastructure or architecture.

"There was a huge increase in CapEx and OpEx managing our sprawling virtual infrastructure of 5,000+ VMs; it wasn't sustainable. After implementing Turbonomic, we successfully consolidated our SQL workloads onto the fewest number of hosts possible, lowering TCO dramatically. I'd say by year's end we will have saved north of seven figures in licensing alone."

Mike Campbell, VP Hosted Operations, ACI Worldwide

Turbonomic enables your team to run faster, build quicker and plan smarter with one platform. The software continuously analyzes real-time workload demand and matches it to compute, storage and network resources in a virtualized, private or public cloud environment. With real-time control you'll assure application quality of service and effectively scale on any cloud. Turbonomic enables workload self-management, providing placement, sizing, and provisioning actions across hosts, clusters, datastores, data centers and clouds. The platform delivers the quality of service users expect and prevents queuing, latency, and I/O contention — no chasing alerts! Execute actions manually, automate on a schedule, or automate 24/7. Turbonomic improves application performance by 30% or more. Learn more at https://www.turbonomic.com or call us at (844) 438-8872.

FEATURE | The 2016 Virtualization Review Editor's Choice Awards

By Keith Ward

2016 VIRTUALIZATION REVIEW EDITOR'S CHOICE AWARDS

As we close the book on 2016 and start writing a new one for 2017, it's a good time to reflect on the products we've liked best over the past year. In these pages, you'll find old friends, stalwart standbys and newcomers you may not have even thought about. Our contributors are experts in the fields of virtualization and . They work with and study this stuff on a daily basis, so a product has to be top-notch to make their lists. But note that this isn't a "best of" type of list; it's merely an account of the technologies they rely on to get their jobs done, or maybe products they think are especially cool or noteworthy. There's no criteria in use here, nor any agenda at work. There was no voting, no "popularity contest" mindset. These are simply each writer's individual choices for the most useful or interesting technology they used in 2016. As you can see, it was a fine year to work in virtualization.



DIGITAL DIALOGUE | SPONSORED CONTENT

Highlights from a recent webcast on Modernizing Your Microsoft Business Applications with AWS

AWS SPEEDS MIGRATION OF YOUR WINDOWS BUSINESS APPS TO THE CLOUD

When designing Windows applications to run on Amazon EC2, customers can achieve rapid deployment with support, tools and strategies from AWS and its partners.

The Amazon Web Services cloud computing platform has been optimized for Windows-based workloads. It provides a wide range of scalable services aligned to ever-changing business needs. Amazon EC2 is a web service that provides resizable computing capacity that is used to build and host software systems. "We are focused on developing a secure, reliable, high-performance, familiar, cost effective, extensive and flexible offering for Windows workloads," said Sean Lewis, AWS Solutions Marketing Lead. "Amazon Web Services fully supports Microsoft Windows Server as infrastructure and a platform. Our customers have successfully deployed virtually every Microsoft application available, including Microsoft Exchange, SharePoint, Lync, Dynamics, and Remote Desktop Services."

Meeting customer requirements for Windows workloads
Innovations AWS brings to customers running Windows workloads include:
• Offering the latest and greatest versions of Windows Server and SQL Server
• Maintaining backwards compatibility – all the way back to Windows Server 2003
• Enhancements to BYOL Microsoft software
• Deep management for applications and images with EC2 Run Command
• Hybrid Cloud support, with customers already using AWS in a hybrid cloud configuration with VPC, Direct Connect and Run Command

AWS customers can find depth and breadth of offerings specific to Windows, including:
• 41 different instance types – optimized for CPU, memory, IO, even graphics – with 10 different families
• 31 different pre-defined AMIs for Windows offerings
• 132 different Windows ISV listings on the AWS Marketplace, including key enterprise offerings from F5, Trend Micro, NetApp, Cisco, and many others

Why Microsoft shops turn to AWS
Why are customers running Windows workloads on AWS? They tell the Windows EC2 team that it's the experience and track record for innovation. AWS brings knowledge gained in eight years supporting Windows workloads in the cloud, which is longer than most cloud providers have been in business. AWS provides more diverse service offerings and a deeper bench of capabilities than any other platform. Customers have a large set of options for running Windows workloads on the AWS platform. "Customers tell us they value this because it allows them to optimize the platform for each application they run – both for response time and performance as well as for costs," Lewis said. Customers continue to cite the AWS availability zone (AZ) architecture as a reason they are confident in the Windows team's ability to manage business critical workloads that have high availability requirements.

The right tools for the job
AWS provides tools to help customers get on the platform to launch complex Microsoft deployments faster and more efficiently so they can be productive. The AWS Quick Starts for Microsoft are great tools for customers to help speed deployment time. The Quick Starts are:
• CloudFormation templates, which are JSON-formatted scripts that build the reference architecture automatically and come with predefined AWS configuration optimized for each workload.
• Reference architectures, so customers can visualize the components they need to be running and how they work together.
With thousands of customers buying in, the first questions are: How do I get started with a migration? How do I make migrations more automated? AWS delivers tools like Application Discovery Service to help customers find the answers to those questions. Using AWS Directory Service allows customers to integrate with the existing Active Directory deployment for deeply integrated access management controls across on-prem and in AWS.

"Our customer focus continues to be a differentiator for us." — Sean Lewis, AWS Solutions Marketing Lead

Customer focus creates broad ecosystem
The large and expanding customer base means the skill set of Windows users for the AWS platform is growing rapidly. Support for customers is strengthened by AWS partners and ISVs, so customers can be confident that they will have a total solution. "Our customer focus continues to be a differentiator for us," Lewis explained. "The Trusted Advisor program is one of the best examples where we are saving customers money by actually lowering costs through the use of best practices and architectural guidance." With this level of support, Windows implementations with AWS go way beyond simple testing and development. Customers are running enterprise workloads on the EC2 platform – across the entire suite of Microsoft applications as well as ISV and custom-built applications.

AWS Cloud Adoption Framework
AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best-practices prescribed within the framework help customers build a comprehensive approach to cloud computing across their organization, throughout the IT lifecycle. AWS Cloud Adoption Framework breaks down the complex process of planning into manageable areas of focus.

Areas of focus for project success
Business Perspective focuses on identifying, measuring, and creating business value using technology services. The Components and Activities within the Business Perspective can help you develop a business case for cloud, align business and technology strategy, and support stakeholder engagement.
Platform Perspective focuses on describing the structure and relationship of technology elements and services in complex IT environments. Components and Activities within the Perspective can help you develop conceptual and functional models of your IT environment.
Maturity Perspective focuses on defining the target state of an organization's capabilities, measuring maturity, and optimizing resources. Components within the Maturity Perspective can help assess the organization's maturity level, develop a heat map to prioritize initiatives, and sequence initiatives to develop the roadmap for execution.
People Perspective focuses on organizational capacity, capability, and change management functions required to implement change throughout the organization. Components and Activities in the Perspective assist with defining capability and skill requirements, assessing current organizational state, acquiring necessary skills, and organizational re-alignment.
Process Perspective focuses on managing portfolios, programs and projects to deliver expected business outcomes on time and within budget, while keeping risks at acceptable levels.
Operations Perspective focuses on enabling the ongoing operation of IT environments. Components and Activities guide operating procedures, service management, change management, and recovery.
Security Perspective focuses on helping organizations achieve risk management and compliance goals, with guidance enabling rigorous methods to describe the structure of security and compliance processes, systems, and personnel. Components and Activities assist with assessment, control selection, and compliance validation with DevSecOps principles and automation.
By considering each of these perspectives, determining the current state and creating a time-sensitive roadmap to the achieved target state, consistent and repeatable success on projects associated with cloud adoption can be achieved.

For more information, visit: aws.amazon.com/windows
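The Quick Starts described earlier are delivered as CloudFormation templates. As an illustration, here is a minimal sketch of launching such a stack programmatically with the boto3 SDK; the stack name, template URL and parameter names are hypothetical placeholders rather than a real Quick Start.

```python
# Launch a CloudFormation stack from a Quick Start-style template with boto3.
# The stack name, template URL and parameter names below are hypothetical
# placeholders -- substitute the values from the Quick Start you actually use.

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

response = cloudformation.create_stack(
    StackName="windows-workload-demo",
    TemplateURL="https://s3.amazonaws.com/example-bucket/quickstart-template.json",
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m4.xlarge"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # many templates create IAM resources
)

# Block until the stack finishes creating, then report its identifier.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="windows-workload-demo")
print("Stack created:", response["StackId"])
```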

Trevor Pott

NSX
VMware Inc., bit.ly/nsxvmware
Why I love it: NSX is the future. Love it or hate it, VMware has set the market for software-defined networking (SDN) for the enterprise. More than just networking, NSX is the scaffolding upon which next-generation IT security will be constructed.
What would make it even better: Price and ease of use are the perennial bugbears of VMware, but time and competition will solve this.
The next best product in this category: OpenStack Neutron

HC3
Scale Computing, bit.ly/2fV8upk
Why I love it: Compute and storage together with no fighting; this is the promise of hyper-convergence, and Scale succeeds at it. Scale clusters just work, are relatively inexpensive, and deal with power outages and other unfortunate scenarios quite well.
What would make it even better: Scale needs a self-service portal, a virtual marketplace and a UI that handles 1,000-plus virtual machines (VMs) more efficiently.
The next best product in this category: There are many hyper-converged infrastructure (HCI) competitors in this space, and each has its own charms. Try before you buy to find the right one for you.

5nine Manager
5nine Software Inc., bit.ly/5ninemanager
Why I love it: 5nine Manager makes Hyper-V usable for organizations that don't have the bodies to dedicate specialists to beating Microsoft System Center into shape.
What would make it even better: At this point, 5nine Manager is a mature product. There are no major flaws of which I am aware, and the vendor works hard to patch the product regularly.
The next best product in this category: Microsoft Azure Stack

Remote Desktop Manager (RDM)
Devolutions, bit.ly/1lzY0XU
Why I love it: RDM changed my universe. Once, my world was a series of administrative control panels and connectivity applications, each separate and distinct. After RDM, I connect to the world in well-organized tabbed groups. A single click opens a host of different applications into a collection of servers, allowing me to bring up a suite of connectivity options with no pain or fuss.
What would make it even better: The remote desktop protocol (RDP) client can be a little glitchy, but when it is, there's usually something pretty wrong with the graphics drivers on the system.
The next best product in this category: Royal TSX

Hyperglance
Hyperglance Ltd., bit.ly/hyperglance
Why I love it: Hyperglance makes visualizing and exploring complex infrastructure easy. It makes identifying and diagnosing issues with infrastructure or networks very simple. Nothing else on the market quite compares. Humans are visual animals, and solving problems visually is just easier.
What would make it even better: Hyperglance is an early startup, so there's a lot of room to improve; however, they're doing work that no one else seems to be doing, so it's kind of hard to complain.
The next best product in this category: So far as I know, there are no relevant competitors that aren't still in the experimental stage.

Mirai IoT Botnet
If I told you the URL, I'd probably go to jail
Why I love it: The Mirai IoT botnet has already changed the world; and those were just the proof of concept tests! After successful attacks against high-profile targets, the Mirai botnet has become a fixture of the "dark Web": a cloud provider to ne'er-do-wells that reportedly rivals major legal cloud providers in ease of use. While the Mirai botnet would get recognition on its own for actually finding a use for all the IoT garbage we keep connecting to the Internet, its true benefit is its service as both a sharp stick in the eye to the world's apathetic legislators and an enabler for unlimited "I told you sos" from the tech community.
What would make it even better: As an IoT botnet, Mirai's compute nodes are somewhat limited. It has no virtualization support (either containers or hypervisor-based), requiring users to script their attacks using commands native to the botnet. As the compute nodes are almost all Unix derivatives, however, this isn't a huge impediment, and they generally respond to the same commands.
The next best product in this category: The Nitol Botnet

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.

CONTENT SPONSORED BY STORAGECRAFT

SPECIAL PULLOUT SECTION

7 Reasons to Back up Your SaaS Apps

You may think that because your application lives in the cloud, it’s safe. You would be wrong. By Trevor Pott

The public cloud is not infallible. It's comforting to think that it is; we pay someone else to make a given IT problem simply "go away," but like everything else in business, public cloud providers disclaim all responsibility. Like it or not, everything in the public cloud, including Software-as-a-Service (SaaS) applications, needs to be backed up.

Unfortunately, it's easy to forget or underestimate the importance of backups; to understand that no matter how well designed the computer network or how excellent the systems administrator, unexpected things can and do happen.

Using an IT solution provided via the public cloud is in many ways no different from using an IT solution delivered by your own internal IT team. Yes, most people turn to the public cloud because the delivery of public cloud applications is faster and more automated than that of the average internal IT team; but this doesn't mean those public cloud providers offer adequate backups. Things get even more complicated when talking about SaaS solutions. SaaS is often provided not by the public cloud provider itself, but by a third-party vendor offering a software solution built using the various infrastructure services offered by that public cloud provider.

Depending on the SaaS application in question, it may not even be possible for a public cloud provider to provide adequate backups. Public cloud providers deliver infrastructure, but that infrastructure is often unaware of the applications that run on top of it. A public cloud provider could snapshot or clone the underlying storage on a regular basis, but there are a number of applications where application-aware backups are required.

These application-aware backups are usually important where complex or multiple databases are in play. If you trigger a backup process when only part of a record has been entered into the database, it can cause issues. Once some SaaS applications have reached a certain size, there are enough users using the application at any given time that it becomes likely errors will be introduced.

With that in mind, here are the top seven reasons to back up your SaaS applications, even if the services provider claims that some form of backup is included in the service.

1. Withdrawal of Offering
The SaaS application you're relying on can be withdrawn from availability at any time. Companies are bought by other companies and reorganized all the time. Companies go out of business on a regular basis. And sometimes companies simply decide a given product isn't worth the time and effort.


Corruption of data doesn’t have to be malicious.

That SaaS application you're using today may well not be offered tomorrow. Sometimes when a company goes under it's abrupt, with no warning; when that happens, the customers are left stranded. Oftentimes—usually with larger companies—there's some warning when a given SaaS application is being withdrawn. That doesn't mean, however, that you'll have time to get all of your data out of the application before the cutoff date. Depending on how much data you have in there, it could take quite a long time to pull it all down, or transfer to another cloud. Also bear in mind that you'll be trying to do that at the same time as everyone else using the service.

2. Insider Threats
What if someone with legitimate credentials deletes your stuff? This can and does happen, even if we don't like to think about it. Most of the IT-related security threats that an organization faces aren't due to hackers breaking in from outside, but from either human error or malicious acts perpetrated by authorized individuals.

The human error part is usually something like an individual clicking on the wrong thing and downloading ransomware onto their computer. With the rise of SaaS applications, there are now malware threats that are SaaS application-aware; they'll grab your SaaS credentials and pwn your SaaS app.

Equally likely is a discontented worker trying to sabotage the organization by corrupting the data in a SaaS application, or canceling the subscription altogether. They may even try to uncover and use a colleague's credentials to place the blame on them; attribution for computer crimes can be difficult, even in an insider threat scenario.

3. Oopsie McFumblefingers
Corruption of data doesn't have to be malicious. It can simply be that someone pushes the wrong button, or innocently deletes data they honestly believed was supposed to be deleted. This happens all the time; in fact, it's the most frequent reason backups are accessed.

4. Compliance
Many organizations need to meet the requirements of various certifications, regulations or laws. There may be compliance requirements to have backups of your data, and those compliance issues rarely go away just because you outsourced some of your IT to a SaaS provider. Taking and storing regular backups may be a matter of law.

5. Intermittency
Many SaaS applications aren't entirely in the public cloud. Hybrid SaaS applications have an installable component for those users who do a lot of traveling. Despite what the technorati will have you believe, the world isn't blanketed in Internet connectivity, and once out of visual range of a fancy coffee shop, having the ability to work offline can be important.

Every so often, mobile users can cause conflicts as they synchronize their data with the public cloud. Two people may have named a file the same thing, or entered a customer record with the same ID. Conflicts happen, and end users are always finding new and creative ways to cause issues that application developers haven't thought of.

Trusting any vendor implicitly is not a long-term corporate survival strategy.


Having backups allows application specialists to unpick the errors and potentially save someone a lot of work.

6. Developer Error
Software developers aren't perfect. Sometimes they write applications that have errors. Sometimes these errors eat your data. When this happens, having a backup is usually important.

7. Vendor Control Issues
Vendors aren't saints. Large developers, especially, frequently engage in customer-hostile behavior. One needn't look farther than Oracle's licensing for its software, or Microsoft's removal of its customers' control over their own OS, to realize that many vendors do what benefits the vendor and, quite frequently, their plans don't include what's good for you.
In a SaaS scenario, you're often dealing with two vendors: the developer of the SaaS application and the public cloud provider that stands up the infrastructure upon which the SaaS application is deployed. Sometimes these are one and the same, but just as often they aren't.
Any of the vendors along the chain could at any time decide to radically raise prices, remove functionality, create some form of regulation-breaking lock-in or otherwise engage in customer-hostile behavior. Trusting any vendor implicitly is not a long-term corporate survival strategy.

SaaS Backup Considerations
Hopefully, you've now been convinced that backups are critical, even for SaaS applications. But the journey doesn't end there; not all backups are equal, and there are a few practical items about SaaS backups worth considering.
Perhaps the most important question is how big a pain the restore process will be. Many users perceive SaaS backup and data recovery to be the vendor's responsibility, but the reality is that a lot of what goes on in the public cloud simply doesn't work the way we think. Despite what the technorati will have you believe, the world isn't blanketed in Internet connectivity. In most situations, SaaS providers aren't obligated to help recover your data at all. Or they may be allowed to charge shockingly high fees for doing so, even when restoring from backups you took.
Many SaaS providers make downloading data reasonably easy, usually to make sure that their applications meet some sort of compliance standard. However, restoring that data can be a lot trickier, especially if only a partial restore is required.
Many of the reasons listed here for backing up data boil down to a need to take the data out of the existing SaaS offering and either switch vendors, or load it into an on-premises offering. How doable is this? Does the backup solution you're using provide a translation layer, or store the data it pulls in a format other applications can understand?
Backing up metadata is also important. SaaS application metadata can include everything from configuration settings to tags and labels on data to complicated security settings or important details defining collaboration and inter-user control. Backing up data may not be enough if you use your SaaS application to share data with customers; these access controls might need to be backed up, as well.
Finally, always ask yourself how secure your data is while it's moving in and out of the cloud. Are your backups encrypted, and is that encryption key itself backed up? Where are you backing up to, and how secure is that storage location?
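To make the metadata and encryption questions concrete, here's a minimal, vendor-neutral sketch of what a do-it-yourself SaaS export might look like. The export_records() function is a placeholder standing in for whatever bulk-export API your provider actually offers, and the encryption uses the widely available Python cryptography package; treat this as an illustration of the checklist above, not a supported backup tool.

```python
import json
import time
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography


def export_records():
    """Placeholder for the SaaS vendor's bulk-export API.

    Real providers expose this very differently (REST endpoints,
    scheduled exports, compliance downloads); the point is that the
    export must include metadata, not just row data.
    """
    return [
        {
            "id": 42,
            "data": {"customer": "Acme", "status": "active"},
            # Metadata matters: tags, sharing and access controls are
            # often what makes a restore actually usable.
            "metadata": {"tags": ["priority"], "shared_with": ["partner@acme.example"]},
        }
    ]


def backup_saas(dest_dir: str, key: bytes) -> Path:
    """Serialize records to a portable format, then encrypt at rest."""
    records = export_records()
    plaintext = json.dumps(records, indent=2).encode("utf-8")

    # Encrypt before the data ever sits on backup storage.
    ciphertext = Fernet(key).encrypt(plaintext)

    out = Path(dest_dir) / f"saas-backup-{int(time.time())}.json.enc"
    out.write_bytes(ciphertext)
    return out


if __name__ == "__main__":
    # The key must itself be backed up, and stored somewhere other than
    # alongside the ciphertext, or the encryption buys you nothing.
    key = Fernet.generate_key()
    print("backup written to", backup_saas(".", key))
```

Restoring is the mirror image: decrypt with the separately stored key, then decide whether the exported JSON can be loaded somewhere else, which is exactly the "translation layer" question raised above.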

You Are Still Responsible
Your responsibility to make backups doesn't end just because you use SaaS instead of internal IT. Backups are always critical, and planning could be the difference between corporate survival and miserable failure.

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.


Paul Schnackenburg, MCSE, MCT, MCTS and MCITP, started in IT in the days of DOS and 286 computers. He runs IT consultancy Expert IT Solutions, which is focused on Windows, Hyper-V and Exchange Server solutions.

Now Available: StorageCraft VirtualBoot for vSphere

Fast recovery of backup images directly on vSphere
StorageCraft® VirtualBoot™ for vSphere is the first technology of its kind that enables direct and instant virtualization of StorageCraft ShadowProtect® SPX backup images as guest VMs directly within the VMware ESXi hypervisor for rapid recovery. Developed in joint collaboration and participation with VMware through the vSphere APIs for I/O Filtering (VAIO) Program, this patented technology also lets you backfill the data from the image set and permanently transition to ESXi from any physical device or virtual machine on any source hypervisor while the new VM is running.

Best of VMworld 2016 Gold Award Winner for Data Protection

Download a FREE trial of SPX

www.StorageCraft.com/VIRVSP

support (either containers or hypervisor-based), requiring users to script their attacks using commands native to the botnet. As the compute nodes are almost all Unix derivatives, however, this isn't a huge impediment, and they generally respond to the same commands.
The next best product in this category: The Nitol Botnet

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.

Jon Toigo
[Jon decided to forgo categories. In the holiday spirit, we've decided to not punish him for doing so.—Ed.]
The good news about 2016 is that the year saw the emergence of promising technologies that could really move the ball toward realizing the vision and promise of both cloud computing and virtualization. Innovators like DataCore Software, Acronis Software, StrongBOX Data Solutions, StarWind Software, and ioFABRIC contributed technology that challenged what has become a party line in the software-defined storage (SDS) space and expanded notions of what could be done with cloud storage from the standpoint of agility, resiliency, scalability and cost containment.

Adaptive Parallel I/O Technology
DataCore Software, bit.ly/datacore
In January, DataCore Software provided proof that the central marketing rationale for transitioning shared storage (SAN, NAS and so on) to direct-attached/software-defined kits was inherently bogus. DataCore Adaptive Parallel I/O technology was put to the test on multiple occasions in 2016 by the Storage Performance Council, always with the same result: parallelization of RAW I/O significantly improved the performance of VMs and databases without changing storage topology or storage interconnects. This flew in the face of much of the woo around converged and hyper-converged storage, whose pitchmen attributed slow VM performance to storage I/O latency—especially in shared platforms connected to servers via Fibre Channel links.
While it is true that I like DataCore simply for being an upstart that proved all of the big players in the storage and the virtualization industries to be wrong about slow VM performance being the fault of storage I/O latency, the company has done something even more important. Its work has opened the door to a broader consideration of what functionality should be included in a properly defined SDS stack.
In DataCore's view, SDS should be more than an instantiation on a server of a stack of software services that used to be hosted on an array controller. The SDS stack should also include the virtualization of all storage infrastructure, so that capacity can be allocated independently of hypervisor silos to any workload in the form of logical volumes. And, of course, any decent stack should include RAW I/O acceleration at the north end of the storage I/O bus to support system-wide performance.
DataCore hasn't engendered a lot of love, however, from the storage or hypervisor vendor communities with its demonstration of 5 million IOPS from a commodity Intel server using SAS/SATA and non-NVMe FLASH devices, all connected via Fibre Channel link. But it is well ahead of anyone in this space. IBM may have the capabilities in its SPECTRUM portfolio to catch up, but the company would first need to get a number of product managers of different component technologies to work and play well together.

Acronis Storage
Acronis International GmbH, bit.ly/acronisstorage
Most server admins of the last decade remember Acronis as the source of must-have server administration tools. More recently, it seemed that the company was fighting for a top spot in the very crowded server backup space.
Recently, however, the company released its own SDS stack, Acronis Storage, that caught my attention and made my brain cells jiggly. The reason traces back to problems with SDS architecture since the idea was pushed into the market (again) by VMware a couple years back. Many have bristled at the arbitrary and self-serving nature of VMware's definition of SDS, which much of the industry seemed to adopt whole cloth and without much push back. Simply put, VMware's choices about which storage functionality to instantiate in a server-side software stack and which to leave on the array controller made little technical sense. RAID, for example, was a function left on the controller, probably to support the architecture of VMware's biggest stockholder at the time, EMC. From a technical standpoint, this choice didn't make much sense, because RAID was beginning to trend downward, losing its appeal to consumers in the face of larger storage media and longer rebuild times following a media failure.
With Acronis Storage, smart folks at Acronis were able to rewrite that choice by introducing CloudRAID into its SDS stack. CloudRAID enables a more robust and reliable data protection scheme than one can expect from traditional software or hardware RAID functionality implemented in siloed arrays. Acronis Storage's use of the term RAID may strike some as confusing, especially given that the company has embraced erasure coding as a primary method of data protection. Regardless of the moniker, creating data recoverability via the disassembly and reassembly of objects from distributed piece parts makes more sense than RAID parity striping, given the large sizes of storage media today and the potentially huge delays in recovering petabytes and exabytes of data through traditional RAID recovery methods.
Acronis also introduced some forward-looking technology to support BlockChain directly in its stack. This is very interesting because it reflects an investment in one of the hottest new cloud memes: a systemic distributed ledger and trust infrastructure. IBM and others are also hot for BlockChain, but Acronis has put the technology within reach of the smaller firm (or cloud services provider) that can't afford the price tag for big iron.
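Since the Acronis entry hinges on the difference between parity RAID and erasure-coded object protection, a toy illustration may help. The sketch below is not Acronis CloudRAID; it's a single-parity XOR scheme (conceptually RAID-5-like) that shows why losing one piece of a dispersed object is survivable.

```python
def split_with_parity(blob: bytes, k: int = 4):
    """Split a blob into k equal-size data shards plus one XOR parity shard.

    A toy stand-in for erasure coding: any single missing shard
    (data or parity) can be rebuilt from the remaining k shards.
    """
    shard_len = -(-len(blob) // k)  # ceiling division
    padded = blob.ljust(shard_len * k, b"\0")
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]

    parity = bytes(shard_len)
    for shard in shards:
        parity = bytes(a ^ b for a, b in zip(parity, shard))
    return shards, parity


def rebuild_missing(shards, parity, missing_index: int) -> bytes:
    """Reconstruct one lost data shard by XOR-ing everything that survived."""
    rebuilt = parity
    for i, shard in enumerate(shards):
        if i != missing_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, shard))
    return rebuilt


if __name__ == "__main__":
    shards, parity = split_with_parity(b"petabytes of production data")
    lost = 2
    assert rebuild_missing(shards, parity, lost) == shards[lost]
    print("shard", lost, "rebuilt from the surviving pieces")
```

Production erasure codes such as Reed-Solomon generalize this to several parity shards, which is what makes scattering "distributed piece parts" across nodes or clouds practical at petabyte scale.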

Vicinity
ioFabric Inc., bit.ly/iofabric
Another disruptor in the SDS space is ioFabric, whose Vicinity product is called "software-defined storage 2.0" by its vendor. The primary value of the approach taken by this vendor is the automation of the pooling of storage and the migration of data between pools, whether for load balancing or for hierarchical storage management.
ioFabric's addition of automated pooling and tiering to the SDS functional stack is very interesting, in part because it counters the rather ridiculous idea of "flat and frictionless" storage that has cropped up in some corners of the IT community, courtesy of over-zealous marketing by all-flash array vendors. Flat storage advocates argue that all data movement after initial write produces unwanted latency. So, backup and archive are out, as is hierarchical storage and periodic data migrations between tiers. As an alternative to archive and backup, advocates suggest a strategy of shelter-in-place: when the data on a flash array is no longer being re-referenced, just power down the array, leave it in place and roll out another storage node. Such an architectural model fails to acknowledge the vulnerability of keeping archival or backup data in the same location as active production data, where it will be exposed to common threats and disaster potentials.
Tacitly, Vicinity seems to acknowledge that many hypervisor computing jockeys know next to nothing about storage, so even the basic functions of data migration across storage pools need to be automated. Vicinity does a nice job of this.
My only concern is whether Vicinity is sufficiently robust to provide the underlayment for intelligent and granular data movement—based on the business value of data—across infrastructure. But, at least, the technology provides an elegant starter solution to the knotty problem of managing data across increasingly siloed storage over time.

Virtual Tape Library (VTL)
StarWind Software Inc., bit.ly/starwindvtl
As with ioFabric, StarWind Software is another independent SDS developer. Its virtual SAN competes well in every respect with VMware vSAN and similar products, but StarWind has gone beyond the delivery of generic virtual SAN software (a very crowded field) with the introduction in 2016 of a VTL solution.
Basically, the StarWind VTL is a software-defined virtual tape library created as a VM with an SDS stack. The Web site offers a somewhat predictable business case for VTL, including its potential as a gateway to cloud-based data backup, and the VTL itself is available on some clouds as an "app."
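Stepping back to the pooling-and-tiering idea behind Vicinity, here's a deliberately simplified sketch of the kind of policy such a product automates: demote files that haven't been read recently from a fast pool to a capacity pool. The pool paths and the age threshold are invented for illustration; a real SDS layer works on volumes and objects rather than loose files, and weighs much more than last-access time.

```python
import shutil
import time
from pathlib import Path

# Hypothetical mount points for two storage pools.
FAST_POOL = Path("/pools/nvme")
CAPACITY_POOL = Path("/pools/sata")
DEMOTE_AFTER_DAYS = 30


def demote_cold_files(dry_run: bool = True) -> None:
    """Move files not accessed within the threshold to the capacity pool."""
    cutoff = time.time() - DEMOTE_AFTER_DAYS * 86400
    for path in FAST_POOL.rglob("*"):
        if not path.is_file():
            continue
        if path.stat().st_atime < cutoff:
            target = CAPACITY_POOL / path.relative_to(FAST_POOL)
            print(f"demote {path} -> {target}")
            if not dry_run:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), str(target))


if __name__ == "__main__":
    demote_cold_files(dry_run=True)  # print the plan without moving anything
```

The hard part, as the entry notes, isn't the mechanics of the move; it's choosing policy based on the business value of the data, and doing it without adding latency to production I/O.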

ARE YOU COGNITIVE?

Automate Your Data Management. STOP Letting It Manage You.

Take control of your data with StrongLINK, the ultimate software-based data and storage management solution:

• SEARCH Performs intelligent metadata discovery, harvesting and indexing for data classification and powerful search capabilities.

• MOBILITY Enables a global namespace connecting heterogeneous storage and applications, allowing seamless data migration.

• ORCHESTRATION Simplifies workflows and operations (backup, data protection and easy-to-scale architecture) through automated policies and AI technologies.

Find it. Use it. Share it. Own it.

LEARN MORE @ dternity.solutions/stronglink | | What I like about the idea (which has a long pedigree back of Toigo Partners International, an IT industry watchdog to software VTLs in the mainframe world three decades ago) is and consumer advocacy. He is also the chairman of the Data the convenience it provides for backing up data, especially in Management Institute, which focuses on the development of data branch offi ces and remote offi ces that may lack the staff skills management as a professional discipline. to make a good job of backup. The StarWind VTL can be plugged into clusters of SDS storage, whether converged or hyper-converged, and used as a Brien M. Posey copy target. Then the appliance can replicate its data without creating latency in the production storage environment to Virtualization Manager another VTL, whether in the corporate datacenter or in a cloud. SolarWinds Worldwide LLC, bit.ly/vmmonitoring It’s a cool and purposeful implementation of a data protection service leveraging what’s good about SDS. And thus far, I Why I love it: SolarWinds Virtualization Manager is the haven’t seen a lot of competitors in the market. best tool around for managing heterogeneous virtualized environments. The software works equally well for managing StrongLINK both VMware and Hyper-V. In version 7.0, SolarWinds introduced the concept of StrongBox Data Solutions, bit.ly/stronglink recommendations. The SolarWinds Orion Console displays a All of my previous picks have fallen within the boundaries series of recommendations of things you can do to make your of the marketing term “software-defi ned storage.” SDS is virtualization infrastructure perform better. The software also important insofar as it enables greater agility, elasticity monitors the environment’s performance and generates alerts and resiliency; (in a manner similar to Microsoft System Center Operations plus lower costs in Manager) when there’s an outage, or when performance drops storage platform below a predefi ned threshold. Virtualization Manager also has a and storage service rich reporting engine and can even help with capacity planning. delivery. However, What would make it even better: The software works really while all products well, but sometimes lags behind the environment, presumably discussed thus far because of the way the polling process works. It would be great have made meaningful if the software responded to network events in real time. improvements in the The next best product in this category: Splunk Enterprise defi nition, structure, and function of storage platform and service management, they do not address the key challenge of data management, which is where the battle to cope with the Turbonomic coming zettabyte wave will ultimately be waged. Turbonomic Inc., bit.ly/turbonomic With projections of between 10ZB and 60ZB of new data requiring storage by 2020, more than just capacity pools of Why I love it: Turbonomic (which was previously VMTurbo) fl ash and disk will be required. Also, more than simple load is probably the best solution out there for virtualization management will be required to economize on storage costs. infrastructure automation. The software monitors hosts, Companies and cloud services providers are fi nally going to need VMs and dependencies in an effort to assess application to get real about managing data itself. performance. 
Because the software has such a thorough That’s the main reason I’m so keen on technologies like understanding of the environment, it can add or remove StrongLINK from StrongBox Data Solutions. StrongLINK can resources such as CPU or memory on an as-needed basis. The best be described as cognitive data management technology. software can also automatically scale workloads by spinning It combines a cognitive computing processor with an Internet VM instances up (or down) when necessary. These types of of Things architecture to collect real-time data about data, and about storage infrastructure, interconnects, and services in order to implement policy-based rules for hosting, protecting, and preserving fi le and object data over the life of the data. StrongLINK is a pioneering technology for automating the granular management of data itself, regardless of the characteristics, topology or location of data storage assets. Only IBM, at this point, appears to have the capabilities to develop a competing technology, but only if Big Blue can cajole numerous product managers to work and play well together. For now, if you want to see what can be done with cognitive data management, StrongLINK is ahead of the curve.
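Since "data about data" does the heavy lifting in that description of cognitive data management, here's a generic, toy-scale illustration of the harvest-and-classify idea: walk a file tree, record metadata for each object, and roll the results up so policies can be costed before they run. It isn't StrongLINK code, and the classification rule is invented purely for illustration.

```python
import time
from collections import defaultdict
from pathlib import Path


def classify(suffix: str, modified: float) -> str:
    """Invented classification rule: image/backup formats are 'archive', stale files are 'cold'."""
    if suffix in {".iso", ".vhdx", ".vmdk", ".bak"}:
        return "archive"
    if modified < time.time() - 180 * 86400:
        return "cold"
    return "active"


def build_catalog(root: str):
    """Harvest basic metadata for every file under root."""
    catalog = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            st = path.stat()
            catalog.append({
                "path": str(path),
                "size": st.st_size,
                "modified": st.st_mtime,
                "class": classify(path.suffix.lower(), st.st_mtime),
            })
    return catalog


def summarize(catalog):
    """Roll up capacity per class so a policy can be evaluated before it runs."""
    totals = defaultdict(int)
    for entry in catalog:
        totals[entry["class"]] += entry["size"]
    return dict(totals)


if __name__ == "__main__":
    print(summarize(build_catalog(".")))
```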

Jon Toigo is a 30-year veteran of IT, and the managing partner

16 | December 2016 | Virtualization Review | VirtualizationReview.com operations can be performed for on-premises workloads, or for monitor your workloads—no matter where they’re running— workloads running in the public cloud. will be crucial. And OMS continues to surprise with the depth of insight into network latency, how long it takes to What would make it even better: Nothing install patches or Offi ce 365 activity that can be had using The next best product in this category: Jams Enterprise log data. Recently, the ability to upload custom log data Job Scheduling from any source was added, increasing the scope of OMS. Add to this Azure Backup and Automation and OMS is now the best way to monitor, protect and manage any workload in System Center 2016 VM Manager any cloud. Microsoft, bit.ly/syscentervm The best reason to love OMS, however, is the “free forever” (max 500MB/day, seven-day data retention) tier, Why I love it: VM Manager is hands down the best tool for which means you don’t have to take my word for it; you can managing Microsoft Hyper-V. Of course, this is to be expected, simply sign up, connect a few servers and see how easy it is because VM Manager is a Microsoft product. Microsoft has to gain insights. added quite a bit of new (and very welcome) functionality to the 2016 version. For example, the checkpoint feature has been What would make it even better: The ability to monitor updated to use the Volume Shadow Copy Services, making more third-party workloads would be benefi cial and is checkpoints suitable for production use. Microsoft has also probably coming, given the cloud cadence development pace. added support for Nano Server, and has even made it possible The next best product in this category: Splunk to deploy the SDN stack from a service template. What would make it even better: VM Manager 2016 would be even better if it had better support for managing Windows Server 2016 VMware environments. Microsoft has always provided a Microsoft, bit.ly/winserv16 degree of VMware support, but this support exists more as a Why I love it: After a very long gestation, Windows Server convenience feature rather than as a viable management tool. 2016 is fi nally here. There are many new and improved features; The next best product in this category: The second best top of the list are improvements in Hyper-V such as 24TB tool for managing Hyper-V is PowerShell, with Microsoft’s memory and up to 512 cores with hyperthreading in a host, Hyper-V Manager being third best. Both PowerShell and along with 240 virtual processors and up to 12TB of memory in Hyper-V Manager are natively included with Windows. a VM. On the storage side, there’s now Storage Spaces Direct, which uses internal HDD, SSD or NVMe devices in each host to Brien M. Posey is a seven-time Microsoft MVP with over two create a pool of storage on two to 16 hosts. decades of IT experience. As a freelance writer, Posey has written many thousands of articles and written or contributed to several dozen books on a wide variety of IT topics.

Paul Schnackenburg Operations Management Suite Microsoft, bit.ly/opermgr Why I love it: Operations Management Suite (OMS) has been on a cloud-fueled growth spurt since it was born just 18 months ago. From its humble beginnings in basic log analysis, there are now 25 “solutions”: packs that take log data and give you actionable insight for Active Directory, SQL Server, Containers, Offi ce 365, patch status, antimalware status, Azure Site Recovery and networking, as well as VMware monitoring (plus a lot more). Both Linux and Windows workloads are covered and they can be anywhere; on- premises, in Azure or in another public cloud. If you have System Center Operations Manager (SCOM), OMS will use the SCOM agents to upload the data to your central SCOM server and from there to the cloud. There’s no doubt that as public cloud and hybrid cloud become the new normal, being able to


virtualizationreview.com Creating fi xed-size virtual hard disks and merging checkpoints is lightning fast with the new fi lesystem, ReFS, which has been substantially improved since 2012 and is now the preferred fi le system for VM storage and backup. Backup is completely rewritten with Resilient Change Tracking now a feature of the platform itself. Fast networking, 10 Gbps or 40 Gbps RDMA NICs, can now be used for both storage and VM traffi c simultaneously, minimizing the number of interfaces each host requires. PowerShell Direct allows you to run cmdlets in VMs from the host without having to set up remoting. Upgrading is easier than ever with Rolling Cluster Upgrades where you can gradually add Windows Server 2016 nodes to a 2012 R2 cluster until all nodes are upgraded. And then there are Shielded VMs, which bar fabric administrators from access to VMs, a totally unique feature to Hyper-V. And all this can be done on the new deployment fl avor of Windows Server—Nano server, a headless, streamlined server with minimal disk and memory footprint. Hyper-V was already the most innovative of the main Azure Stack virtualization platforms and this new version dials it up to 11, especially when you include Windows Server/Hyper-V Microsoft, bit.ly/msazurestack Containers. Why I love it: Even though Azure Stack is only in its second What would make it even better: Support for Live Technical Preview (due out mid-2017), it promises to be a Migration to and from Azure would be cool. game-changer for private cloud. Today, public Azure makes no secret of the fact that it runs on Hyper-V, which provides for The next best product in this category: VMware VSphere easy portability of on-premises workloads. But the deployment and management stack around on-premises and public Azure are mostly different. Azure Stack provides a true Azure System Center 2016 VM Manager experience every step of the way, with ARM templates working Microsoft, bit.ly/syscentervm identically across both platforms, which, combined with the fulfi llment of a true hybrid cloud deployment model, large Why I love it: Building on all the new features in Windows enterprises will appreciate. Server 2016, System Center VM Manager lets you bring in bare metal servers and automatically deploy them as Hyper-V What would make it even better: The ability to run it on or storage hosts. This also adds the ability to create hyper- the hardware of your own choosing, instead of only prebuilt converged clusters where the Hyper-V hosts also act as storage OEM systems. hosts using Storage Spaces Direct (S2D). The next best product in this category: Azure Pack VM Manager also manages the Host Guardian Service to provides an “Azure like” experience, but not true code build Guarded Fabric and Shielded VM, as well as the new compatibility. Storage Replica (SR) feature. SR lets you replicate any storage Paul Schnackenburg, MCSE, MCT, MCTS and MCITP, (DAS or SAN) to any other location, either synchronously (zero started in IT in the days of DOS and 286 computers. He runs IT data loss) or asynchronously for longer distances. consultancy Expert IT Solutions, which is focused on Windows, Software-defi ned everything is the newest buzzword, and Hyper-V and Exchange Server solutions. VM Manager doesn’t disappoint with full orchestration of the new Network Controller role, which provides a Software Load Balancer (SLB) and Windows Server Gateway (WSG) for Dan Kusnetzky tenant VPN connectivity. 
VM Manager also manages the new centralized storage Quality of Service policies that can be used to control the storage IOPS hunger of “noisy neighbors.” SANsymphony and Hyper-converged Data Protection Manager lets you back up VMs on S2D Virtual SAN clusters; when combined with the ReFS fi le system, it provides DataCore Software, bit.ly/datacore 30 percent to 40 percent storage space savings, along with 70 percent faster backups. Why I love it: DataCore is a company I’ve tracked for a very long time. The company’s products include ways to enhance What would make it even better: VM Manager 2016 can storage optimization, storage effi ciency and to make the most manage VSphere 5.5 and 5.8, but support for 6.0 would be nice. fl exible use of today’s hyper-converged systems. The next best product in this category: VMware VCenter The technology supports physical storage, virtual

20 | December 2016 | Virtualization Review | VirtualizationReview.com storage or cloud storage in whatever combination fi ts the to enable SQL Server-based application mobility based on customer’s business requirements. The technology supports containers. RackSpace and DH2i have been working together workloads running directly on physical systems, in VMs or to provide a SQL Server Container-as-a-Service offering, in containers. which should be of interest to companies interested in The company’s Parallel I/O technology, by breaking down allowing their workloads to burst out of their internal OS-based storage silos, makes it possible for customers to get network into a RackSpace-hosted cloud environment. higher levels of performance from a server than many would What would make it even better: I’m always surprised believe possible (just look at the benchmark data if you don’t when a client whose infrastructure is based on SQL Server believe me). This, by the way, also means that smaller, less- hasn’t heard of DH2i and its products. Clearly, the company costly server confi gurations can support large workloads. would do better if it could just make the industry aware of what What would make it even better: I can’t think of it’s doing. anything. The next best product in this category: Microsoft offers Next best product in this category: VMware vSAN SQL Server as an Azure service, and also makes it possible for its database to execute on physical or virtual systems. Apollo Cloud Red Hat Containers PROMISE Technology Inc., Red Hat Inc., bit.ly/rhcontainers bit.ly/apollocloud Why I love it: The IT industry really took notice of Why I love it: I’ve had the containers in 2016. Red Hat has integrated containers into opportunity to try out cloud many of its products (Red Hat Linux, Red Hat OpenShift, Red storage services and have Hat Virtualization and so on) and made it production-ready found that some caused me for customers. Red Hat did the hard work of integrating concerns about the terms and containers into its management, security and cloud computing conditions under which the environments. And if a customer really wants to conduct a service is being supplied, how computer science project, Red Hat is there, too. private the data I would store in those cloud services was What would make it even better: Nothing, really. going to be or how secure the What impressed me was the company’s desire to move this data really was. technology from a computer science project into a production- Apollo Cloud is based on the ready tool without removing customer choice. If a customer use of a local storage device. This means that my data stays wants to tweak internal settings or do its own integration and local even though it can be accessed remotely. Furthermore, testing work, Red Hat is happy to help them do it. the device offers easy-to-use local management software, The next best product in this category: Others in the as well. Data can be accessed by systems running Windows, community, including Docker Inc., IBM, HP and SUSE are Linux, macOS, iOS and Android. I found it quite easy to share offering containers. Red Hat just did a better job of putting documents between and among all of these different types together a production-ready computing environment. of systems and between staff and clients. What would make it even better: At this time, special client software must be installed on some systems to overcome limitations built into the OSes that support those environments. 
It would be better if a way to connect to the storage server without having to add an app or driver to each and every device would make this solution far easier to use for midsize companies. Next best product in this category: Nexsan Transporter

DxEnterprise Dan Kusnetzky writes the Dan’s Take column for Virtualization Review magazine. A reformed software engineer and product bit.ly/dxenterprise DH2i, manager, he founded Kusnetzky Group LLC in 2006. Kusnetzky’s Why I love it: DH2i has focused on making it possible literally written the book on virtualization, and often comments for SQL Server-based applications to move freely on cloud computing, mobility and systems software. He has been and transparently among physical, virtual and cloud a business unit manager at a hardware company, and head of environments. Most recently, it announced an approach corporate marketing and strategy at a software company. VR





FEATURE | Top Stories 2016


Top 10 Virtualization Stories of 2016
VMware gets a new owner, Citrix gets a new leader and the future gets even cloudier. It was a busy news year.
By Keith Ward

Virtualization had a banner year in 2016. Lots and lots of stuff happened, including the biggest IT merger in history, and a historic IPO. In fact, the argument could be made that the year was a top-5 all-timer in terms of news that affected the virtualization industry.
There were way more than 10 big news stories this year, but we've narrowed it down to what we feel are the 10 biggest, most important stories. One note: Because VMware Inc. is the dominant company in this space, the vast majority of stories will revolve around it. That will be the case most years, but in 2016 it was especially so.

1. VMware Officially Becomes Part of Dell
The announcement came, appropriately enough, during VMworld 2016: Final approval came through for the mother of all tech mergers. Dell Inc., one of the world's biggest PC makers, purchased EMC Corp., one of the world's biggest storage makers. The price tag was $67 billion, and it was nearly a year from initial announcement to the signing of the papers. That's a long time to wait, but given the scope of the deal, it wasn't surprising.
Over the years, EMC became much more than just a storage company, buying companies like RSA Security, Pivotal, VCE and Virtustream. But the jewel in EMC's crown was VMware (there was some speculation, in fact, that VMware is what Dell really wanted all along).


Along the way, there were plenty of hurdles to get over. board. Elliott Management had been urging Citrix to spin They included Dell getting the financing together to pay off its non-core businesses to improve its bottom line. Citrix for the massive deal, and securing various government seems to have heeded Elliott Management’s advice. approvals. There was even a rumor going around last Tatarinov came to Citrix from Microsoft, where he’d been spring that VMware CEO Pat Gelsinger was planning on for 13 years, including his most recent stint as executive leaving the company after the merger (the rumor circulated VP of the Business Solutions Division. He led that division’s for a few days before dying fairly quickly). transformation to the cloud. Prior to Microsoft, Tatarinov Initially, investors were bearish on the move, and served at CTO at BMC. VMware stock tanked badly. It’s recovered much of its value In his short time at the helm, Tatarinov has worked hard in the ensuing year, but there are still open questions about to reestablish closeness with Microsoft, and taken sharp how quickly the integration of such gigantic companies aim at main competitor VMware in the virtual desktop can be accomplished. Another question: How will its stock infrastructure (VDI) and mobile device management areas. ultimately shape up, given that its new corporate parent is Whether all these changes will turn the company around, a private company? only time will tell. The biggest question of all—at least from VMware’s point of view, now that the deal is in the books—is whether Dell Nutanix Has an IPO maintains the same hands-off attitude toward VMware 3 A hyper-convergence vendor having an IPO? It that EMC did (which is often cited as a key reason that happened in 2016, and it’s a strong indicator of VMware has done so well since its acquisition by EMC). Dell the meteoric rise of that part of the industry, and cloud CEO Michael Dell was asked that question directly during computing in general. a press conference at VMworld, and reiterated that he Hyper-converged appliances combine storage, compute wouldn’t interfere with VMware operations. and networking in a single device, and are undergirding In fact, during a press conference at VMworld 2016, Dell much of the exploding cloud computing infrastructure. said that there had already been a conflict between a Nutanix has long been considered one of the leading partnership VMware wanted to pursue, and the fact that companies in this arena, and its well-received IPO Dell considers the partner company a main rival. In the end, confirmed that perception. Dell let VMware make the deal with the competitor. The past year hasn’t been a good one for tech company That fits with all indications as of the time of this IPOs in general: At the time of Nutanix’s offering, there writing; it appears that VMware is had only been 14 tech IPOs in 2016. It goes back still operating independently, without even further than that, however; as the Web interference from Dell in its day-to-day 2 site Motley Fool wrote (bit.ly/2fe1NxG), “Prior to operations. Nutanix’s IPO, there had been a drought of tech IPOs over the past 18 months or so. Nutanix Kirill Tatarinov had even delayed its IPO by nine months since 2 Is Named New market conditions did not appear receptive. ” Citrix CEO Conditions obviously improved, as shares This may not have made huge waves in of Nutanix closed 130 percent higher on its the virtualization world at the time, but initial day of trading. MarketWatch (on.mktw. 
its long-term effect may be larger than net/2fSje8o) called it “… the best first-day stock any other on this list. Kirill Tatarinov pop for a tech company since Castlight Health was named president and CEO on Jan. 20, 2016, and took Inc. gained 149% in 2014 and the largest overall since Seres over Jan. 25. He replaced Interim CEO Robert Calderoni, who Therapeutics gained 186% in June 2015.” replaced Inc. founder Mark Templeton in July Although Nutanix’s share price dropped some in the days 2015. Templeton helmed Citrix for 14 years. following the IPO, it’s mostly done well since then. Shares Tatarinov took over at a crucial time for Citrix. The were originally offered at $16, and sat at just more than $31 company, many felt, had ventured too far from its at the time of this writing. roots, and needed to focus on what it did best and most Nutanix is in a space with heavy competition, and its profitably—use virtualization to drive remote computing. continuing strong performance bodes well for the future of Citrix divested itself of, among other things, its GoTo hyper-convergence. line of virtual conferencing applications, and its cloud management products, CloudPlatform and CloudPortal VMware Execs Race Business Manager. 4 for the Door The changes stem from mid-2015, when Citrix gave This may or may not be related to the Dell activist investor Elliott Management Corp. a seat on its acquisition, but at the very least, the timing was suspicious.

26 | December 2016 | Virtualization Review | VirtualizationReview.com The Dell purchase was announced in October 2015. Starting asking for, in terms of moving workloads in the cloud, but immediately in 2016, a slew of VMware executives announced making as few changes as possible to their applications.” they were moving on. And they weren’t mid-level managers, but high-powered IBM and VMware Partner leaders. It began with CFO/COO Jonathan Chadwick, who left 6 in the Cloud in January and was replaced by EMC CFO Zane Rowe. AWS wasn’t the only cloud partnership The next month it was Senior VP Martin Casado, who VMware entered into. Just six weeks earlier, at VMworld developed the software-defined networking (SDN) technology 2016, VMware announced a similar agreement with IBM. that eventually became NSX, one of VMware’s most important Through it, customers can extend their VMware workloads products. Casado left to join venture capital firm Andreessen into the IBM cloud. Horowitz as a general partner. Although less impactful, given that Literally the week after Casado’s the IBM cloud presence is considerably departure was announced came perhaps the 6 smaller than that of AWS, it still sent biggest blow of all: COO Carl Eschenbach, rumbles through the industry. At the the No. 2 man at VMware. Like Casado, time of the Aug. 29 announcement, Eschenbach left for a VC. He’s now a partner more than 500 mutual clients had begun at Sequoia Capital. He wasn’t replaced moving their VMware environments by one, single executive; his duties were to IBM Cloud. They included such divided among four others. He’d been with heavy hitters as Marriott International, VMware since 2002. Clarient Global and Monitise. That’s a lot of leadership to lose. Even Moving on-premises workloads to the though the executive exodus has slowed cloud without a major infrastructure since then, losing so much extremely high-level experience overhaul is the goal. To reach it, IBM said it’s training more can’t help but have an impact on a company. than 4,000 service professionals to provide clients with the expertise to extend VMware environments to its cloud. VMware Partners with AWS The news wasn’t unexpected in this case. The original 5 VMware originally hoped its cloud platform, partnership was announced in February 2016 at IBM’s vCloud Air, would compete with the big dogs like annual cloud and mobile technology conference in Las Amazon Web Services (AWS) and Microsoft Azure. To put Vegas, Nevada. Robert LeBlanc, senior vice president of it politely, that didn’t happen. While vCloud Air saw some IBM Cloud, told CIO Journal at the time that 80 percent of success as an on-premises and hybrid cloud solution, it enterprise clients are looking for this kind of hybrid cloud made virtually no dent in the public cloud space. strategy. “We’re moving to the next phase of the cloud. And In the spirit of “if you can’t beat ’em, join ’em,” in October this is going to accelerate that shift.” 2016 VMware joined forces with its former cloud nemesis when it announced “VMware Cloud on AWS.” VMware VMworld 2016 products, including vSAN and NSX, will run on the AWS cloud. 7 Cloud Announcements The service will be optimized to run on dedicated, bare-metal These cloud partnerships wouldn’t be possible AWS infrastructure built specifically for the service. 
without the VMware infrastructure, which came in the form The new partnership means that AWS will be VMware’s of two major cloud initiatives unveiled at VMworld 2016 in primary public cloud infrastructure partner, and VMware August/September: Cross-Cloud Architecture and Cloud will be AWS’s primary private cloud partner, AWS CEO Foundation. Cloud Foundation is VMware’s Infrastructure-as- Andy Jassy said at the time of the announcement. The idea a-Service (IaaS) piece, while Cross-Cloud Architecture helps is that VMware Cloud on AWS will relieve both companies’ migrate and manage all the moving parts, most especially the customers from the need to choose between the two. virtual machines (VMs) that house the workloads. “Our customers faced a binary decision,” Jassy said. Cross-Cloud Architecture, as explained by VMware CTO “Either I use the VMware software and it’s hard to actually (and Virtualization Review columnist) Chris Wolf, uses use AWS for public cloud, or I use AWS and public cloud and a “selective single pane” of glass management, making it I have to leave behind VMware software. Understandably, slightly different than the classic “single pane” of glass. they didn’t like that choice.” “That selective single pane of glass,” he blogged on IDC analyst Al Hilwa sees the partnership as a “win-win” VMware’s news site, Radius (bit.ly/2eGaIvC), “… will for both companies’ customers. “It enables customers to centralize the key functions required to successfully

run their existing applications using the two companies’ operate enterprise IT across multiple clouds, including: SHUTTERSTOCK products and services,” he told Virtualization Review at the •. An SLA/Availability dashboard time. “I think that’s what a lot of VMware customers are •. Policy-based placement and optimization

VirtualizationReview.com | Virtualization Review | December 2016 | 27 FEATURE | Top Stories 2016

•. UI and API-driven cloud service broker a long time. Predictive DRS takes it a step further, by •. Automated discovery integrating with VROps for better planning. VMware •. Centralized multi-cloud cost accounting Chief Technologist for Storage & Availability Duncan •. Workload migration” Epping gave an example of what Predictive DRS can do Cloud Foundation supplies the underlying plumbing. (bit.ly/2fWoPZV): The main pieces are the vSphere hypervisor, vSAN “You can imagine a VM currently using 4GB of memory software-defined storage (SDS) and NSX SDN. Those (demand), however, every day around the same time three components make up VMware’s larger vision of the a SQL Job runs, which makes the memory demand software-defined datacenter (SDDC). spike up to 8GB. This data is available through VROps It’s a big bet—and huge undertaking—by VMware. But now and as such when making placement/balancing the company realized that it needed to be a major player recommendations this predicted resource spike can now in the cloud computing world, and with earlier efforts be taken into consideration.” sputtering, the partnership angle may be the direction Other major upgrades include the vCenter Server it’s been seeking for years. Appliance (VCSA), which is deployed as a VM and uses Linux as the OS. With vSphere 6.5, VCSA becomes the VMware Lays off official default method for deploying a vCenter Server. In 8 800 Employees addition, the new vSphere Client, based on HTML5, rather Earnings reports are usually pretty dull affairs. than outmoded and insecure Flash, was released. Not so for the VMware report of Jan. 26, 2016. In that one, two significant things were learned: No. 1, that CFO/COO Windows Server 2016 Hits Jonathan Chadwick had resigned (see item No. 4). The 10 the Streets second major news to come out was that VMware was Microsoft is a strong second to VMware in the laying off 800 employees. virtualization market; consider that Hyper-V sits side- There was much speculation at by-side with vSphere in many datacenters, and the time that the layoffs, like the Azure is the No. 2 public cloud platform, behind executive departures, were related only AWS, and it’s clear that when a new version to the Dell/EMC announcement. of Windows Server is released, it’s big. Of course, company officials Windows Server 2016 certainly fits that would never confirm something description. The final version came out Oct. like that, but the timing was again 12, and the number of virtualization-related suspicious. upgrades is significant. For instance, it moves “We are restructuring gently into some hyper-convergence areas approximately 800 jobs over the 8 to compete more directly with VMware with course of the first half of 2016 and Storage Spaces Direct (S2D). S2D uses internal are reinvesting the associated storage in nodes to create Storage Spaces savings in field, technical and volumes, and challenges vSAN. Another: there’s support resources associated with our growth products,” a new SDN stack, based on technologies born in Azure Chadwick said during the earnings call in explaining the such as the Network Controller, Software Load Balancer reason behind the layoffs. The cuts represented just less (SLB), Network Function Virtualization (NFV) and RAS than 5 percent of VMware’s total workforce at the time. Gateway for SDN. While these are still immature and aren’t yet ready vSphere 6.5 Is Released to overtake VMware’s SDDC, it gives Azure 9 vSphere 6.5, the latest version of VMware’s clients another option. 
flagship hypervisor, was announced at VMworld Windows Server 2016 also dives deep into containers, Europe 2016 in Barcelona. That surprised some, who another burgeoning area for VMware. Hyper-V expected it to be unveiled at the U.S. show in Las Vegas. containers make their debut; each container runs in a vSphere 6.5 officially hit the streets on Nov. 15, along lightweight, separate VM. They still boot very quickly, with a bunch of related products, including vSAN 6.5, but provide the security isolation of a full VM. vRealize Log Insight 4 and vRealize Operations (VROps) That’s only a taste of the virtualization goodness 6.4. But the star of the show is vSphere 6.5, the key piece offered in Windows Server 2016; check out the of infrastructure underlying datacenters worldwide. September issue of Virtualization Review for a more One of the new technologies featured in vSphere 6.5 complete account. VR is vSphere Predictive DRS. “DRS” stands for “Distributed Resource Scheduler,” and has been part of vSphere for Keith Ward is editor in chief of Virtualization Review. SHUTTERSTOCK
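The Predictive DRS example above is easy to sketch in miniature: instead of placing a VM based on its current demand, place it based on a forecast built from its history (here, the worst case seen at the same hour on previous days). This is a toy model rather than VMware's algorithm, and the demand figures and host capacities are invented.

```python
# Hypothetical history: (hour_of_day, observed_memory_gb) samples per VM.
HISTORY = {
    "sql-vm": [(h, 4.0) for h in range(24)] + [(2, 8.0), (2, 8.2), (2, 7.9)],
    "web-vm": [(h, 2.0) for h in range(24)],
}


def predicted_demand_gb(vm: str, hour: int) -> float:
    """Forecast demand as the worst case seen at this hour historically."""
    samples = [gb for h, gb in HISTORY[vm] if h == hour]
    return max(samples) if samples else 0.0


def pick_host(vm: str, hour: int, host_free_gb: dict) -> str:
    """Place the VM on a host that still fits the predicted spike."""
    need = predicted_demand_gb(vm, hour)
    candidates = {h: free for h, free in host_free_gb.items() if free >= need}
    if not candidates:
        raise RuntimeError(f"no host can absorb the predicted {need:.1f} GB spike")
    # Prefer the host with the most headroom remaining.
    return max(candidates, key=candidates.get)


if __name__ == "__main__":
    hosts = {"esx-01": 6.0, "esx-02": 12.0}
    # At 02:00 the nightly SQL job spikes to about 8 GB, so the current 4 GB
    # demand would mislead a purely reactive scheduler into choosing esx-01.
    print(pick_host("sql-vm", hour=2, host_free_gb=hosts))  # -> esx-02
```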



FEATURE | Internet of Things

WIRED >> The Internet of Things Is Coming. Plan Accordingly.

The concept of the Internet of Things (IoT) is starting to gain a lot of momentum; and, as usual with a hot trend, you're seeing a great deal of hype and hyperbole intermixed with a few factual statements. You see articles, reports, papers, and presentations opining on both the wonders and horrors of this technology in just about every industry journal and event. Nearly every research firm has published its own taxonomy showing how it sees the market break down into submarkets, revenue opportunities and, of course, discussions of how enterprises should go about the business of deploying this technology.

Winners and Losers
Suppliers are jumping in to offer their own products and services, and are hoping that they can set the industry standards and force others to play their game. The name of this game, by the way? "We win and you lose."
This rush to claim territory is extremely similar to what you've seen each and every time a new, interesting use of technology emerges. As usual, the technology is based on things you've seen before, but that won't stop suppliers and research firms from coming up with new and different buzzwords and catch phrases.

IoT is here, but isn't yet well developed. Before jumping in, consider carefully what's available now and what your goals are for the future.
By Dan Kusnetzky

Historical Roots If you focus a bit more tightly, as with previous industry trends, the roots of IoT have been in manufacturing for decades. Numerically controlled manufacturing tools have been chugging away on shop fl oors for quite some time. As computers got smaller, faster and less expensive, they were built into more and more devices and tools. As inexpensive forms of networking were developed, they, too, were adopted and built into devices. Now we live in a world in which very powerful, but tiny, computers are available that can control systems and communicate with the world. They’re cheap, too; many are available for less than $100, and in some cases, even less than $25. Raspberry Pi and computers housed in a USB stick are plentiful. And as networked devices get ever more powerful and less costly, developers have been coming up with new ways to use the technology. Some of these “innovations” appear useful and are likely to be popular. Others appear iffy enough that they’re likely to die early in life. (That, by the way, won’t mean that the idea behind them will die; they just might reincarnate when better, faster, cheaper and more capable technology appears later.)

Security and Privacy
As with many other tools that connect people and things, you're also seeing concerns emerge about privacy and security. You've seen reports of televisions "spying" on consumers by listening to everything said in a room, and then forwarding that data to some unknown server out on a network. The suppliers haven't explained where the recordings of voices heard in the room are being sent, nor do they explain how long that data is kept or how that data is being used. While having televisions respond to voice commands is an interesting extension of the basic functionality of a TV, do customers want everything they say sent to some unknown place in the network for analysis and later use? You've seen similar capabilities added to mobile phones, vehicles and household appliances.

Defining IoT
Let's stop for a moment and define a few terms. As with other emerging industry topics, there are many different definitions of what the "Internet of Things" means. It's very early in the adoption cycle for this technology.


Suppliers, obviously, are doing their best to paint their their products. This makes customization of products difficult offerings in the area of connected devices as early examples and costly. If the manufacturing line was more intelligent, of this trend. They’re also doing their best to present it would be easily possible for one product to be built with themselves as uniquely qualified to lead the charge to a different features than the next one coming down the line. If more connected future. With so many definitions, it’s easy to you add in 3-D printing to the mix, manufacturing lines could understand why many in the industry are confused. be drastically simplified and the longtime promise of just-in- If you examine a collection of these definitions, several time inventory to be finally realized. common threads emerge: •. Retail. Point-of-sale and consumer personal productivity • Devices of all sorts now include intelligence to support devices are increasingly linked together in new ways, and guide their activities. New automobiles have dozens making it far easier to simplify stocking and selling products. of “electronic control modules,” or computers that allow In some cases, inventory can largely take care of itself. the vendor to offer new, interesting features and make it As supplies are consumed, the retail establishment can possible for the vehicles to be more useful, more efficient automatically order more. and offer greater value to consumers. •. Health care. More and more, patients can be Glancing around a typical home, you also see similar “instrumented” by wearing special clothing or carrying changes to appliances, entertainment devices and even devices that monitor, in real time, all vital statistics and things like clocks. Manufacturing has also seen numerically instantly notify caregivers when something unusual is controlled devices in nearly every market and industry; these devices are more and more functional, and some can operate autonomously a great deal of the time. With so many definitions, • Networking capabilities are being added to these devices, it’s easy to understand offering the hope of real-time operational control and custom usage of the device. This makes it possible for operational why many in the industry data to be collected and analyzed, and for enterprises and are confused. individuals to learn more about what they’re doing and how they’re doing it. Devices can speak with one another, making happening. Doctors, nurses, therapists and other support it possible, for example, for a television show to follow an personnel can often instantly communicate with one another; individual as they move from room to room. in addition, detailed records can be kept without burdening •. Devices can be personalized and demonstrate different health care professionals. It’s getting easier for specialists to behavior to different individuals, very much like today’s provide care regardless of their proximity to the patient. mobile phones. Apps that offer new capabilities can be sold •. Vehicles. Vehicles already have become moving networks. separately from the devices themselves, allowing individuals It may be possible for all of the operational characteristics of to fine-tune their own environments. vehicles to be monitored in real time to improve efficiency, •. Suppliers are looking forward to being able to track offer new capabilities and reduce or eliminate the possibility everyone and learn what users are purchasing and rejecting, of minor or catastrophic failure. 
Vehicles also could making it possible to target advertising even more finely. operate autonomously based on their location, the weather, Refrigerators will be able to track what food individuals buy, surrounding conditions and other parameters, making travel and order products when supplies are running low. Health safer, more efficient and less costly. monitors will examine exercise or sleep patterns and report •. Appliances. Appliances are starting to learn consumer to doctors (and possibly insurers, hospitals and suppliers of preferences and adapt themselves to operate more in accord health care products). with individual requirements. This provides the ability to If you consider what the industry is talking about today, monitor consumption of supplies and automatically order it’s clear that IoT is having an impact now, and also changing replacements. what can be imagined in the future, in numerous areas of If we examine all the devices we use on a daily basis, it’s personal and public life, including: obvious that intelligence can be added to many of them, •. Personal productivity. We’re constantly learning about making our lives more convenient in the process. new intelligent devices that suppliers hope we’ll love and While the vision is lovely, how to implement these dreams carry with us all the time. Smartphones and tablets have is still far too complex and device- and application-specific. become standard accessories carried by nearly everyone. Vendors are looking for new ways to connect users to the Emerging Frameworks network. Watches are a recent competitive battleground. Each supplier has its own view of what IoT really means and Who knows what’s next? how the technology should be developed. Unifying standards •. Process control in manufacturing. Manufacturers have are emerging slowly. There are many IT suppliers offering long had to deploy different technology and processes to build tools, development frameworks, communication and

32 | December 2016 | Virtualization Review | VirtualizationReview.com networking tools that are tuned to specific requirements. environments. What human and development languages Suppliers and enterprises have to sift through all of these must they support? Whose development libraries must be offerings to determine if they should use off-the-shelf used? What embedded databases should be selected? Will technology or build their own. these devices be able to support virtualization technology? If Here are a few IoT development frameworks that have so, what standards will be supported? come to the attention of Kusnetzky Group analysts. This is It’s clear that IoT is well established in manufacturing, but by no means a comprehensive or exhaustive list, nor is it in it’s just emerging in other markets and applications. Because any specific order. the requirements of a numerically controlled manufacturing •. Eclipse.org has started an IoT working group that, in the device are very different than what’s likely to be seen words of the organization, “fosters the creation of extensible in a home appliance or a vehicle, cross-platform, cross- services and frameworks that enable IoT applications on top application standards may be slow to emerge. of open APIs.” Many suppliers and academic institutions are using Eclipse as a foundation for their own work. Enterprise Planning Advice •. Kaa is offering its own open source IoT middleware Standards and frameworks are emerging slowly and platform, and is hoping to capture the attention of many differently in each vertical market. Once they emerge, developing communities to build solutions in specific markets. cross-market/cross-platform standards will emerge. •. Many suppliers are starting with the embedded Web Enterprises, however, are unlikely to wait. Many have and Web App services they’ve built into intelligent devices. already begun planning, and in some cases, proof-of- They’re hoping that they’ll be able to stake out a claim by concept projects are well underway. enabling today’s Web developers to build device-specific Here are a few rules of thumb for your enterprise apps and easily deploy them. planning: •. Publish/Subscribe frameworks have been offered by IBM •. First and most important, the enterprise must Corp., Qualcomm Technologies Inc. and others to address how determine what it’s trying to accomplish before selecting a IoT applications can be built and sent to remote systems. set of tools. Just because you have a hammer doesn’t mean •. Resin.IO recently offered a complete development and that everything is a nail. runtime environment for Linux-based devices. Docker, Git, •. Take the time to determine how IoT data will enhance Yocto and other open source technologies are used as part of enterprise operations, profitability and so on. If it isn’t going the Resin.IO framework. to improve the enterprise in some way, it might be better to •. Apple Inc., Cisco Systems Inc., Microsoft, Samsung, and wait while standards emerge. Don’t forget to consider how a host of others are offering frameworks for IoT development IoT data will enhance Big Data projects currently underway. focused on their devices and software. While each offers •. Don’t forget privacy and security regulations and interesting capabilities, developers wanting cross-platform requirements when thinking about customer data collection solutions might be forced to build separate solutions using projects. each of these technologies, or force endpoint users to select •. 
Here’s a big one: How will IoT deployments improve specific devices. customer and staff experience? If jumping into IoT means making customer experience poor, the project is unlikely to Standards succeed. Making customers do the hokey pokey to accomplish While this vision offers many interesting possibilities, their goals isn’t what it’s all about. Jumping on the newest there are few standards in place. Because of that, suppliers technological trend might be fun for the staff, but if it makes have had free reign to select processors, memory, storage, life harder for customers, they’ll just go somewhere else. networking capabilities, OSes, development frameworks and databases. Flash-Flood Warning Because they’re building their own computing environments, While the vision of IoT is enticing, it would be wise for interoperability and compatibility from device to device and enterprises to take their time to decide what they want to do, from supplier to supplier is still questionable Efforts are how it will improve the enterprise and how customers will underway to create standards for machine-to-machine (M2M) benefit before leaping into the river. IoT has the potential of communication, message queuing and security. taking the enterprise where it wants to go, or just causing it A challenge for the emerging IoT field is that integrating to get all washed up. VR into today’s computing environments is going to be a complex, tedious task; today’s computing environments are often built Daniel Kusnetzky, a reformed software engineer and product upon multiple hardware platforms, OSes, development tools, manager, founded Kusnetzky Group LLC in 2006. He’s literally databases, storage and networking standards. written the book on virtualization and often comments on cloud This means developers must consider how their products computing, mobility and systems software. In his spare time, he’s will fit into today’s management, security and development also the managing partner of Lux Sonus LLC, an investment firm.

ROYAL PACIFIC RESORT AT UNIVERSAL ORLANDO • DEC. 5-9

IT Training that Finishes First

TechMentor offers in-depth training for IT Pros, from System and Network Administrators to IT Managers and Directors, giving you the perfect balance of the tools you need today, while preparing you for tomorrow. With zero marketing speak, a strong emphasis on doing more with the technology you already own, and solid coverage of what's just around the corner, you'll win with the week of jam-packed content.

Connect with Live! 360: twitter.com/live360 (@live360) • facebook.com (search "Live 360") • linkedin.com (join the "Live! 360" group!)


LAST CHANCE

Great Conferences. Great Price.

TechMentor Orlando is part of Live! 360, the Ultimate Education Destination. This means you'll have access to five (5) other co-located events at no additional cost:

Whether you are an IT Pro, DBA, or Administrator, you will walk away from this event having expanded your IT skills and the ability to bring more value to your organization, and your career!

Six (6) events and hundreds of sessions to choose from – mix and match sessions to create your own, custom event line-up – it's like no other conference available today!

NEW! REGISTER WITH DISCOUNT CODE L360DEC AND SAVE $300! Must use discount code L360DEC. Scan the QR code to register or for more event details.

TURN THE PAGE FOR MORE EVENT DETAILS

PRODUCED BY TECHMENTOREVENTS.COM

ROYAL PACIFIC RESORT AT UNIVERSAL ORLANDO • DEC. 5-9

Check Out the Additional Sessions for IT Pros at Live! 360

SQL Server Live! features 20+ IT PRO sessions, including:

• Workshop: Performance Tune SQL Server: Query Optimizer, Indexes, the Plan Cache, and Execution Plans - Bradley Ball
• Would You Just Load Already?! Maximizing Your SSIS Data Load - Chris Bell
• Production SQL Server 2016--Lessons from the Field - Joseph D'Antoni
• Performance in 60 Seconds - SQL Tricks Everybody MUST Know - Pinal Dave
• Secrets of SQL Server - Database Worst Practices - Pinal Dave
• Does Your Performance Tuning Need a 12-step program? - Janis Griffin
• Configuring SQL Server for Performance - Like a Microsoft Certified Master - Thomas LaRock
• Workshop: Design and Implement SQL Server HA/DR Hybrid Solutions with Microsoft Azure - Edwin Sarmiento

Office & SharePoint Live! features 14+ IT PRO sessions, including:

• Workshop: Installing and Configuring SharePoint Server 2016 - Vlad Catrinescu
• Optimizing SQL Server for SharePoint - Brian Alderman
• IT Pros Guide to Managing SharePoint Search - Matthew McDermott
• PowerShell for Office 365 - Vlad Catrinescu
• Scripting SharePoint 2016 Tasks with PowerShell - Ben Stegink
• Implementing and Managing Office 365 - Ben Stegink
• Setting up Directory Synchronization for Office 365 - Scott Hoag / Dan Usher
• Workshop: 10 Steps to be Successful with Enterprise Search - Agnes Molnar

SEE THE FULL AGENDA AT LIVE360EVENTS.COM

Featured Live! Speakers

YUNG CHOU JANIS GRIFFIN JASON HELMICK JEFFERY HICKS RICHARD HICKS DON JONES

SAMI LAIHO BRUCE MACKENZIE-LOW MATTHEW MCDERMOTT MARK MINASI AGNES MOLNAR GREG SHIELDS

AGENDA AT-A-GLANCE

Server/Datacenter • Client • DevOps • IT Soft Skills • The Real Cloud • Security

TechMentor Pre-Conference: Sunday, December 4, 2016
4:00 PM - 8:00 PM  Pre-Conference Registration - Royal Pacific Resort Conference Center
6:00 PM - 9:00 PM  Dine-A-Round Dinner @ Universal CityWalk - 6:00pm

TechMentor Pre-Conference Workshops: Monday, December 5, 2016
8:00 AM - 5:00 PM  TMM01 Workshop: Sixty-seven VMware vSphere Tricks That'll Pay for This Conference! - Greg Shields | TMM02 Workshop: Demystifying the Blue Screen of Death - Bring Your Own Laptop Hands-On Lab (BYOL - HOL) - Bruce Mackenzie-Low
5:00 PM - 6:00 PM  EXPO Preview
6:00 PM - 7:00 PM  Live! 360 Keynote: Digital Transformation - Is Your IT Career on Track as Businesses Look to Become More Agile? - David Foote, Co-founder, Chief Analyst and Chief Research Officer, Foote Partners

TechMentor Day 1: Tuesday, December 6, 2016
8:00 AM - 9:00 AM  TechMentor Keynote: Sweet Sixteen, or Just Server 2012R3? A Glance at the Awesome, the Irritating, the Improved and the Expensive in Server 2016 - Mark Minasi, IT Consultant, Author, Speaker, MR&D
9:00 AM - 9:30 AM  Networking Break • Visit the EXPO
9:30 AM - 10:45 AM  TMT01 Secure Access Everywhere! Implementing DirectAccess in Windows Server 2016 - Richard Hicks | TMT02 Linux on Azure for the Microsoft Specialist - Timothy Warner | TMT03 The Absolute Beginner's Guide to Advanced Certificate Services - Greg Shields
11:00 AM - 12:15 PM  TMT04 DirectAccess Troubleshooting Deep Dive - Richard Hicks | TMT05 Container Technology and its Impact on Datacenter and Cloud Management - Neil Peterson | TMT06 Master Camtasia and Build Your Own Training in 75 Minutes or Less - Greg Shields
12:15 PM - 2:00 PM  Lunch • Visit the EXPO
2:00 PM - 3:15 PM  TMT07 Windows as a Service Explained: Really, I've Got to Upgrade Every Year? - Mark Minasi | TMT08 Achieving IT Efficiency & Control Through Automated, Self-Service Provisioning on Hyper-V - Amjad Afanah and Handoyo Sutanto | TMT09 Getting Started with Nano Server - Jeffery Hicks
3:15 PM - 4:15 PM  Networking Break • Visit the EXPO
4:15 PM - 5:30 PM  TMT10 Troubleshooting Client Communications with Wireshark - Timothy Warner | TMT11 Azure Point-to-Site VPN Sucks! Fix It with Windows Server 2016 VPN in the Cloud - Richard Hicks | TMT12 Implementing Hyper-V Failover Clusters in Windows Server 2016 - Bruce Mackenzie-Low
5:30 PM - 7:30 PM  Exhibitor Reception

TechMentor Day 2: Wednesday, December 7, 2016
8:00 AM - 9:15 AM  TMW01 Creating Advanced Functions in PowerShell - Michael Wiley | TMW02 Implementing Azure AD for Hybrid Identity - Timothy Warner | TMW03 Managing Windows 10 Using the MDM Protocol and ConfigMgr - Steven Rachui
9:30 AM - 10:45 AM  TMW04 Creating Class-Based PowerShell Tools - Jeffery Hicks | TMW05 Fully Integrated Azure Resource Manager Deployments - Neil Peterson | TMW06 Building Applications in ConfigMgr: Tips and Tricks - Steven Rachui
10:45 AM - 11:15 AM  Networking Break • Visit the EXPO

11:15 AM - 12:15 PM  Live! 360 Keynote: Mobile - Choosing a Direction for Your Company - John Papa, JohnPapa.net LLC

12:15 PM - 1:45 PM  Birds-of-a-Feather Lunch • Visit the EXPO
1:45 PM - 3:00 PM  TMW07 Creating WPF-Based Graphical PowerShell Tools - Jeffery Hicks | TMW08 Session to be Announced | TMW09 SysInternals Tools: Process Explorer and Process Monitor - Sami Laiho
3:00 PM - 4:00 PM  Networking Break • Visit the EXPO • Expo Raffle
4:00 PM - 5:15 PM  TMW10 Harvesting the Web: Using PowerShell to Scrape Screens, Exploit Web Services, and Save Time - Mark Minasi | TMW11 In-Depth Introduction to Docker - Neil Peterson | TMW12 War Driving: How it Happens, How to Protect Yourself - Dale Meredith
8:00 PM - 10:00 PM  Live! 360 Dessert Luau - Wantilan Pavilion

TechMentor Day 3: Thursday, December 8, 2016
8:00 AM - 9:15 AM  TMH01 PowerShell and Workflow: Magic Together! - Michael Wiley | TMH02 Pen-Testing Like an IT Superhero - Dale Meredith | TMH03 Windows Clusters for Beginners: From Highly Fearful to Highly Reliable in 75 Minutes! - Mark Minasi
9:30 AM - 10:45 AM  TMH04 You're Writing Your PowerShell Functions Wrong! Stop It! - Don Jones | TMH05 Mobile Devices and Security: The Bane of the IT Superhero - Dale Meredith | TMH06 Office Risk Mitigation - J. Peter Bruzzese
11:00 AM - 12:15 PM  TMH07 PowerShell Desired State Configuration (DSC) for the IT Ops Guy - Jason Helmick | TMH08 Understanding Windows 10/2016's Super Security: VSM, Credential Guard, Trustlets and More - Mark Minasi | TMH09 Evolving as an IT Pro - J. Peter Bruzzese
12:15 PM - 1:30 PM  Lunch on the Lanai
1:30 PM - 2:45 PM  TMH10 PowerShell Unplugged: Stump Don - Don Jones | TMH11 Facing Increasing Malware Threats and a Growing Trend of BYOD with a New Approach of PC Security | TMH12 The Labyrinth of Exchange Migration Options - J. Peter Bruzzese

TechMentor Post-Conference Workshops: Friday, December 9, 2016
8:00 AM - 5:00 PM  TMF01 Workshop: Boost Your IT Career, 2017 Edition: The Don and Jason Show - Don Jones and Jason Helmick | TMF02 Workshop: BlackBelt - Windows Security Internals - Sami Laiho
Speakers and sessions subject to change.

Connect with TechMentor! twitter.com/techmentor (@techmentorevent) • facebook.com (search "TechMentor") • linkedin.com (join the "TechMentor" group!) TECHMENTOREVENTS.COM

DAN'S TAKE | By Dan Kusnetzky

Containers: SOA in Disguise

Recently, I've had conversations with several supplier representatives who are proponents of recreating applications or creating new applications using "microservices" that are developed and deployed within containers, a form of OS virtualization and partitioning.

They often point out that this approach can simplify the creation of complex applications, reduce or eliminate errors and make it possible to build new applications more quickly in the future. They seldom, of course, discuss the time it will take to build, test and document all of the necessary microservices.

These conversations remind me of similar conversations with vendor representatives who were pushing the service-oriented architecture (SOA) more than a decade ago.

What Is SOA?
SOA can be described as a structured development style in which applications are decomposed into individual, stand-alone services that can be developed, tested and documented separately. Typically, these services represent a single business function that's self-contained, and can be considered a "black box" once it's finished.

These business functions were developed using well-defined inputs and outputs that could be delivered across the network in a vendor- and platform-neutral way.

What was often forgotten was that network-oriented communications architectures were nearly always slower than inter-process communications architectures used to communicate from one function to another within a monolithic application that executed on a single host. While the promise was most certainly there, many enterprises gave up on this approach because they couldn't get the performance or reliability that they were seeking. They also discovered, much to their chagrin, that SOA was inherently more complex to monitor and manage than the monolithic application it was meant to replace.

Dan's Take: Back to the Future
The industry has a rather short memory, and it forgets what worked before and why. Similar language is being used, but with a new name. It's been changed to microservices, and the underlying architecture has been changed to the use of either virtual machines (VMs) or OS virtualization and partitioning, aka "containers."

Once again, the goal is making it possible for developers to easily stand up a function in a neutral, platform-independent fashion and then deploy it on a single host, multiple hosts or hosts in multiple datacenters.

As one might expect, cloud services providers are very happy to promote this approach because it will make it much simpler in the end for enterprises to re-host all or part of a workload onto a system in one of their datacenters.

This time, however, some remember the challenges imposed by SOA and have worked to address the challenges. I just heard an interesting presentation by Big Switch Networks on how their networking tools can make it possible for developers and administrators to monitor what's going on at the container level, and also see how the entire network is running. I've heard presentations by Red Hat about how its version, OpenShift, can make it easier for developers to put together and manage application components based on Docker containers and the Kubernetes container cluster manager.

Only time will tell if microservices—that is, containers—will work better than SOA on top of Windows, Unix and so on, as was tried in the past. VR
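As a rough illustration of the column's point about network-oriented versus in-process communication, the sketch below wraps one trivial "business function" in an HTTP service and times both ways of calling it. The service path, port and payload are hypothetical, and the absolute numbers depend entirely on the machine; the only takeaway is that the networked call adds serialization and round-trip overhead the in-process call never pays.

# Toy timing comparison: calling a function in-process vs. over HTTP.
# Port, path and payload are hypothetical; numbers vary by machine.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def apply_discount(order_total: float) -> float:
    # The "business function" itself, deliberately trivial.
    return round(order_total * 0.9, 2)

class DiscountHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = json.dumps({"total": apply_discount(body["total"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 8901), DiscountHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 500
start = time.perf_counter()
for _ in range(N):
    apply_discount(100.0)                       # direct, in-process call
local_seconds = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    req = urllib.request.Request(
        "http://127.0.0.1:8901/discount",
        data=json.dumps({"total": 100.0}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()          # the same function as a "service"
remote_seconds = time.perf_counter() - start

print(f"in-process: {local_seconds:.4f}s  over HTTP: {remote_seconds:.4f}s for {N} calls")
server.shutdown()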

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He’s literally written the book on virtualization and often comments on cloud

computing, mobility and systems software.

THE CRANKY ADMIN | By Trevor Pott

Key Factors in Choosing On-Premises IT vs. Public Cloud
Public cloud computing costs more than do-it-yourself datacenters. Except when it doesn't.

The Cost of Flooring
Depending on the level of IT service you're trying to provide, the amount of infrastructure you need just to get off the ground and start running your first workload can be quite high. This floor cost is important. A fairly standard midsize VM will run about $800 per year on Amazon Web Services (AWS). Lighting up your own infrastructure will probably be in the $50,000 range for something I would consider minimally resilient. Because I'm talking real-world numbers here, not mythical marketing mumbo-jumbo, I'm going to throw out the three-year refresh cycle and replace it with a five-year one that small organizations actually use.

On a per-VM basis, standing up public cloud Infrastructure-as-a-Service (IaaS) instances for 24x7 use is egregiously expensive per VM, but the floor cost can't be beat.

If I want to stand up a workable small business network, I need several infrastructure components. I need, at a minimum, a DNS server, a DHCP server, storage and something to run workloads. If I'm planning to expand my business at all before the refresh on that hardware is up, then I'm probably going to want to use virtualization, as it's still the only rational way to spin up and down workloads as needed for on-premises deployments.

The Importance of Multiple Clusters
In theory, I could do DHCP and DNS off of my switching or routing infrastructure, but that doesn't exactly provide high availability, and for core infrastructure components I like high availability. I also would kind of like to have a directory service so that I could have centralized passwords, security and so on. In the real world, this means a Microsoft Active Directory (AD) domain controller (DC).

AD is what the overwhelming majority of businesses use, and for a good reason. And a single DC can host AD, DNS and DHCP, all integrated and easy to use. Toss that on to a virtualization cluster and you can make it highly available, fairly easily.

Of course, once you have a virtualization cluster you need the management infrastructure to manage that cluster. Hard lessons learned have shown that virtualizing the management applications on the cluster they're managing can and does cause problems. That's before you even touch more complicated chicken-and-egg scenarios, such as which should boot first: vSphere Server, NSX or vCloud virtual machines (VMs).

If running VMware, you probably need two virtualization clusters to make a reasonably resilient entry-level on-premises IT infrastructure capable of surviving the most common issues. A vendor like Scale Computing for the virtualization layer can save you this because its management layer is distributed; but then you're using KVM as the hypervisor. In the real world that's not a problem, but there are some independent software vendors who might yell at you for it because they're from the past.

To make the on-premises choice for workloads worth it, you need to be running 13 workloads over a five-year period. Public cloud evangelists might argue that public cloud will go down in price over those five years, because it did so during the period of aggressive market expansion and establishment during the past 10 years (the British might disagree with that assessment; see bit.ly/2g3hyaT).
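For readers who like to see the arithmetic, here is a back-of-the-envelope sketch of where that 13-workload figure comes from, using the column's rough numbers (about $800 per midsize VM per year on AWS against a roughly $50,000 on-premises floor over a five-year cycle). These are the column's estimates, not quotes from any current price list.

# Back-of-the-envelope math behind the "13 workloads" break-even point.
# Dollar figures are the column's rough estimates, not vendor pricing.
import math

AWS_COST_PER_VM_PER_YEAR = 800     # approximate cost of a standard midsize IaaS VM
ONPREM_FLOOR_COST = 50_000         # minimally resilient on-premises build-out
REFRESH_CYCLE_YEARS = 5            # the refresh cycle small shops actually use

def cloud_cost(workloads: int, years: int = REFRESH_CYCLE_YEARS) -> int:
    # Total public cloud spend for running this many 24x7 VMs over the cycle.
    return workloads * AWS_COST_PER_VM_PER_YEAR * years

# Smallest workload count whose cloud bill meets or exceeds the on-premises floor.
break_even = math.ceil(ONPREM_FLOOR_COST / (AWS_COST_PER_VM_PER_YEAR * REFRESH_CYCLE_YEARS))
print(break_even)              # 13
print(cloud_cost(break_even))  # 52000, just past the $50,000 floor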
Predicting the Future
Thirteen actual workloads is a lot, especially considering many small businesses consume basic core workloads such as Software-as-a-Service applications. For young companies, the choice between on-premises and public cloud computing is really one of growth prediction. What will you need, and when? And when does it make sense to invest? Remember either way to factor in all the costs. Determining your cost per VM can be tricky, but even minor differences really add up over the span of a five-year refresh cycle. VR

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.

TAKE FIVE | 5 TIPS AND TRICKS TO TAKE WITH YOU :: By Tom Fenton

Thoughts on the Container Industry
Containers are the hottest trend in IT. KubeCon only confirmed that perception.

The annual Kubernetes conference, KubeCon, which was hosted by the Cloud Native Computing Foundation (CNCF), was held in downtown Seattle on Nov. 8 and 9. Leaders and users of Kubernetes, Docker and cloud-native architecture technology gathered at this event to discuss the current state and the future of these technologies. KubeCon is a chance for those who are currently using or interested in using these technologies to get together with the supplying vendors.

I was able to attend KubeCon this year and thought I'd share my thoughts on the event. I won't be going over technology or announcements made at the event, as I will cover those in other articles; instead, I'll be sharing the five observations and thoughts that stuck out to me most.

TAKE 1: There is a huge amount of interest in containers. This year's KubeCon was double the size of last year's, and sold out months in advance. They're predicting continued growth of the convention, and believe that KubeCon 2017 will be three times as large as this year's. This is proof that people are interested in and excited about container technology, and are eager to learn more.

TAKE 2: The container community is vibrant. Every week there are exciting developments in the container world, including new products and techniques. At KubeCon, some pretty exciting announcements were made, and people were busy chatting about how they're using the technology and about the tools that help them use it most efficiently.

TAKE 3: There are knowledgeable constituents. It's not an exaggeration to say that KubeCon was pretty much a geek-fest. This was definitely not an event for people looking to make the business case for container technology. The attendees were the folks that work and live it every day.

TAKE 4: Industry support is tremendous. Cisco, Google, Red Hat, Box and other companies all sent executives to speak at the event. Additionally, VMware, Intel, Microsoft, Rancher, Huawei, CoreOS and many more big-name companies were sponsors of the event. One thing's for sure: You don't get that kind of commitment from such a range of companies unless they see genuine value and potential in a technology.

TAKE 5: Container people are nice people. Everyone I interacted with at the conference—presenters, vendors and attendees—was great. They tolerated my naivety about the technology, and loved—I mean really, really loved—chatting about the technology and what they were working on, as well as sharing stories about how they're using it. Maybe because it's still a relatively small conference or such a new technology, or maybe because it was a rare sunny day in Seattle, people seemed really happy at KubeCon.

Next year, there will be two KubeCons: the first in Berlin, Germany, at the end of March, and the second in Austin, Texas, the first week in December. If you're going to either, make sure you register early, as they'll sell out quickly.

Tom Fenton works in the VMware Education department as a senior course developer. He has a wealth of hands-on IT experience gained over the past 20 years in a variety of technologies, with the past 10 years focused on virtualization and storage. Before re-joining VMware, Fenton was a senior validation engineer with Taneja Group Inc. He's on Twitter: @vDoppler.

CHANGE THE ECONOMICS OF I.T.

We help more than 90% of Fortune Global 500 companies* solve business challenges, align their IT and business strategies, and prepare for the future of technology.

We're honored that the editors of Virtualization Review magazine chose three Red Hat® products for the magazine's top honor, the Editor's Choice award.

2016 EDITOR'S CHOICE AWARD WINNERS

redhat.com

*Red Hat client data and Fortune Global 500 list, 2015

Copyright © 2016 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, and JBoss are trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.